Sam Altman wonders: Could the government nationalize artificial general intelligence?
“It has seemed to me for a long time it might be better if building AGI were a government project,” Sam Altman publicly mused this past Saturday evening.
The future was on everyone’s mind as the OpenAI CEO answered questions about OpenAI’s new contract with the United States Department of Defense. But Altman also speculated on the future shape of the AI industry itself, and even on the possibility of the government “nationalizing” private AI companies into a public project, admitting more than once that he’s wondered what might happen next. “I obviously don’t know,” Altman said, though he added, “I have thought about it, of course…” He hedged that “It doesn’t seem super likely on the current trajectory.”
“That said, I do think a close partnership between governments and the companies building this technology is super important.”
Could powerful AI tools one day slip from the hands of private companies into the control of the U.S. government? Fortune magazine’s AI editor points out that “many other breakthroughs with big strategic implications — from the Manhattan Project to the space race to early efforts to develop AI — were government-funded and largely government-directed.” Fortune also notes that last week the Defense Department threatened Anthropic with the Defense Production Act, which allows the president to designate “critical and strategic” goods for which businesses must accept the government’s contracts. Fortune speculates this would’ve been “a sort of soft nationalization of Anthropic’s production pipeline”.
Altman acknowledged Saturday that he’d felt the threat of attempted nationalization “behind a lot of the questions” he’d received on X (formerly known as Twitter).
But if hard conversations about the future have begun, this week brought signs that everyone is carefully weighing their role, from the companies building the software stacks to the developers using their tools. How exactly will this AI build-out be handled, and how should AI companies be working with the government?
In a sprawling ask-me-anything session on X that included other members of OpenAI’s leadership, one Missouri-based developer broached an AGI-government scenario with OpenAI’s Head of National Security Partnerships, Katherine Mulligan. If OpenAI built an AGI, something that even passed its own Turing test for AGI, would its government contracts compel it to grant the DoD access?
“No,” Mulligan answered. For now, “We control which models we deploy…”
Feelings made known
That crucial control over the models took center stage Friday, when IT workers building AI tools decided to make their voices heard on Anthropic’s showdown with the DoD. Some 100 OpenAI employees joined 856 Google employees in signing an online letter titled “We Will Not Be Divided” that urges their employers to refuse any use of their models for domestic mass surveillance or for autonomous killing without human oversight.
And thousands of developers using AI tools realized they could make their own feelings known simply by voting with their phones, uninstalling OpenAI’s ChatGPT app while installing Anthropic’s in a show of support, ultimately propelling Claude to the #1 spot in Apple’s App Store.

On Wednesday, CNN even reported that someone had scrawled messages on the sidewalk outside OpenAI’s office in San Francisco, asking “Where are your redlines?” and “What are the safeguards?”
But this week also served as a reminder that Anthropic’s software is already being used by the U.S. military for other objectives, including target identification, battle simulation, and intelligence assessments.
“Commands around the world, including U.S. Central Command in the Middle East, use Anthropic’s Claude AI tool,” The Wall Street Journal reported Saturday.

So it was also interesting to see how Altman characterized the government’s view of today’s private AI companies. Speaking of the Defense Department, Altman wrote on X that “Our industry tells them ‘The technology we are building is going to be the high order bit in geopolitical conflict. China is rushing ahead. You are very behind.’
“And then we say ‘But we won’t help you, and we think you are kind of evil.’ I don’t think I’d react great in that situation.”
Even as some users of these tools started taking sides, the Defense Department seemed to be doing some damage control this week, announcing that its policy and operational communities would convene a new working group with leaders from frontier AI labs and cloud providers (according to an announcement from OpenAI).
Monday, Axios reported Defense Department officials had been worried Anthropic’s very public rejection of their contract last week could “poison the well for future engagement with AI companies.”

Even as OpenAI leaders were insisting Saturday that their new contract would restrict the Pentagon to only “lawful purposes”, skeptical users were also contributing “added context” suggesting a loophole could allow (lawful) mass surveillance or (lawful) autonomous weapons systems…
So Monday, OpenAI posted that the Defense Department had even agreed to update its contract to state explicitly that:
- OpenAI’s system “shall not be intentionally used for domestic surveillance of U.S. persons and nationals.”
- OpenAI’s system “will not be used to independently direct autonomous weapons in any case where law, regulation, or Department policy requires human control, nor will it be used to assume other high-stakes decisions that require approval by a human decisionmaker under the same authorities.”
Surveilling non-domestic persons
How would ChatGPT users feel knowing the software was also being used to surveil non-U.S. persons in the United States? Maybe they’d feel better knowing those concerns are shared by the executive closest to the software’s development.

“I think it is very important that society thinks through the consequences of this…” Altman posted.
But Altman added firmly that OpenAI would never do mass domestic surveillance, even if the government said it was legal, “because it violates the constitution… What would we do if there were a constitutional amendment that made it legal? Maybe I would quit my job…”
“I am terrified of a world where AI companies act like they have more power than the government. I would also be terrified of a world where our government decided mass domestic surveillance was ok. I don’t know how I’d come to work every day if that were the state of the country/Constitution.”

For developers who care about privacy, Altman’s Q&A also brought an assurance that your ChatGPT data won’t be accessed by the government. Asked whether OpenAI would keep user data secure, and whether the DoD would be able to check anyone’s unflagged messages, Altman responded, “Yes [the data is secure]. They will absolutely not be able to do that.”
And for people still concerned about how the military would use the software, OpenAI programmer Boaz Barak was also answering questions on X, sharing a reminder about OpenAI’s years of experience embedding “red lines” in all its models “for other high consequence risks… such as bioweaponization and cyberabuse (you can see our system cards for a lot more detail).”
Despite his closeness to OpenAI’s product, Barak gave X users a glimpse of his own fears for the future. “[I]f it was up to me, we would just wait a bit on deploying AI in the national security sector, and cut our teeth in the commercial one,” Barak posted. “Let us figure out safety and alignment in the open…”

Tool users and the broader public may also have a role to play, Barak ultimately suggested, since someone will have to urge lawmakers to act. “AI poses unique risks to our freedoms that can’t be left to individual agencies and companies. We desperately need regulation and legislation to ensure our freedoms.”
Setting a standard?
OpenAI tried to offer specifics, articulating what may become standards for IT workers tasked with building trust with sensitive new military clients. Mulligan explained on LinkedIn that “Deployment architecture matters more than contract language.” (For example, “By limiting our deployment to cloud API, we can ensure that our models cannot be integrated directly into weapons systems, sensors, or other operational hardware.”) And OpenAI’s forward-deployed engineers provide real “visibility” into how the system is used, a kind of “trust but verify” model, along with the ability to make adjustments in the field.
“If our team sees that our models aren’t refusing queries they should, or there’s more operational risk than we expected, our contract allows us to make modifications at our discretion. This gives us far more influence over outcomes (and insight into possible abuse) than a static contract provision ever could,” Mulligan writes in the LinkedIn post.
Another user put this question to Altman: “Will OpenAI commit to publishing every future red line change with a public explanation and a mandatory notice period before it takes effect, the way regulated industries handle material policy amendments?”
“Yes, this seems like a very good idea. I will talk to the team,” was his response.
In fact, maybe concerns about that “all lawful uses” loophole show the folly of relying solely on an executive’s lawfulness or a contract’s usage policy, Mulligan suggested on LinkedIn. “Any responsible deployment of AI in classified environments should involve layered safeguards, including a prudent safety stack, limits on deployment architecture, and the direct involvement of AI experts in consequential AI use cases. Those are exactly the layered safeguards we negotiated in our deal.”

Or, as OpenAI programmer Boaz Barak posted on X, “I think looking at this as a lawyer is the wrong way to go about it. If the only protection you have in place is a usage policy, then you’ve already lost.”
Taking “layered safeguards” into consideration, Mulligan posted on LinkedIn, “We think the deal we made has more guardrails than any previous agreement for classified AI deployments, including Anthropic’s.”
And on X, Mulligan argued that OpenAI’s agreement “is materially better than where the entire industry’s conversation about classified deployments was a week ago”, arguing that its terms can “help the entire industry establish new norms about government use of AI for military purposes.”

The whole discussion showed that the community of users has a role to play: sometimes through its choice of software, and at other times by engaging in the thoughtful conversations now underway.
As the event came to a close, Altman posted one last appreciative tweet. “I am on the whole very grateful for the level of reasonable and good-faith engagement here. It was not what I expected.”