What a weekend. Two new wars in Asia don’t qualify as top news.
My first reaction to Hegseth’s conflict with Anthropic was along the lines of: I expected an attempt at quasi-nationalization of AI, but not this soon. And I expected it to look like it was managed by national security professionals. Hegseth doesn’t look like he’s trying to avoid the role of cartoon villain.
On closer inspection, it doesn’t look very much like nationalization. A significant part of what’s going on is bribery. OpenAI’s president donated $25 million to a Trump PAC; Dario, by contrast, supported Harris in 2024, and hasn’t shown signs of shifting his support. The speed with which the Department of War started negotiating with OpenAI suggests that rewarding OpenAI was one of its motivations. If Hegseth wanted to avoid the appearance of corruption, he’d have waited a bit and pretended to shop around. But bribery of this sort currently seems to be legal, and advertising its benefits is likely to be good for business.
On the other hand, his attempts to look like he’s punishing Anthropic are sufficiently clumsy that I’m confused as to whether he wants them to be effective. He has advertised Anthropic as having both the best AI and the most integrity. I’m pretty sure that’s good for Anthropic’s business.
Hegseth’s proposed supply chain risk order is far broader than anything he can plausibly enforce. Polymarket predicts almost no net harm to Anthropic. I’m confused as to what Hegseth expects, and what will happen when his expectations bump up against reality.
Is it plausible that a deal with OpenAI will serve purposes other than discouraging domestic dissent? Sam Altman is presumably persuading Hegseth that OpenAI will be loyal to Trump’s goals. Altman’s track record suggests that he is dramatically less trustworthy than Dario. It sure looks like Hegseth’s position is that the contract with OpenAI would be more favorable to the military. Yet Altman is trying to give different constituencies different impressions of which interpretation of the contract he will follow. Why should we expect the resulting AI to care about the safety of anyone other than Altman?
Does Hegseth believe that the Department of War can verify whether an OpenAI (or Anthropic) AI meets the military’s safety standards? The military will run tests on the AI. But today it’s pretty hard to mislead an AI as to whether it’s being tested versus being used in a real war, and it’s likely to be harder next year. Can OpenAI or Anthropic train an AI to act obedient during tests, yet behave more ethically, or stay loyal to someone else, during an actual war? It’s hard to say.
But not all of Hegseth’s rants are as stupid as critics say. I want to focus on the alleged contradiction between threatening to invoke the Defense Production Act and threatening a supply chain risk order. Anthropic writes:
They have threatened to remove us from their systems if we maintain these safeguards; they have also threatened to designate us a “supply chain risk”—a label reserved for US adversaries, never before applied to an American company—and to invoke the Defense Production Act to force the safeguards’ removal. These latter two threats are inherently contradictory: one labels us a security risk; the other labels Claude as essential to national security.
While implementing both threats simultaneously would presumably involve sending contradictory orders, I see nothing contradictory about making the two threats.
The scariest part of this situation is that there are multiple national security risks from AI.
It’s very plausible that, in the not-too-distant future, having the best AI will be one of the most important factors in military power. That almost justifies using the Defense Production Act, but there are problems with verifying whether the AI the military gets would work as intended.
There’s also a real risk that an AI company could use the AI it has deployed in the military to stage a coup. Remember that Sam Altman has shown more success at handling coups than Trump has. This risk might be mitigated by some very select uses of the supply chain risk order (i.e., something close to the opposite of how Hegseth is using it).
I see nothing that prevents these two risks from becoming important at the same time.
The Trump administration doesn’t take AI seriously enough to help with either of these risks.
The Department of War desperately needs full control over the development of any AI used to control its weapons. Yet it hasn’t been able to hire the kind of employees who could keep up with frontier companies. The recent fireworks will make such hiring harder. And the closer it comes to nationalizing OpenAI, the more likely it is that key employees will leave.
The closest thing I’ve found to a good answer is for the Department of War to use multiple AIs, including at least one open-weight AI and at least one AI developed within the military, with no single AI coming close to controlling half of the forces.
P.S. – Trump has occasionally hired competent people. Read more about this topic from one such person, Dean Ball.