The Pentagon quietly moved the chess pieces this week. It cleared eight major tech firms to run their advanced AI models inside the Department of Defense’s classified networks. This is a big change for how America thinks about warfighting, and it deserves a clear-eyed look — with equal parts caution and backbone.
What the Pentagon actually did
The Department of Defense announced agreements to deploy frontier AI capabilities on its highest-security networks, known as IL6 and IL7. The deals are part of the Pentagon's AI Acceleration Strategy, which aims to speed up decision-making, intelligence work, and logistics with advanced machine learning.
Who made the cut — and who didn't
The companies now approved include Google, Microsoft, Amazon Web Services, NVIDIA, OpenAI, SpaceX, Reflection, and Oracle. Noticeably absent was Anthropic, which has been in a public spat with the administration over safety and policy.
Why this matters for warfighting and national security
In the battlespace, AI can shave milliseconds off a decision, and milliseconds can separate a successful mission from catastrophic failure. It can sort data, spot patterns, and help commanders see the battlefield faster. But speed brings hard questions. Machines make choices in microseconds; humans react far slower. Who signs off on a strike when an algorithm says "go"? The announcement mentions human oversight provisions, but the details still matter. We can't hand life-and-death calls to black-box models without clear rules and testing.
Risks we can’t pretend aren’t there
There are real security risks here: model integrity, data leakage, adversary manipulation, and misidentification of targets. The tech giants are capable, but commercial models were not built for weapons systems. That gap must be closed with rigorous validation, red-team testing, and clear accountability. If an AI mislabels a target, someone in the chain must be responsible, not an anonymous algorithm. And if a company refuses to comply with safety checks, it shouldn't get a pass just because its dashboard is pretty.
How to get this right — and fast
We should cheer the administration when it moves to modernize our military. But cheering isn't the same as blind trust. Congress must provide oversight. Independent evaluation and stress testing must be required. Human-in-the-loop rules should be ironclad, not fuzzy. We should also prepare countermeasures, because adversaries will pursue similar tech. America must lead in both capability and rules of engagement; otherwise we'll have speed without control, and that's a dangerous combination.
In short: good for getting ahead, but don't confuse speed with safety. The Pentagon's move into classified AI heads in the right direction, so long as leaders keep humans squarely in charge and demand world-class safeguards. If we do that, AI will be a force multiplier. If we don't, it will be an expensive, fast-moving mess, and our enemies will be watching closely.

