The United States military has taken a defining step towards becoming what it calls an "AI-first fighting force." This month, the Pentagon announced agreements with eight of America's most powerful technology companies to deploy artificial intelligence directly onto its most classified military networks.
The agreements include Google, Microsoft, OpenAI, SpaceX, Nvidia, Amazon Web Services, Oracle and a startup called Reflection.
The deals, confirmed by the Defense Department, mark a watershed moment in the integration of AI into modern warfare. They also expose a deepening divide over who controls the technology, how far it can go, and what happens when ethics and national security collide.
Here are five things to know:
What the Pentagon actually signed
The Pentagon said the eight companies' AI systems are now approved for networks classified at Impact Level 6, which handles secret data, and Impact Level 7, a designation covering the most highly classified military systems.
According to the Associated Press, the technology will help the military reduce the time it takes to identify and strike targets, while also aiding in weapons maintenance and supply chain logistics.
The Pentagon said military personnel are already using AI through its official platform, GenAI.mil, with over 1.3 million Defense Department users on the system. The announcement accelerates what the Pentagon describes as its push to "build an architecture that prevents AI vendor lock" by diversifying its AI suppliers across both open-source and proprietary models.
In short, the US military is not testing AI; it is deploying it at scale in some of the most sensitive environments in the world.
The company that said ‘no’
One major AI company was absent from the list: Anthropic, which refused to join over two ethical objections.
First, it opposes using its Claude AI for mass domestic surveillance, arguing that AI-powered monitoring of Americans' movements, browsing habits and associations is incompatible with democratic values.
Second, it refuses to power fully autonomous weapons that remove humans entirely from targeting decisions, saying today's AI is simply not reliable enough for such life-or-death calls.
A dispute erupted after Anthropic's refusal, and the Pentagon subsequently designated the company a supply chain risk, the first time such a label had been applied to an American company.
Anthropic filed two separate lawsuits against the US administration, asking federal judges in San Francisco and Washington to overturn the order. A federal judge later blocked the government's effort, calling the designation likely "pretextual."
Despite the legal battle, signs of possible resolution have emerged. Trump's chief of staff Susie Wiles met with Anthropic CEO Dario Amodei at the White House, after which Trump told CNBC that a deal was "possible," saying "they're very smart, and I think they can be of great use."
The Pentagon-Anthropic standoff has exposed the fragility of the tech sector's rhetoric around "safe and ethical AI", raising the uncomfortable question of whether companies' ethical commitments are genuine principles or merely tools for reputation management.
What will the companies do?
The eight companies represent the full spectrum of American AI power. OpenAI brings its ChatGPT models, already deployed in some Pentagon environments since March. Google contributes its Gemini models, which were rolled out on GenAI.mil in April.
Microsoft and Amazon Web Services provide the cloud infrastructure backbone. Nvidia supplies its open-source Nemotron models, designed to enable AI agents to execute tasks autonomously. SpaceX qualifies as an AI vendor following its merger with Elon Musk's xAI, giving it access to the Grok family of models.
Reflection, the only newcomer, is a startup backed by Nvidia and run by former Google DeepMind researchers.
The Pentagon said the companies' tools will "streamline data synthesis, elevate situational understanding, and augment warfighter decision-making in complex operational environments."
Oracle was added to the list hours after the initial announcement, bringing the total to eight.
Ethics alarm: Who is watching?
The deals have triggered immediate alarm from civil liberties groups and AI experts. Questions about human oversight, transparency and the risk of deploying AI in high-stakes military settings remain largely unresolved.
The Pentagon demanded that AI providers agree their models can be used for "any lawful use," effectively barring companies from placing their own limits on how the military deploys their technology. Civil liberties groups, however, warned that this broad requirement leaves the door open to the kind of autonomous weapons targeting and domestic mass surveillance that Anthropic refused to authorise.
Helen Toner, interim executive director at Georgetown University's Center for Security and Emerging Technology, told the AP that while AI can help summarise information and scan surveillance feeds, questions about human involvement, risk and training are still being worked out.
The Pentagon's announcement did not specify financial terms, detail how the AI tools would be overseen, or clarify what safeguards would prevent misuse in the field.

New arms race in AI warfare
The Pentagon's move does not exist in a vacuum. China has been aggressively developing its own military AI capabilities, and the race to dominate AI-powered warfare is now firmly under way between the world's two superpowers.
The Pentagon said its growing AI capabilities will "give warfighters the tools they need to act with confidence and safeguard the nation against any threat," with the military already cutting many tasks "from months to days."
Analysts warn, however, that speed and capability come with serious risks, including miscalculation, uncontrolled escalation and the erosion of human judgement in life-or-death decisions.
The Anthropic lawsuits, still working their way through the courts, may ultimately define the legal boundaries of AI in warfare. For now, the machines are already inside the Pentagon's most classified rooms, and the rules of engagement are still being written.