Pentagon signs AI deals with OpenAI, Google, Microsoft, Nvidia and others, excluding Anthropic


The Pentagon has closed artificial intelligence agreements with OpenAI, Google, Microsoft, Amazon, Oracle, Nvidia, SpaceX, and Reflection AI. These agreements expand the use of AI within the Department of Defense to classified environments, supporting activities such as analytics, logistics, and large-scale data processing. The Department of Defense did not include Anthropic in the deals, citing concerns about a supply chain risk following a contract dispute.

Emil Michael, undersecretary of Defense for research and engineering, described the initiative as providing military personnel with artificial intelligence tools to maintain an advantage and achieve superiority in decision-making, according to the Wall Street Journal.

What each AI partner brings to the Pentagon

Microsoft, Amazon, and Oracle provide AI models and the cloud infrastructure necessary to run them within existing secure frameworks. This approach helps avoid creating new classified computing environments from scratch.

Nvidia’s deal focuses on its open source Nemotron models, which are designed to support autonomous agents capable of completing multi-step tasks. Open source models allow the Department of Defense to inspect and modify internal architecture for specific use cases. Nvidia CEO Jensen Huang has stated that transparency in model architecture improves, rather than undermines, security in national security environments.

Included in the effort is Reflection AI, a startup backed by Nvidia and founded by former Google DeepMind researchers. The company has not yet publicly released a model, but it is involved in a government-backed project to develop models tailored to the South Korean market. It is also reportedly seeking financing at a valuation of around $25 billion.

Why Anthropic Was Excluded from Pentagon AI Deals

Anthropic’s Claude models were among the few AI systems accessible in classified Pentagon environments, primarily through Palantir’s Maven platform. After a contract dispute, the Department of Defense called Anthropic a supply chain risk and cut off access to its systems.

During congressional testimony, Defense Secretary Pete Hegseth called Anthropic CEO Dario Amodei an “ideological lunatic.” Anthropic is now challenging the supply chain risk designation in court. Previously, Claude models had been used in military operations, including during the conflict with Iran and in efforts against Venezuelan leader Nicolás Maduro.

What limits the Pentagon and AI companies have placed on use

Several companies involved in the deals have stated that their technologies will not be used for mass surveillance or autonomous weapons systems. The Department of Defense has reiterated that such uses would be illegal and has asked participating companies to trust its oversight mechanisms.

However, no independent mechanism to enforce these restrictions has been publicly outlined.
