More than 580 Google employees, including DeepMind researchers, urge Pichai to reject classified Pentagon AI deal



TL;DR

More than 580 Google employees, including more than 20 directors, vice presidents and senior researchers at DeepMind, signed a letter urging CEO Sundar Pichai to reject classified military AI work for the Pentagon. The letter argues that in isolated classified networks, Google cannot monitor how its AI is used, making “trust us” the only barrier against autonomous weapons and mass surveillance. Google’s workforce won the Project Maven fight in 2018, but the company has since removed weapons language from its AI principles, won a share of the $9 billion JWCC cloud contract, deployed Gemini to 3 million Pentagon employees, and is now negotiating classified access under “all legal uses” terms.

More than 580 Google employees, including more than 20 directors, senior managers and vice presidents, signed a letter urging CEO Sundar Pichai to reject classified military AI work for the Pentagon, according to Bloomberg. The letter, whose signatories include senior researchers at Google DeepMind, was sent to Pichai on Monday. “We are Google employees who are deeply concerned about the ongoing negotiations between Google and the US Department of Defense,” it reads. “As people who work in AI, we know that these systems can centralize power and that they make mistakes.” The signatories want Google to reject all classified workloads, arguing that on classified networks isolated from the public Internet, the company would have no ability to monitor or limit how its AI tools are actually used. “Currently, the only way to ensure that Google is not associated with such harms is to reject any classified workloads,” the letter states. “Otherwise, such uses may occur without our knowledge or power to stop them.”

The story

Google employees have fought this battle before. In 2018, about 4,000 workers signed an internal petition and at least 12 resigned over Project Maven, a Pentagon program that used artificial intelligence to detect and analyze objects in drone video feeds. The outcry forced Google to introduce AI principles pledging not to apply its technology to weapons or surveillance, and to let Maven’s contract expire in March 2019. Palantir took over. Maven’s contract was then worth a few million dollars; under Palantir, the program has since grown to $13 billion. The 2018 victory was real, but it was also the last time Google’s workforce successfully limited the company’s defense ambitions. In the years since, Google has systematically rebuilt every bridge the protest burned.

In December 2022, Google won a share of the Pentagon’s $9 billion Joint Warfighting Cloud Capability contract along with Amazon, Microsoft, and Oracle. In February 2025, Google removed the passage from its AI principles that pledged to avoid applications in “weapons or other technologies whose principal purpose or implementation is to cause or directly facilitate injury to people” and in “technologies that gather or use information for surveillance violating internationally accepted norms.” A blog post co-authored by Demis Hassabis, CEO of Google DeepMind, cited “a global competition taking place for AI leadership” as justification. Human Rights Watch and Amnesty International condemned the removal. In December 2025, the Pentagon launched GenAI.mil, a platform powered by Google’s Gemini chatbot, available to all defense personnel. Defense Secretary Pete Hegseth said that “the future of American warfare is here, and it spells AI.” In March 2026, Google deployed Gemini AI agents to the Pentagon’s three-million-person workforce at the unclassified level, with eight pre-built agents for tasks that included summarizing meeting notes, developing budgets, and checking actions against defense strategy.

The negotiation

The classified deal is the next step. Emil Michael, Under Secretary of Defense for Research and Engineering, told Bloomberg in March that the Pentagon would “start with the unclassified because that’s where most of the users are, and then we’ll get to the classified and top secret.” He confirmed that talks were already underway with Google about running Gemini agents in classified cloud infrastructure. In April, The Information reported that negotiations are moving toward “all legal uses” of Google’s AI tools, a phrase far broader than the red lines Anthropic drew before the Pentagon designated it a supply chain risk for refusing to remove restrictions on autonomous weapons and domestic mass surveillance. The Pentagon strongly disputed Anthropic’s characterization and argued that commercial companies should not be able to dictate usage policies during times of war or preparations for war.

OpenAI signed its own deal with the Pentagon hours after Anthropic’s blacklisting, with three red lines of its own: no mass domestic surveillance, no autonomous weapons, and no high-risk automated decisions. But the question Google employees are raising is how any such red lines could be enforced on classified networks. In an air-gapped system, the AI operates on a network that, by design, is disconnected from Google’s infrastructure. Google cannot see what queries are run, what results are generated, or what decisions are made with those results. The “trust us” assurance from Pentagon leaders is the only mechanism preventing uses that would violate whatever red lines the company may negotiate. Sofia Liguori, an AI research engineer at Google DeepMind in the United Kingdom who signed the letter, told Bloomberg that the main response to workers’ concerns has been to encourage the workforce to trust the company’s leadership to sign good contracts. “But everything is very broad,” she said. “Agentic AI is particularly worrying because of the level of independence it can achieve. It’s like handing over a very powerful tool and at the same time giving up any kind of control over its use.”

What’s at stake

The Pentagon’s AI budget tells the story of what the classified deal would fund. The fiscal year 2026 defense budget included $13.4 billion dedicated to AI and autonomy. The fiscal year 2027 request, submitted in April, asks for $54.6 billion for the Defense Autonomous Warfare Group, a 24,000% increase over the previous year, within a total defense budget of $1.5 trillion that represents a 42% year-over-year increase. The Pentagon is already testing humanoid robot soldiers with Foundation Future Industries and has formalized Palantir’s Maven as a core military system with multi-year funding. The scale of military investment in AI has moved from the experimental phase that characterized Project Maven in 2018 to an industrial buildout that treats AI as a core capability of the US military. The classified workloads that Google employees oppose would sit at the center of that buildout.

The organizers of the letter said: “Maven is not over. Workers will continue to organize against the weaponization of Google’s AI technology until the company establishes clear, enforceable lines.” The framing is significant. In 2018, the fight was over a single contract. In 2026, the fight is over whether Google’s entire AI stack (Gemini, DeepMind’s research, the TPU chips that power inference) becomes military infrastructure on classified networks where no one outside the Pentagon can see what it does. The paradox of an administration that blacklists Anthropic while urging banks to adopt its AI illustrates the political environment: companies that resist unrestricted military use are designated supply chain risks, while companies that comply receive contracts worth billions. Google employees are calling on Pichai to reject a deal that the Pentagon has made clear it will punish them for rejecting, at a time when the company has spent three years rebuilding its defense credentials precisely to win that deal.

The gap

The 580 signatures stand out for the seniority of the signatories. More than 20 directors, senior managers and vice presidents have signed, along with senior DeepMind researchers. Two-thirds of the signatories agreed to be named; a third requested anonymity for fear of retaliation. An earlier cross-company letter in February, signed by approximately 800 Google employees and 100 OpenAI employees, expressed support for Anthropic’s stance against unrestricted military use of AI. More than 100 DeepMind employees separately signed an internal letter demanding that DeepMind research or models not be used for weapons development or autonomous targeting. Google chief scientist Jeff Dean wrote on X that “mass surveillance violates the Fourth Amendment and has a chilling effect on free speech.” Internal dissent is not marginal. It extends to the technical leadership that builds the systems the Pentagon wants to deploy.

But the gap between internal dissent and corporate decision-making has widened since 2018. Back then, 4,000 signatures and a dozen resignations were enough to kill a contract worth a few million dollars. In 2026, 580 signatures face a classified AI market worth tens of billions, a Pentagon that has shown it will retaliate against companies that reject its terms, a company that has already removed its own red lines, and a CEO who approved the deployment of Gemini to three million Pentagon employees without, according to the letter’s organizers, discussing concrete usage restrictions with the workforce that built it. Trump has shown openness to a Pentagon agreement with Anthropic if the company drops its restrictions, suggesting that the administration views compliance as the ultimate destination for every AI company, regardless of where it starts. Google employees are asking their company to be the exception. Its track record over the past three years suggests it aims to be the norm.


