OpenAI robotics chief resigns over Pentagon deal


Caitlin Kalinowski spent 16 months building OpenAI’s physical AI program. On Saturday, she said the company moved too quickly on something too important.


The week that began with Anthropic being blacklisted by the Pentagon and ended with OpenAI taking its contract has now claimed OpenAI’s top hardware executive.

Caitlin Kalinowski, who joined OpenAI in November 2024 to lead its consumer hardware and robotics division, announced her resignation on Saturday on X. Her statement was brief, direct, and more candid than anything OpenAI itself has said about the deal.

“AI has an important role in national security,” she wrote. “But surveillance of Americans without judicial oversight and lethal autonomy without human authorization are lines that deserved more deliberation than they received.”


In a later post, she was more precise about the nature of her complaint. “It is, above all, a governance concern,” she wrote. “These are too important to rush deals or announcements.”

Kalinowski was careful to frame her departure in personal terms. “It was about principles, not people,” she wrote. “I have deep respect for Sam and the team.”

This last note carries some weight: Sam Altman himself has acknowledged that the agreement with the Pentagon was “definitely rushed” and that its implementation produced a significant negative reaction.

What Kalinowski’s resignation adds to that admission is a name and a title: the most senior person at OpenAI whose job it was to bring AI into physical systems has decided that the process by which it will now enter weapons systems and surveillance infrastructure was not good enough.

What did the deal entail?

The sequence of events leading up to this point unfolded over about a week. Anthropic, which had been the only AI company authorized to operate on the Pentagon’s classified networks since a $200 million contract awarded in July 2025, spent several weeks in tense negotiations with the Department of Defense over the conditions of continued use.

Anthropic’s position was that its models should not be deployed for mass domestic surveillance or fully autonomous weapons. The Pentagon, under the direction of Defense Secretary Pete Hegseth, insisted on language that would allow use “for all lawful purposes,” without specific exceptions.

On February 28, as negotiations broke down, President Trump ordered all federal agencies to stop using Anthropic’s technology and denounced the company as radically woke on Truth Social.

Hegseth formally designated Anthropic as a national security supply chain risk, a classification previously reserved for foreign adversaries and requiring Department of Defense suppliers and contractors to certify that they do not use Anthropic models.

Hours later, Altman posted on X that OpenAI had reached its own agreement to deploy its models on the Pentagon’s classified network.

OpenAI’s stated position is that its deal includes the same core protections that Anthropic sought: no mass domestic surveillance, no autonomous weapons.

The company published a blog post outlining its approach and arguing that its cloud-only deployment architecture, its retained security stack, and its contractual provisions — anchored in existing U.S. law rather than tailored prohibitions — make its deal stronger than any previous classified AI deployment, including Anthropic’s.

What Kalinowski’s departure means for OpenAI

Kalinowski’s career before OpenAI was unusual in its breadth. She spent nearly six years at Apple as a technical lead on the Mac Pro and MacBook Air programs, including the original unibody MacBook Pro, before moving to Meta’s Oculus division, where she led virtual reality hardware for more than nine years.

Her last role at Meta was leading Project Nazare, later called Orion, the augmented reality glasses initiative that Meta unveiled as a prototype in September 2024 and described as the most advanced AR glasses ever made.

She joined OpenAI that November.

During her 16 months at OpenAI, Kalinowski built what the company describes as its physical AI program, including a San Francisco lab that employs about 100 data collectors who train a robotic arm on menial tasks.

Her departure leaves that effort without its most experienced hardware leader at a time when OpenAI has staked considerable ambition on moving beyond software.

OpenAI confirmed her resignation on Saturday, saying in a statement: “We believe our agreement with the Pentagon creates a viable path for responsible uses of AI in national security, while making our red lines clear: no domestic surveillance or autonomous weapons.

“We recognize that people have strong opinions on these issues and we will continue to engage in discussions with employees, government, civil society and communities around the world.”

The bigger picture

The consequences of OpenAI’s deal with the Pentagon have not been limited to internal dissent. ChatGPT uninstalls reportedly increased by 295% after the announcement, and Anthropic’s Claude rose to the number one position in the US App Store, displacing ChatGPT. As of Saturday afternoon, the two apps remained in first and second place, respectively.

What the resignation of the company’s robotics chief on Saturday confirms is that the costs of the deal for OpenAI are still being tallied. Altman wanted to defuse the confrontation between the government and the AI industry. He may yet have done so. Whether the price of that de-escalation, in talent, in trust, and in the specific question of who was right about the safety barriers, was worth paying is a question that will take longer to answer.
