Anthropic filed two affidavits in federal court in California on Friday afternoon, rejecting the Pentagon’s claim that the AI company poses an “unacceptable risk to national security” and arguing that the government’s case is based on technical misunderstandings and claims that were never actually raised during the months of negotiations that preceded the dispute.
The statements were filed along with Anthropic’s response brief in its lawsuit against the Department of Defense and come ahead of a hearing next Tuesday, March 24, before Judge Rita Lin in San Francisco.
The dispute dates back to late February, when President Trump and Defense Secretary Pete Hegseth publicly stated they were cutting ties with Anthropic after the company refused to allow unrestricted military use of its artificial intelligence technology.
The two people who filed the statements are Sarah Heck, Anthropic’s chief policy officer, and Thiyagu Ramasamy, the company’s public sector director.
Heck is a former National Security Council official who worked in the White House during the Obama administration before moving to Stripe and then Anthropic, where she leads the company’s government relations and policy work. She was personally present at the Feb. 24 meeting where CEO Dario Amodei sat down with Defense Secretary Hegseth and Pentagon Undersecretary Emil Michael.
In her statement, Heck targets what she describes as a central falsehood in the government’s filings: that Anthropic demanded some form of approval authority over military operations. That claim, she says, is simply not true. “At no time during Anthropic’s negotiations with the Department did I or any other Anthropic employee state that the company wanted that type of position,” she wrote.
She also says that the Pentagon’s concern about Anthropic disabling or altering its technology mid-operation was never raised during negotiations. Instead, she says, it first appeared in the government’s court filings, which gave Anthropic no chance to respond.
Another detail from Heck’s statement that is sure to raise eyebrows: on March 4 (the day after the Pentagon formally finalized its supply chain risk designation against Anthropic), Undersecretary Michael emailed Amodei to say that the two sides were “very close” on the two issues the administration now cites as evidence that Anthropic is a national security threat: its positions on autonomous weapons and mass surveillance of Americans.
The email, which Heck attaches as an exhibit to her statement, is worth reading alongside what Michael said publicly in the days that followed. On March 5, Amodei released a statement saying the company had been having “productive conversations” with the Pentagon. The next day, Michael posted on X that “there is no active War Department negotiation with Anthropic.” A week later, he told CNBC there was “no chance” of resuming talks.
Heck’s point seems to be: if Anthropic’s stance on those two issues is what makes it a national security threat, why did the Pentagon official himself say the two sides were nearly aligned on those issues right after the designation was finalized? (She stops short of saying the government used the designation as a bargaining chip, but the timeline she lays out leaves the question hanging.)
Ramasamy brings a different kind of experience to the case. Before joining Anthropic in 2025, he spent six years at Amazon Web Services managing AI deployments for government clients, including classified environments. At Anthropic, he is credited with building the team that brought the company’s Claude models into national security and defense environments, including the $200 million Pentagon contract announced last summer.
His statement takes on the government’s claim that Anthropic could theoretically interfere with military operations by disabling the technology or altering its behavior, which Ramasamy says is not technically possible. By his account, once Claude is deployed within a government-secured “air-gapped” system operated by a third-party contractor, Anthropic has no access to it: there is no remote kill switch, no backdoor, and no mechanism for pushing unauthorized updates. Any kind of “operational veto” is a fiction, he suggests, explaining that any change to the model would require explicit Pentagon approval and action to install it.
Anthropic, he says, can’t even see what government users are writing to the system, let alone extract that data.
Ramasamy also disputes the government’s claim that Anthropic’s hiring of foreign nationals makes the company a security risk. He notes that Anthropic employees have undergone U.S. government security clearance checks (the same background check process required to access classified information) and adds in his statement that, “to my knowledge,” Anthropic is the only AI company whose cleared personnel actually built the AI models designed to run in classified environments.
Anthropic’s lawsuit argues that the supply chain risk designation, the first applied to a U.S. company, amounts to government retaliation for the company’s publicly expressed views on AI safety, in violation of the First Amendment.
The government, in a 40-page filing earlier this week, rejected that framing completely, saying that Anthropic’s refusal to allow all legal military uses of its technology was a business decision, not protected speech, and that the designation was a straightforward national security call, not punishment for the company’s views.