The Trump administration blacklisted Anthropic; now telling banks to use its AI



Bottom line: Treasury Secretary Scott Bessent and Federal Reserve Chair Jerome Powell are urging Wall Street’s biggest banks to test Anthropic’s Mythos AI model to detect cybersecurity vulnerabilities, even as the Pentagon fights Anthropic in court after calling it a supply chain risk for refusing to remove guardrails on autonomous weapons and mass surveillance. JPMorgan Chase, Goldman Sachs, Citigroup, Bank of America and Morgan Stanley are reportedly testing the model. Mythos, which found thousands of zero-day flaws in major operating systems and browsers, is distributed through a restricted program called Project Glasswing to about 50 organizations. UK regulators are also struggling to assess the risks.

The Trump administration is quietly encouraging America’s largest banks to test technology from the artificial intelligence company it has spent two months trying to destroy. Treasury Secretary Scott Bessent and Federal Reserve Chairman Jerome Powell this week summoned executives from JPMorgan Chase, Goldman Sachs, Citigroup, Bank of America and Morgan Stanley and urged them to use Anthropic’s new Mythos model to detect cybersecurity vulnerabilities in their systems, according to Bloomberg.

The recommendation stands out for its contradiction. Anthropic is currently fighting the Department of Defense in federal court after Defense Secretary Pete Hegseth designated the company a “supply chain risk,” a label that excludes it from military contracts and orders defense contractors to stop using its technology. The designation came after Anthropic refused to remove two safety restrictions from its AI models: no use in fully autonomous weapons and no deployment for mass surveillance of US citizens.

Now, two of the administration’s top economic officials are calling on Wall Street to embrace the same product the Pentagon has sought to blacklist.

What Mythos really does

Claude Mythos Preview is a frontier model that Anthropic did not explicitly train for cybersecurity. The ability to find vulnerabilities emerged as what the company describes as a downstream consequence of overall improvements in code reasoning and autonomous operation. During testing, Mythos identified thousands of zero-day vulnerabilities – flaws previously unknown to software developers – across all major operating systems and web browsers.

The capabilities were significant enough that Anthropic decided not to release the model publicly. Instead, it launched Project Glasswing, a controlled program that provides access to approximately 50 organizations, including Amazon Web Services, Apple, Google, Microsoft, Nvidia, Cisco, CrowdStrike, Palo Alto Networks, and JPMorgan Chase. Anthropic has committed up to $100 million in usage credits and $4 million in direct donations to open source security organizations as part of the initiative.

The framing of a model “too dangerous to release” has generated skepticism. Tom’s Hardware noted that the claims of “thousands” of serious zero-day discoveries rested on just 198 manual reviews, and that many of the flagged vulnerabilities were in older software or were impractical to exploit. Others in the security community suggested that the restricted release looked less like responsible AI governance and more like a shrewd sales strategy: create scarcity, create fear, and let customers come to you.

The Pentagon Paradox

The collision between the Bessent-Powell recommendation and the Hegseth designation is not a matter of mixed signals: it is two branches of the same administration pursuing openly contradictory policies toward the same company.

The dispute with the Pentagon began in February, when Hegseth gave Anthropic CEO Dario Amodei a Friday deadline to remove the company’s safety restrictions or lose its $200 million defense contract. Amodei refused. Hegseth responded by declaring Anthropic a supply chain risk, and President Trump ordered federal agencies to stop using its technology. A Pentagon official accused Amodei of having a “God complex.” Trump called Anthropic a “radical left, woke company.”

Since then, the courts have been divided. A federal judge in California issued a preliminary injunction blocking the supply chain designation, writing that “nothing in the governing statute supports the Orwellian notion that an American company can be branded as an adversary and potential saboteur of the United States for expressing disagreement with the government.” An appeals court in Washington, D.C., denied Anthropic’s request to temporarily halt the blacklisting while the case proceeds. The net effect: Anthropic is excluded from Department of Defense contracts but can continue to work with other government agencies.

It is into that gap, excluded from the Pentagon but not from the Treasury or the Federal Reserve, that Bessent and Powell entered this week.

What banks are really doing

JPMorgan Chase is the only bank listed as an official Project Glasswing partner, but Bloomberg reported that Goldman Sachs, Citigroup, Bank of America and Morgan Stanley are testing Mythos internally. Use cases reportedly include vulnerability detection, flagging fraud risks, and automating compliance workflows across financial systems.

The speed of adoption reflects genuine fear. If Mythos can find zero-day vulnerabilities in operating systems and browsers, it can presumably find them in banking infrastructure as well, as can any sufficiently capable model that follows. The defensive logic is simple: it is better to find the holes before the opponent’s AI does.

The regulatory response has been international. The Financial Times reported that UK officials at the Bank of England, the Financial Conduct Authority and HM Treasury are in talks with the National Cyber Security Centre to examine potential vulnerabilities highlighted by Mythos. Representatives of major British banks, insurers and stock exchanges are expected to receive briefings within fifteen days.

The uncomfortable implication

The Mythos episode exposes a structural problem in the administration’s approach to AI. The same government that called Anthropic a national security threat because it refused to remove safety guardrails now calls on the financial system to rely on Anthropic’s technology for its own security. The message to Anthropic is incoherent: it is too dangerous to trust with defense contracts, but indispensable enough for the Treasury Secretary to personally call bank CEOs to recommend its product.

For Anthropic, the contradiction is strategically useful. Each bank that adopts Mythos deepens the company’s integration into critical national infrastructure, which makes the supply chain designation seem increasingly absurd. For the administration, the episode reveals what happens when national security policy is driven by personal grievances rather than a coherent strategy: the left hand blacklists what the right hand is busy deploying.

The banks, for their part, do not seem troubled by the contradiction. When the Secretary of the Treasury and the chairman of the Federal Reserve tell you to try something, you try it, regardless of what the Pentagon thinks about the company that made it.


