A roadmap for AI, if anyone wants to listen


While Washington’s break with Anthropic exposed the complete lack of coherent rules governing artificial intelligence, a bipartisan coalition of thinkers has put together something the government has so far refused to produce: a framework for what responsible AI development should actually look like.

The prohuman declaration was finalized before last week’s Pentagon-Anthropic showdown, but the collision of the two events did not go unnoticed by anyone involved.

“Something quite remarkable has happened in the United States in the last four months,” said Max Tegmark, an MIT physicist and AI researcher who helped organize the effort, in conversation with this editor. “Polls suddenly show that 95% of all Americans oppose an unregulated race toward superintelligence.”

The newly released document, signed by hundreds of experts, former officials and public figures, begins with the sensible observation that humanity is at a crossroads. One path, which the statement calls “the race to replacement,” leads to humans being supplanted first as workers, then as decision-makers, as power accumulates in unaccountable institutions and their machines. The other leads to AI that greatly expands human potential.

This latter scenario depends on five key pillars: keeping humans in charge, avoiding concentration of power, protecting human experience, preserving individual freedom, and holding AI companies legally accountable. Among its most forceful provisions are a complete ban on the development of superintelligence until there is scientific consensus that it can be done safely and with genuine democratic acceptance; mandatory shutdown mechanisms on powerful systems; and a ban on architectures capable of self-replication, autonomous improvement, or resistance to shutdown.

The release of the statement coincides with a period that makes its urgency much easier to appreciate. On the last Friday of February, Defense Secretary Pete Hegseth labeled Anthropic – whose AI already runs on classified military platforms – a “supply chain risk” after the company refused to grant the Pentagon unlimited use of its technology, a designation normally reserved for companies with ties to China. Hours later, OpenAI closed its own deal with the Department of Defense, one that legal experts say will be difficult to enforce in any meaningful way. What all of this laid bare is how costly congressional inaction on AI has become.

As Dean Ball, a senior fellow at the Foundation for American Innovation, told the New York Times at the time, “This is not just a dispute over a contract. This is the first conversation we’ve had as a country about control of AI systems.”


When we spoke, Tegmark turned to an analogy most people can understand. “You never have to worry that some drug company is going to release some other drug that causes massive harm before people have figured out how to make it safe,” he said, “because the FDA won’t let them release anything until it’s safe enough.”

Turf wars in Washington rarely generate the kind of public pressure that changes laws. Instead, Tegmark sees child safety as the pressure point most likely to break the current impasse. Indeed, the statement calls for mandatory pre-deployment testing of AI products (particularly chatbots and companion apps aimed at younger users) covering risks including increased suicidal ideation, exacerbation of mental health conditions, and emotional manipulation.

“If a creepy old man texts an 11-year-old boy pretending to be a girl and tries to persuade him to commit suicide, the guy can go to jail for it,” Tegmark said. “We already have laws. It’s illegal. So why is it different if it’s done by a machine?”

He believes that once the principle of pre-release testing of children’s products is established, the scope will almost inevitably expand. “People will come and say, let’s add some other requirements. Maybe we should also test that this can’t help terrorists make biological weapons. Maybe we should test to make sure that superintelligence doesn’t have the ability to overthrow the U.S. government.”

It is no small feat that former Trump adviser Steve Bannon and Susan Rice, President Obama’s national security adviser, signed the same document, along with former Chairman of the Joint Chiefs of Staff Mike Mullen and progressive religious leaders.

“What they agree on, of course, is that everyone is human,” Tegmark said. “If it comes down to whether we want a future for humans or a future for machines, of course they will be on the same side.”



