AIRegulation – Trump Orders Federal Agencies to Halt Anthropic Technology Use
US President Donald Trump has directed all federal agencies to immediately stop using technology developed by artificial intelligence company Anthropic, intensifying a high-profile standoff over the military application of advanced AI systems.

In a statement posted on Truth Social, Trump accused the company of attempting to impose its own restrictions on the US military. He said the administration would not allow what he described as a politically biased technology firm to influence how the armed forces operate. The president characterized Anthropic’s actions as a serious misstep and asserted that national defense decisions must remain under constitutional authority.
Immediate Suspension and Six-Month Phase-Out
The president’s order instructs every federal department and agency to cease using Anthropic’s products without delay. For agencies currently relying on the company’s systems, the administration has outlined a six-month transition period to discontinue those services.
Trump also warned of potential civil and criminal consequences if the company does not comply with federal directives during the transition. The White House has framed the move as a necessary step to safeguard national security interests and maintain operational flexibility within defense agencies.
Core Dispute Over AI Applications
At the heart of the disagreement are two specific uses of Anthropic’s AI model, Claude, that the company has declined to support: mass domestic surveillance and the deployment of fully autonomous weapons.
Anthropic confirmed that discussions with the Department of War had stalled over what it described as two narrow exceptions related to lawful use of its technology. According to the company, it is prepared to support AI applications tied to national security, with the exception of systems designed for widespread monitoring of Americans and weapons capable of operating without human oversight.
The firm stated that it had sought a workable agreement in good faith and emphasized that the disputed limitations had not disrupted any known government missions to date.
Company Raises Safety and Rights Concerns
Anthropic argued that advanced AI models are not yet reliable enough to be entrusted with fully autonomous combat decisions. The company warned that such deployments could put both military personnel and civilians at risk.
It also raised concerns about civil liberties, stating that large-scale domestic surveillance programs would infringe upon fundamental rights. The company maintained that its position reflects both safety considerations and constitutional principles.
Defense Department Response
Secretary of War Pete Hegseth announced plans to designate Anthropic as a supply-chain risk to national security. Writing on social media platform X, Hegseth said the Department must retain unrestricted access to AI tools for all lawful defense purposes.
He further stated that no contractor or partner working with the US military would be permitted to engage in commercial activities with Anthropic during the transition period. However, the department indicated that limited services could continue for up to six months to ensure continuity while alternatives are arranged.
Anthropic described the proposed designation as unprecedented for a US-based company, noting that similar actions have historically been reserved for foreign adversaries. The company said it would challenge any formal designation through legal channels if necessary.
Lawmakers Voice Opposition
The administration’s directive drew sharp criticism from several Democratic lawmakers. Senator Mark Warner expressed concern that national security policy could be influenced by political considerations rather than careful evaluation.
In a joint statement, Senators Chris Van Hollen and Edward Markey described the Pentagon’s actions as an alarming overreach of executive authority. They characterized the move as retaliatory and warned it could set a troubling precedent for government engagement with private technology firms.
Representative Zoe Lofgren also criticized what she called heavy-handed tactics, while Representative Valerie Foushee emphasized that AI developers have an obligation to uphold the safety commitments they have publicly endorsed.
The unfolding dispute highlights broader tensions between federal authorities and technology companies over the responsible use of artificial intelligence in defense and surveillance. As the six-month phase-out begins, the outcome of any legal challenges could shape future collaboration between government agencies and AI developers.