"I am directing EVERY Federal Agency in the United States Government to IMMEDIATELY CEASE all use of Anthropic's technology. We don't need it, we don't want it, and will not do business with them again!" Trump said in a post on Truth Social.
Trump's directive came during a weeks-long feud between the Pentagon and the San Francisco-based startup over how the military could use AI in warfare.
Spokespeople for Anthropic, which has a $US200 million contract with the Pentagon, did not immediately respond to a request for comment.
Trump's decision stopped short of threats issued by the Pentagon, including that it could invoke the Defence Production Act to require Anthropic's compliance. The Pentagon had also said it considered making Anthropic a supply-chain risk, a designation that previously targeted businesses tied to foreign adversaries.
But Trump vowed further action if Anthropic did not co-operate, warning he would use "the Full Power of the Presidency to make them comply, with major civil and criminal consequences to follow" if the company did not assist during the phaseout period.
The setback comes as Anthropic, an AI leader, races to win a fierce competition selling novel technology to businesses and government, particularly for national security, ahead of a widely expected initial public offering. The company has said it has not finalised an IPO decision.
At the same time, the battle over technological guardrails has raised concerns that the Department of Defence would be bound by US law but little else when deploying AI for national-security missions, regardless of the safety or ethics terms of service embraced by the technology's developers.
Anthropic had sought guarantees that its AI would not be used for fully autonomous weapons or for mass domestic surveillance - applications in which the Pentagon has said it had no interest.
Anthropic was the first frontier AI lab to put its models on classified networks via cloud provider Amazon.com and the first to build customised models for national security customers, the startup has said.
Its product Claude is in use across the intelligence community and armed services.
US Senator Mark Warner, a Democrat and vice chairman of the Select Committee on Intelligence, criticised the action taken by Trump, a Republican.
"The president's directive to halt the use of a leading American AI company across the federal government, combined with inflammatory rhetoric attacking that company, raises serious concerns about whether national security decisions are being driven by careful analysis or political considerations."
The conflict is the latest eruption in a saga that dates back at least to 2018. That year, employees at Alphabet's Google protested the Pentagon's use of the company's AI to analyse drone footage, straining relations between Silicon Valley and Washington.
A rapprochement ensued, with companies including Amazon and Microsoft jousting for defence business, and still more CEOs pledging co-operation last year with the Trump administration.
But theoretical "killer robots" remain a concern for human-rights and technology activists, even as Ukraine and Gaza have become theatres for increasingly automated battlefield systems.