The AI tech company Anthropic was recently designated a supply chain risk by Secretary of Defense Pete Hegseth over a fundamental disagreement about how Anthropic’s AI model, Claude, would be deployed. Anthropic had previously supported the Department of Defense by deploying its chatbots and other models for intelligence gathering and data analysis on classified government servers. However, after a post by Anthropic CEO Dario Amodei condemned the use of AI in domestic surveillance and fully autonomous weapons, the company’s technology was dropped from government use.

Anthropic had previously signed a $200 million contract with the Department of Defense in July 2025, which included use of the AI model on classified servers.

Amodei started his career in AI in 2015, moving from company to company before landing at OpenAI in 2016. He and his sister, Daniela, believed that AI had great potential to change the world.

“Many of the implications of powerful AI are adversarial or dangerous,” Amodei said in 2024, “but at the end of it all, there has to be something we’re fighting for, some positive-sum outcome where everyone is better off, something to rally people to rise above their squabbles and confront the challenges ahead.”

Both Amodei and his sister left OpenAI in 2020 with the goal of building a more ethical AI model with more oversight. Unlike other models, Claude is governed by a constitution. The principles set forth by Anthropic emphasize being broadly safe, being broadly ethical, complying with Anthropic’s guidelines, and being genuinely helpful.

In keeping with this mission, Amodei issued a statement saying that “in a narrow set of cases, we believe AI can undermine, rather than defend, democratic values. Some uses are also simply outside the bounds of what today’s technology can safely and reliably do.”

The statement argued against “using these systems for mass domestic surveillance,” saying that the law has not yet caught up to current AI capabilities. Amodei defended AI use abroad, but drew the line at domestic mass surveillance. Anthropic also doubted the capability of an AI-driven weapons system, stating that chatbots like Claude are “simply not reliable enough to power fully autonomous weapons.”

These concerns were reasonable: chatbots have been known to “hallucinate” (generate false or incorrect information and present it as factual), and their outputs draw on data scraped from social media websites that are not exactly known for reasoned and intelligent debate.

Local AI-culture scholar and South junior Andrew Schram weighed in.

“AI lacks many human capabilities in decision making, especially in complex, no-win situations such as warfare,” Schram said.

In response to Amodei’s refusal to support fully autonomous weapons systems, Secretary of Defense Pete Hegseth deemed Anthropic a supply chain risk. This means that to contract with the Department of Defense, companies must sign an agreement pledging not to use Anthropic’s products, such as Claude.

Despite this order, U.S. command in the Middle East used Claude during the recent strikes on Iran.

President Trump has frequently met with tech companies during his second term, but notably not Anthropic. Amodei was not present at Trump’s inauguration, despite the presence of Meta CEO Mark Zuckerberg, Amazon founder Jeff Bezos, Google CEO Sundar Pichai, and Tesla CEO Elon Musk.

White House AI adviser David Sacks has cited Anthropic’s support for regulation of the AI industry as an attempt to corner the market. He has called the company “woke” and has opposed state-level regulation of AI.

OpenAI, whose founder was one of the largest individual donors to Trump, will fill much of the vacuum left by Anthropic.

Article by Emmett Coughlan