Anthropic vs. The Pentagon

March 21, 2026 · 5 min read

Not What I Wanted

I want to start by saying this is not exactly what I wanted to write about. AI philosophy is a big topic I've wanted to cover on this blog. Mostly, I wanted a platform to get out my own thoughts on subjects such as AI consciousness or the existential threat of AI. Instead, I get to talk about politics, which I dislike and don't feel I know enough about to hold a strong opinion on. But this story is unfolding right now, and while there are other topics I'd prefer to discuss, I believe this is a very important moment.


What Happened?

To give a very brief rundown, this started when Dario Amodei, the CEO of Anthropic, released a statement called "Our Discussions with the Department of War." He essentially said that current AI is too unreliable for autonomous weapons and refused to let Claude be used for "mass surveillance." The Pentagon did not like that and gave Anthropic an ultimatum: agree to its terms or be labeled a supply chain risk, a designation that has never been applied to a U.S. company. On March 5th, Pete Hegseth officially labeled Anthropic a supply chain risk and ordered all military operations to stop using Claude within 180 days. Anthropic has since filed two lawsuits against the Department of War, calling the move "illegal retaliation." To clarify what "supply chain risk" means: no government agency can use Claude in any way, and the same applies to any subcontractor doing work for the government. So the implications could be very large.


My Thoughts

Upon first hearing the news, I immediately sided with Anthropic. I believe this is a great overreach by our government, at least on the mass surveillance side of things. I do not think it is right or fair to label a company a supply chain risk just because it didn't like the contract the government drew up. Obviously, I haven't read it, but it seems the contract had loopholes in place, such as "when they deem appropriate," to disregard Claude's guidelines. When Anthropic stood up to the Department of War and publicly called it out, it was, for lack of a better word, awesome. So far, Anthropic has been very open about its intentions with AI and its work on safety. Now, before everyone argues that I have no idea what they do behind the scenes and that every company has some agenda: Anthropic has tried very hard to be upfront and open, which is more than we can say for a lot of companies.


Devil's Advocate

I will not write this from only one side, and I will admit that shortly after hearing about this I doubted my immediate reaction. I think the biggest point against Anthropic is the sheer fact that an unelected official, Dario, would have a lot of control over the Department of War. Our country is founded on electing officials to run it; in the Secretary of War's case, an official we elected appoints that position. We put our trust in a few to run this country, and now the CEO of a tech company is trying to decide, or at least limit, what the Department of War does. This is one of the core issues with AI in general: one of the most powerful technologies in the world is run by a handful of companies that mostly answer to one person, their CEO. I understand the government's decision to fight back because, at the end of the day, our country's safety has been put in its hands, and having a tech CEO undermine it is not a good look. Still, the government could have handled this better, and the supply chain risk designation goes way too far and sets a bad precedent for future companies working with the government.


Closing Remarks

Unfortunately, Dario is in a very uncomfortable position. On one hand, it's great he took a stand, but the supply chain risk designation could have huge, long-term implications that I don't think even the current administration has thought through. On the other hand, his whole reason for starting Anthropic was that he knew everyone would race full speed toward superintelligence, and he believes that if he gets there first, his model will be safer. In this instance, his refusing and calling out the government might mean another company steps up, one that might not have the morals or the backbone to do what is best. It is not a position I hope to ever be in, but in the end, I still stand by his resolve not to give in. I do not believe this administration will be honest about how it uses AI, and with it, mass surveillance could become a serious problem.


As always, these are just my thoughts. I'd love to hear yours.