AI Company’s ‘Ethical Guidelines’ And Identity Are Collapsing Under Military Pressure

Anthropic, the artificial intelligence company founded on a promise to build the world’s most powerful and safest AI, faces a critical moment as its core identity crumbles under pressure from the Pentagon.

With less than 48 hours until a Friday deadline, the company must decide whether it can maintain its ethical red lines when confronted by the full weight of the United States military apparatus.

The Amodei siblings established Anthropic in 2021 after leaving OpenAI, believing their former employer wasn’t taking AI safety seriously enough. For five years, the company’s Responsible Scaling Policy represented a public commitment to halt AI development if serious risks emerged. That foundational principle has now disappeared.

The crisis began two weeks ago when Claude, Anthropic’s chatbot, was used in a covert military operation. The Joint Special Operations Command conducted a raid in Caracas, Venezuela, to capture former president Nicolás Maduro.

The Wall Street Journal and Axios revealed that Claude, working through Anthropic’s partnership with defense contractor Palantir, played a role in the assault. This meant Claude was deployed for military operations, precisely the scenario Anthropic’s policies were designed to prevent.

When, during a meeting with Palantir, an Anthropic employee questioned whether Claude had been used in the operation, a Palantir executive reportedly interpreted the question as disapproval and escalated the matter to the Pentagon.

Pentagon spokesman Sean Parnell confirmed the tension, stating that “our nation requires that our partners be willing to help our war fighters win in any fight.”

On February 19th, Pentagon Chief Technology Officer Emil Michael urged companies to “cross the Rubicon on military AI use,” a reference to Caesar’s irreversible commitment to military action. The message was clear: there is no room for safety-first identities when it comes to defense partnerships.

The pressure intensified when Defense Secretary Pete Hegseth released a document in early January requiring all defense contractors to eliminate company-specific safety restrictions. The directive mandated that AI companies must permit any lawful use of their technology by the government, with 180 days to comply. That deadline expires Friday afternoon.

Yesterday morning, Dario Amodei, Anthropic’s co-founder and chief executive, was summoned to meet with Hegseth at the Pentagon. The Pentagon holds three weapons against Anthropic: the Defense Production Act, which can force private companies to accept defense contracts; supply chain risk designation, which would blacklist Anthropic from federal networks; and contract cancellation, clawing back up to $200 million.

Anthropic has maintained it will not support autonomous weapons that use AI for final targeting decisions without human oversight, nor will it enable domestic surveillance of American citizens. The Pentagon’s position is simple: if a use is lawful, the government decides, not Anthropic.

On the same day Amodei met with Hegseth, Anthropic released RSP 3.0, a revised Responsible Scaling Policy. The flagship pledge that defined the company since 2023 has vanished.

Previously, Anthropic committed never to train an AI model unless it could guarantee adequate safety measures in advance. The new policy replaces this hard limit with softer conditions: the company will consider pausing development only if it both holds a significant lead in the AI race and judges the catastrophic risk to be material.

This means if competitors deploy potentially dangerous models, Anthropic will follow suit rather than hold back. Chief Science Officer Jared Kaplan explained to Time magazine: “We felt that it wouldn’t actually help anyone for us to stop training AI models. We didn’t really feel with the rapid advance of AI that it made sense for us to make unilateral commitments if the competitors are blazing ahead.”

Chris Painter, director of policy at METR, reviewed the new RSP and concluded that “the change shows that Anthropic believes that it needs to shift into triage mode with its safety plans because methods to assess and mitigate risk are not keeping up with the pace of capabilities.”

While Anthropic insists the policy revision was long in development and unrelated to Pentagon pressure, critics see it as preemptive surrender.

Friday’s deadline will determine not just whether Anthropic is willing to hold its ethical lines, but whether it possesses the power to do so against an entity that can legally seize what it needs for national defense.