Anthropic pulls back from altered Pentagon AI deal
Anthropic has declined to proceed with a revised artificial intelligence contract offered by the US Department of Defense, citing concerns that changes to the agreement would weaken safeguards tied to its core safety commitments and restrictions on military applications.
The San Francisco-based company, founded by former OpenAI executives including Dario Amodei and Daniela Amodei, confirmed it would not accept amendments to a previously negotiated arrangement that it believed diluted provisions aligned with its published safety framework. The decision comes amid heightened scrutiny of how leading AI developers engage with defence agencies and how strictly they adhere to self-imposed limits on military use.
Anthropic has positioned itself as a proponent of “constitutional AI”, a training approach designed to embed explicit principles into large language models. Since its launch in 2021, the company has published detailed responsible scaling policies and set out red lines around uses that could enable mass surveillance, autonomous weapons targeting or other activities that raise legal and ethical risks. Its most advanced models, marketed under the Claude brand, are used by enterprises across finance, technology and public services.
According to people familiar with the matter, the Pentagon had sought adjustments to contract language governing model deployment, data handling and the scope of potential defence-related use cases. While Anthropic did not disclose the precise clauses in dispute, it stated that any government engagement must remain consistent with its public commitments on safety and human oversight.
A spokesperson said the company supports national security work in areas such as cyber defence, logistics and back-office efficiency, but would not agree to terms that “expand permissible uses beyond our stated policy boundaries”. The Pentagon has declined to comment on specific vendor negotiations but maintains that all AI procurement complies with its Responsible AI Strategy and the Department’s ethical principles for AI adopted in 2020.
The disagreement emerges at a time when US defence agencies are accelerating the adoption of generative AI for intelligence analysis, operational planning and administrative automation. The Chief Digital and Artificial Intelligence Office has increased funding for pilot projects and partnerships with private sector firms, seeking to harness large language models while maintaining compliance with international humanitarian law.
Several major technology groups have recalibrated their stance on defence work in recent years. Microsoft and Amazon Web Services have longstanding cloud contracts with the Department of Defense, while Palantir has deep ties to military and intelligence clients. Google, after employee protests over its involvement in Project Maven in 2018, introduced AI principles that restrict certain weapons-related uses but continues to supply cloud and AI services to government agencies.
Anthropic’s position reflects broader tensions within the AI industry about the balance between commercial opportunity and ethical restraint. The company has raised billions of dollars from investors including Amazon and Google, which have integrated its models into cloud offerings. It competes with OpenAI, whose GPT models underpin products used by both civilian and government customers, and with emerging players such as Mistral and Cohere.
Debate has intensified following reports that Anthropic updated elements of its acceptable use policy and responsible scaling framework over the past year, clarifying how its systems could be deployed in national security contexts. Critics argue that even carefully worded exceptions risk mission creep, especially as generative models become more capable of analysing intelligence data, drafting operational plans or supporting autonomous systems.
Supporters counter that engagement with defence institutions can improve safety by ensuring that advanced AI tools are subject to oversight and aligned with democratic norms rather than developed in secrecy elsewhere. They note that the US government has emphasised human-in-the-loop requirements and accountability structures for any lethal applications.
Academic researchers specialising in AI governance observe that contract language plays a critical role in translating high-level principles into enforceable obligations. Clear definitions of prohibited use, audit rights, data retention limits and model update controls can determine whether safeguards are meaningful in practice. They also highlight the difficulty of policing downstream uses once a model is integrated into complex defence systems.
Anthropic’s leadership has repeatedly warned about the risks of powerful AI systems if deployed without adequate controls. Dario Amodei has argued publicly for stronger transparency standards, model evaluations and, in some cases, export controls to manage the proliferation of advanced AI capabilities. The company has invested heavily in alignment research and red-teaming exercises intended to stress-test its models against misuse.
The Pentagon, for its part, faces pressure to modernise rapidly in response to technological competition from China and other states. Officials have described artificial intelligence as central to maintaining operational advantage, particularly in areas such as predictive maintenance, cyber operations and intelligence fusion. Budget documents show sustained increases in AI-related spending across the armed services.
The article Anthropic pulls back from altered Pentagon AI deal appeared first on Arabian Post.