Anthropic, the AI firm behind the Claude chatbot, built its early reputation on an emphasis on safe technology, but it now appears to be adjusting its safety priorities to stay competitive. The company recently announced revisions to its responsible-scaling policy, a set of internal standards intended to head off potential risks from AI development, such as large-scale cyberattacks.
The updated guidelines affirm that Anthropic will still require a “strong argument that catastrophic risk is contained” during AI development, but the company now says it will pause progress only “until and unless we no longer believe we have a significant lead.” In other words, Anthropic may press ahead if it believes its competitors have caught up.
Anthropic cited shifting attitudes toward AI safety in the U.S., where economic potential has come to overshadow safety considerations: the federal government has been slow to act on AI safety, focusing instead on competitiveness and economic growth.
The adjustment to its safety guidelines coincides with the Pentagon’s threat to terminate its contracts unless Anthropic allows its technology to be used for all lawful military purposes. Anthropic, however, asserts that the policy change is unrelated to that dispute.
Founded in 2021 by former OpenAI employees who worried that safety was being deprioritized, Anthropic has made safety central to its identity. CEO Dario Amodei has repeatedly warned about AI’s potential harms and, in recent interviews, described safety as the company’s top priority.
In its blog post, the company pointed to ongoing enhancements to its safety practices, committing to regular reports and safety objectives to boost transparency and accountability. Despite that safety-first image, Heidy Khlaaf, chief AI scientist at the AI Now Institute, argues that Anthropic has fallen short in preventing harm from current AI applications.
Khlaaf said Anthropic’s safety approach has focused excessively on hypothetical future catastrophes rather than on the harms posed by today’s AI systems, such as chatbot errors. Claude itself has previously been misused for fraud and data theft, raising questions about the company’s safety commitments.
As competition intensifies among major AI firms such as Anthropic, OpenAI, and Google, the U.S. government’s strong support for AI development makes it harder for companies to prioritize safety. Teresa Scassa of the University of Ottawa highlighted a parallel dilemma for Canadian companies, noting that stringent regulations may hinder AI innovation domestically.
Since the failure of Canada’s Artificial Intelligence and Data Act in 2025, neither Canada nor the U.S. has imposed broad AI regulation. Meanwhile, the Pentagon’s pressure on Anthropic over military use of its technology, arriving alongside the safety-policy update, underscores the complex interplay between government demands and company priorities.
