AI labs ease safety rules amid race pressure
By Ina Fried
Published on March 3, 2026.
As AI companies increasingly adapt their safety policies to the pressure of the race to be first, some experts warn that this shift could prove catastrophic as large language models grow more powerful and less predictable. Google DeepMind CEO Demis Hassabis has cautioned that such pressure can lead to reckless decisions as the world nears superhuman AI.

Anthropic, widely regarded as the most safety-focused major AI lab, has revised a key safeguard, narrowing the conditions under which it would delay developing or releasing a model that could pose a significant risk. The change comes amid a dispute with the Trump administration after Anthropic refused to allow its models to be used for autonomous weapons or domestic surveillance; the Defense Department responded by cutting its use of Claude and labeling the firm a supply chain risk.

Future of Life Institute founder Max Tegmark argues that if companies had pushed to turn their voluntary safety commitments into law, the race dynamic might not have escalated. And while global AI summits increasingly emphasize commercialization over guardrails, he suggests there are still signs that meaningful regulation could emerge.