AI Safety Standards Found Lacking by Institute Study
- tech360.tv
Major artificial intelligence companies, including Anthropic, OpenAI, xAI, and Meta, have safety practices that fall "far short of emerging global standards," according to a new edition of the Future of Life Institute's AI safety index. An independent panel of experts conducted the safety evaluation.

The institute noted that companies were racing to develop superintelligence, yet none possessed a robust strategy for controlling such advanced systems. This evaluation emerges amid heightened public concern regarding the societal impact of smarter-than-human systems.
Concern spans both advanced systems capable of reasoning and logical thinking and present-day harms: several cases of suicide and self-harm have been tied to AI chatbots. Max Tegmark, an MIT professor and president of the Future of Life Institute, stated that US AI companies remain less regulated than restaurants.
Tegmark added that these companies continue lobbying against binding safety standards, despite recent uproar over AI-powered hacking and AI driving people to psychosis and self-harm. The Future of Life Institute is a nonprofit organisation founded in 2014.

The institute, which received early support from Tesla Chief Executive Officer Elon Musk, raises concerns about the risks intelligent machines pose to humanity. In October, a group including scientists Geoffrey Hinton and Yoshua Bengio called for a ban on developing superintelligent artificial intelligence.
They advocate maintaining such a ban until there is strong public buy-in and scientific consensus on a safe way forward. A Google DeepMind spokesperson said the company will "continue to innovate on safety and governance at pace with capabilities" as its models become more advanced.
xAI responded, "Legacy media lies," in what appeared to be an automated reply. An OpenAI spokesperson said the company shares its safety frameworks, evaluations, and research to help advance industry standards.
The OpenAI spokesperson also noted the company continuously strengthens its protections to prepare for future capabilities, invests heavily in frontier safety research, and "rigorously" tests its models. Anthropic, Meta, Z.ai, DeepSeek, and Alibaba Cloud did not offer a statement on the study.
- Major AI companies' safety practices fall "far short of emerging global standards."
- Companies named include Anthropic, OpenAI, xAI, and Meta.
- The Future of Life Institute's study found that companies lack robust control strategies for superintelligence.
Source: REUTERS