AI Safety Standards Found Lacking by Institute Study

  • Writer: tech360.tv
  • 3 hours ago
  • 2 min read

Major artificial intelligence companies, including Anthropic, OpenAI, xAI, and Meta, have safety practices that fall "far short of emerging global standards," according to a new edition of the Future of Life Institute's AI safety index. An independent panel of experts conducted the safety evaluation.

The institute noted that companies were racing to develop superintelligence, yet none possessed a robust strategy for controlling such advanced systems. This evaluation emerges amid heightened public concern regarding the societal impact of smarter-than-human systems.


These systems are capable of reasoning and logical thinking, and several cases of suicide and self-harm have been linked to AI chatbots. Max Tegmark, an MIT professor and president of the Future of Life Institute, stated that US AI companies remain less regulated than restaurants.


Tegmark added that these companies continue lobbying against binding safety standards, despite recent uproar over AI-powered hacking and AI driving people to psychosis and self-harm. The Future of Life Institute is a nonprofit organisation founded in 2014.


Credit: TESLA

The institute, which received early support from Tesla Chief Executive Officer Elon Musk, raises concerns about the risks intelligent machines pose to humanity. In October, a group including scientists Geoffrey Hinton and Yoshua Bengio called for a ban on developing superintelligent artificial intelligence.


They advocate for this ban until the public demands it and science paves a safe way forward. A Google DeepMind spokesperson said the company will "continue to innovate on safety and governance at pace with capabilities" as its models become more advanced.


xAI said "Legacy media lies," in what appeared to be an automated response. An OpenAI spokesperson affirmed that the company shares its safety frameworks, evaluations, and research to help advance industry standards.


The OpenAI spokesperson also noted the company continuously strengthens its protections to prepare for future capabilities, invests heavily in frontier safety research, and "rigorously" tests its models. Anthropic, Meta, Z.ai, DeepSeek, and Alibaba Cloud did not offer a statement on the study.

  • Major AI companies’ safety practices are "far short of emerging global standards."

  • Companies mentioned include Anthropic, OpenAI, xAI, and Meta.

  • The Future of Life Institute's study found companies lack robust control strategies for superintelligence.


Source: REUTERS

As technology advances and has a greater impact on our lives than ever before, being informed is the only way to keep up. Through our product reviews and news articles, we want to be able to aid our readers in doing so. All of our reviews are carefully written, offer unique insights and critiques, and provide trustworthy recommendations. Our news stories are sourced from trustworthy sources, fact-checked by our team, and presented with the help of AI to make them easier to comprehend for our readers. If you notice any errors in our product reviews or news stories, please email us at editorial@tech360.tv. Your input will be important in ensuring that our articles are accurate for all of our readers.

Tech360tv is Singapore's Tech News and Gadget Reviews platform. Join us for our in-depth PC reviews, Smartphone reviews, Audio reviews, Camera reviews and other gadget reviews.

© 2021 tech360.tv. All rights reserved.