OpenAI to Invest in AI Safety Research to Ensure Human Protection
Updated: Jan 2
OpenAI announces a significant investment and the creation of a research team dedicated to keeping artificial intelligence safe for humans.
OpenAI, the creator of ChatGPT, revealed its plans on Wednesday to allocate substantial resources and establish a dedicated research team to maintain the safety of its artificial intelligence.
The aim is to eventually employ AI to supervise itself, according to an official blog post by OpenAI co-founder Ilya Sutskever and head of alignment Jan Leike. They expressed concern that superintelligence could pose a threat to humanity, emphasizing the urgent need for methods to control and steer potentially rogue AI.
Anticipating the arrival of superintelligent AI within this decade, the blog post highlights the necessity for enhanced techniques that surpass current capabilities in order to effectively manage these systems. The authors emphasize the importance of breakthroughs in alignment research, which focuses on ensuring AI remains beneficial to humans.
The primary objective of the Superalignment team is to build an automated alignment researcher with roughly "human-level" capabilities, then leverage substantial compute power to scale that effort. OpenAI plans to train AI systems using human feedback, employ AI systems to assist in human evaluation, and ultimately use AI systems to conduct alignment research themselves.
However, AI safety advocate Connor Leahy has raised concerns about flaws in this plan. Leahy argues that unless alignment is solved before human-level AI is developed, the initial AI could run amok before it can be directed to solve AI safety problems. He states, "You have to solve alignment before you build human-level intelligence, otherwise by default you won't control it." Leahy is skeptical that OpenAI's approach will prove effective or safe.
The potential risks associated with AI have garnered significant attention from AI researchers and the general public alike. In April, a group of industry leaders and experts in AI signed an open letter calling for a six-month pause in the development of systems surpassing the power of OpenAI's GPT-4 due to potential societal risks. A Reuters/Ipsos poll in May revealed that over two-thirds of Americans express concerns about the potential negative effects of AI, with 61 percent believing it could pose a threat to civilization.