Regulators Calculate AI Power to Determine Safety Measures
- Regulators assess AI power to determine safety measures.
- Computational capacity is key in setting regulatory thresholds.
- Debate continues over the effectiveness of metrics in gauging AI risks.
The focus is on the computational capacity of AI models, with the key figure being 10 to the 26th power floating-point operations used to train a model. That metric, equal to 100 septillion calculations in total (not per second), is a deciding factor in triggering regulatory requirements in the United States and California.
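For a sense of scale, here is a minimal Python sketch of how such a threshold could be checked on paper. It relies on the widely cited rule of thumb that training compute is roughly 6 × (parameter count) × (training tokens); the model size and token count below are hypothetical illustrations, not figures from the article.

    # Back-of-the-envelope estimate of training compute, assuming the
    # common heuristic: total flops ~ 6 * parameters * training tokens.
    THRESHOLD_FLOPS = 1e26  # 10 to the 26th operations, the US/California trigger

    def training_flops(parameters: float, tokens: float) -> float:
        """Approximate total floating-point operations for one training run."""
        return 6.0 * parameters * tokens

    # Hypothetical model: 1 trillion parameters trained on 20 trillion tokens.
    estimate = training_flops(1e12, 20e12)
    print(f"Estimated training compute: {estimate:.2e} flops")   # 1.20e+26
    print("Above the 10^26 threshold?", estimate > THRESHOLD_FLOPS)  # True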
Lawmakers and AI safety advocates worry that AI systems trained with that much computing power could enable the creation of weapons of mass destruction or the launch of catastrophic cyberattacks. The thresholds, while admittedly imperfect, are an attempt to distinguish today's highest-performing AI systems from a potentially more powerful next generation, largely being built by companies such as Anthropic, Google, Meta Platforms, and OpenAI.
Critics see the thresholds as arbitrary attempts by governments to regulate math. The debate has intensified as regulations such as California's proposed AI safety legislation set criteria based on both computational power and development cost, and as President Biden's executive order and the EU's AI Act use computational capability to decide which systems need regulatory safeguards.
No publicly available AI models yet meet the higher California threshold, though some companies have likely begun to build them. If so, they are expected to share certain details and safety precautions with the US government. Biden's executive order invoked a Korean War-era law, the Defense Production Act, to require technology companies to notify the US Commerce Department if they are building such AI models.
AI researchers still debate how best to evaluate the capabilities of the latest generative AI technology and how it compares to human intelligence. Tests exist that measure an AI's ability to solve puzzles, reason logically, and predict which words will come next in response to a user's chatbot query. These measurements help gauge an AI tool's usefulness for a specific task, but there is no straightforward way to determine when a system is so capable that it poses a danger to humanity.
Physicist Anthony Aguirre, executive director of the Future of Life Institute, defended the use of floating-point operations in evaluating AI models. Despite sounding fancy, he explained, a floating-point operation is simply the addition or multiplication of numbers, making it a simple way to assess an AI model's capability and risk.
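To make Aguirre's point concrete, the short sketch below (an illustration, not from the article) tallies the individual multiplications and additions in a small dot product; each one counts as a single floating-point operation, and a flops figure is simply that tally at enormous scale.

    # Each multiply and each add is one floating-point operation (flop).
    def dot_product_with_flop_count(a, b):
        total, flops = 0.0, 0
        for x, y in zip(a, b):
            product = x * y    # one multiplication
            total += product   # one addition
            flops += 2
        return total, flops

    value, flops = dot_product_with_flop_count([1.0, 2.0, 3.0], [4.0, 5.0, 6.0])
    print(value, flops)  # 32.0 and 6 flops for a length-3 dot product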
Computer scientist Sara Hooker, who leads AI company Cohere's nonprofit research division, criticised the use of compute thresholds as a proxy for risk, arguing that such metrics have "no clear scientific support". Venture capitalists Marc Andreessen and Ben Horowitz have also expressed concerns that AI regulations of this kind could stymie the growth of the AI startup industry.
State Senator Scott Wiener of San Francisco defended California's legislation, arguing that regulating only above 10 to the 26th flops exempts models that lack the ability to cause critical harm from safety testing requirements. Both Wiener and the Biden executive order treat the metric as temporary and subject to change in the future.
According to Yacine Jernite of Hugging Face, the flops metric, though well-intentioned at the outset, is becoming outdated as AI developers build more impactful models with less computing power. He proposed holding different models to different standards according to their societal impact, underscoring the need for regulations that can adapt as AI systems evolve.
Aguirre acknowledged that regulators need to stay adaptable, but criticised opposition to the flops threshold as an attempt to avoid regulating AI systems as they grow more capable. Amid rapid advances in AI technology, he argued, forgoing regulation entirely and hoping for the best is not an option.
Source: AP NEWS