Exclusive: Experts Pen Support for California AI Safety Bill

On August 7, a group of renowned professors co-authored a letter urging key lawmakers to support a California AI bill as it enters the final stages of the state’s legislative process. In a letter shared exclusively with TIME, Yoshua Bengio, Geoffrey Hinton, Lawrence Lessig and Stuart Russell argue that the next generation of AI systems pose “severe risks” if “developed without sufficient care and oversight,” and describe the bill as “the minimum necessary for effective regulation of this technology.”

The bill, titled the Safe and Secure Innovation for Frontier Artificial Intelligence Models Act, was introduced by Senator Scott Wiener in February of this year. It requires AI companies training large-scale models to perform rigorous safety testing for potentially dangerous capabilities and to implement comprehensive safety measures to mitigate risks.

“There are fewer regulations on AI systems that could pose catastrophic risks than on cafeterias or hairdressers,” the four experts write.

The letter is addressed to the respective leaders of the legislative bodies through which the bill must pass to become law: Mike McGuire, president pro tempore of the California Senate, where the bill passed in May; Robert Rivas, speaker of the state Assembly, where the bill will be voted on later this month; and the state governor, Gavin Newsom, who, if the bill passes the Assembly, is expected to sign or veto the proposed legislation by the end of September.

With Congress gridlocked and Republicans pledging to roll back Biden’s AI executive order if elected in November, California—the world’s fifth-largest economy and home to many of the world’s top AI developers—plays what the authors consider an “indispensable role” in regulating AI. If approved, the bill would apply to all companies operating in the state.

While polling suggests the bill is supported by a majority of Californians, it has been subject to stiff opposition from industry groups and technology investors, who claim it would stifle innovation, harm the open-source community, and “let China take the lead” in developing AI. Venture capital firm Andreessen Horowitz has been particularly critical of the bill, creating a website that urges citizens to write to legislators in opposition. Others, such as startup incubator Y Combinator, Meta Chief AI Scientist Yann LeCun, and Stanford professor Fei-Fei Li (whose new $1 billion startup received funding from Andreessen Horowitz), have also spoken out in opposition.

Resistance has centered on provisions in the bill that would oblige developers to provide reasonable assurances that an AI model does not pose an unreasonable risk of causing “critical harm,” such as helping to create weapons of mass destruction or causing serious damage to critical infrastructure. The bill would only apply to systems that cost more than $100 million to train and are trained using an amount of computing power above a specified threshold. These dual requirements mean the bill would likely only affect the largest AI developers. “No currently existing system would be classified,” Lennard Heim, a researcher at the RAND Corporation’s Center for Technology and Security Policy, told TIME in June.

“As some of the experts who most understand these systems, we can confidently say that these risks are likely and significant enough to make security testing and common-sense precautions necessary,” the letter’s authors write. Bengio and Hinton, who previously supported the bill, are both Turing Award winners and, alongside Yann LeCun, are often called the “godfathers of AI.” Russell wrote Artificial Intelligence: A Modern Approach, widely considered the standard textbook on AI. And Lessig, a Harvard Law professor, is widely considered a founding figure of Internet law and a pioneer of the free culture movement, having founded Creative Commons and written influential books on copyright and technology law. In addition to the risks mentioned above, they cite among their concerns the risks posed by autonomous AI agents that could act without human supervision.

“I worry that technology companies will not address these significant risks on their own while they are locked in their race for market share and maximizing profits. That’s why we need some rules for those on the frontier of this race,” Bengio told TIME in an email.

The letter rejects the notion that the bill would harm innovation, stating that, as written, the bill applies only to the largest AI models; that major AI developers have already made voluntary commitments to carry out many of the safety measures outlined in the bill; and that similar regulations in Europe and China are in fact more restrictive than SB 1047. It also praises the bill for its “robust whistleblower protections” for AI lab employees who report safety concerns, protections increasingly viewed as necessary given reports of reckless behavior on the part of some labs.

In an interview with Vox last month, Senator Wiener noted that the bill has already been amended in response to criticism from the open-source community. The current version exempts original developers from shutdown requirements when a model is no longer under their control and limits their liability when others make significant modifications to their models, effectively treating significantly modified versions as new models. Despite this, some critics believe the bill would still require open-source models to have a “kill switch.”

“Relative to the scale of risks we face, this is a remarkably light-touch piece of legislation,” the letter says, noting that the bill has no licensing regime, does not require companies to receive permission from a government agency before training a model, and relies on self-assessments of risk. The authors further write: “It would be a historic error to eliminate the basic measures of this bill.”

Via email, Lessig adds: “Governor Newsom will have the opportunity to establish California as a national pioneer in AI regulation. Legislation in California would meet an urgent need. With a critical mass of leading AI companies headquartered in California, there is no better place to take the lead in regulating this emerging technology.”





This story originally appeared on Time.com.
