
Employees say OpenAI and Google DeepMind are hiding dangers



A group of current and former employees at leading AI companies OpenAI and Google DeepMind published a letter on Tuesday warning of the dangers of advanced AI and claiming that the companies are prioritizing financial gain while avoiding oversight.

Thirteen employees, eleven of whom are current or former employees of OpenAI, the company behind ChatGPT, signed the letter, titled “A right to warn about advanced artificial intelligence.” The two other signatories are current or former Google DeepMind employees. Six of the signatories are anonymous.

The coalition warns that AI systems are powerful enough to cause serious harm without adequate regulation. “These risks range from the deepening of existing inequalities, to manipulation and disinformation, to the loss of control of autonomous AI systems, potentially resulting in human extinction,” says the letter.

Read more: Exclusive: US must act “decisively” to prevent “extinction-level” threats from AI, says government-commissioned report

“We are proud of our track record of delivering the most capable and safest AI systems and believe in our scientific approach to addressing risk,” OpenAI spokesperson Lindsey Held told the New York Times. “We agree that rigorous debate is crucial given the importance of this technology, and we will continue to collaborate with governments, civil society and other communities around the world.”

Google DeepMind has not commented publicly on the letter and did not respond to TIME’s request for comment.

The leaders of the three major AI companies – OpenAI, Google DeepMind and Anthropic – have spoken about the risks in the past. “If we build an AI system that is significantly more competent than human experts, but that pursues goals that conflict with our best interests, the consequences could be dire… rapid progress in AI would be very disruptive, altering employment, macroeconomics and power structures… [we have already encountered] toxicity, bias, unreliability, dishonesty,” AI research and safety company Anthropic said in a March 2023 statement, which is linked in the letter. (One of the letter’s signatories who currently works at Google DeepMind previously worked at Anthropic.)

Read more: Inside Anthropic, the AI company betting that safety can be a winning strategy

The group behind the letter claims that AI companies have information about the risks of the AI technology they are working on, but because they are not obligated to reveal much to governments, the real capabilities of their systems remain secret. That means current and former employees are the only ones who can hold the companies accountable to the public, they say, and yet many have their hands tied by confidentiality agreements that prevent them from voicing their concerns publicly. “Ordinary whistleblower protections are insufficient because they focus on illegal activity, whereas many of the risks we are concerned about are not yet regulated,” the group wrote.

“Employees are an important line of safety defense, and if they can’t speak freely without retribution, that channel’s going to be closed,” Lawrence Lessig, a pro bono lawyer for the group, told the New York Times.

The letter writers made four demands of advanced AI companies: stop forcing employees into agreements that prevent them from criticizing their employers over “risk-related concerns”; create an anonymous process for employees to raise concerns with board members, regulators, and other relevant organizations; support a “culture of open criticism”; and not retaliate against current and former employees who share “sensitive risk-related information after other processes have failed.”

Governments around the world have taken steps to regulate AI, although progress lags behind the speed at which the technology is advancing. Earlier this year, the EU passed the world’s first comprehensive legislation on AI. International cooperation efforts have continued through AI Safety Summits in the United Kingdom and South Korea, and at the UN. In October 2023, President Joe Biden signed an AI executive order that, among other things, requires AI companies to disclose their development and safety-testing plans to the Department of Commerce. However, these disclosures are not required to be made public, potentially preventing the broader oversight the letter’s signatories are calling for.

-With additional reporting by Will Henshall/Washington

This story originally appeared on Time.com.
