Politics

AI whistleblowers warn of dangers and call for transparency




A group describing itself as “current and former employees of frontier AI [artificial intelligence] companies” called for more transparency and stronger whistleblower protections in the sector in a letter Tuesday.

“Until there is effective government oversight of these companies, current and former employees will be among the few people who can hold them accountable to the public,” the letter says. “However, broad confidentiality agreements prevent us from expressing our concerns except to the companies themselves who may not be addressing these issues.”

The letter is signed by self-described current and former employees of companies including OpenAI, Google’s DeepMind and Anthropic, with six of them choosing to remain anonymous.

“Common whistleblower protections are insufficient because they focus on illegal activities, while many of the risks we are concerned about are still unregulated,” the letter continues. “Some of us reasonably fear various forms of retaliation, given the history of such cases across the industry. We are not the first to encounter or talk about these issues.”

The letter specifically requests that “advanced AI companies” follow “principles” including not entering into “any agreement that prohibits ‘disparagement’ or criticism of the company for risk-related concerns” and supporting “a culture of critical openness.”

“We are hopeful that these risks can be adequately mitigated with sufficient guidance from the scientific community, policymakers and the public,” the letter continues. “However, AI companies have strong financial incentives to avoid effective oversight, and we do not believe that customized corporate governance structures are sufficient to change this situation.”

OpenAI announced last week that it was establishing a security panel to make recommendations to its board on “critical safety and security decisions,” led by its CEO Sam Altman alongside directors Adam D’Angelo, Nicole Seligman and Bret Taylor.

OpenAI is “proud of our track record of delivering the most capable and safest AI systems and believes in our scientific approach to addressing risk,” a company spokesperson told The Hill in an emailed statement.

“We agree that rigorous debate is crucial given the importance of this technology and we will continue to collaborate with governments, civil society and other communities around the world,” the spokesperson continued.



This story originally appeared on thehill.com.

