
Two former OpenAI employees on whistleblower protection



This could be an expensive interview for William Saunders. The former safety researcher resigned from OpenAI in February and, like many other departing employees, signed a non-disparagement agreement in order to retain the right to sell his equity in the company. Although he says OpenAI has since told him it does not intend to enforce the agreement, and has made similar commitments publicly, speaking out still carries risk. “By speaking to you, I may never be able to access vested equity worth millions of dollars,” he told TIME. “But I think it’s more important to have a public dialogue about what’s happening at these AGI companies.”

Others feel the same. On Tuesday, 13 current and former employees of OpenAI and Google DeepMind called for stronger whistleblower protections at companies developing advanced AI, amid fears that the powerful new technology could spiral dangerously out of control. In an open letter, they urged labs to agree to give employees a “right to warn” regulators, board members, and the public about their safety concerns.

The letter follows a series of high-profile departures from OpenAI, including its chief scientist Ilya Sutskever, who voted to fire CEO Sam Altman in November last year but ultimately left the company in the aftermath. Sutskever has not commented publicly on those events, and his reasons for leaving are unknown. Another senior safety researcher, Jan Leike, resigned in May, saying that OpenAI’s safety culture had taken a backseat to launching new products.

Read more: Employees Say OpenAI and Google DeepMind Are Hiding Dangers From the Public

They’re not the only ones to quit recently. Daniel Kokotajlo, one of the former employees behind the open letter, resigned in April, writing online that he had lost confidence that the lab would act responsibly if it created AGI, or artificial general intelligence: a speculative technology that all the major AI labs are trying to build, which could perform economically valuable tasks better than a human. Upon leaving, Kokotajlo refused to sign the non-disparagement agreement the company asks of departing employees, believing the decision would cost him millions of dollars in OpenAI equity. After Vox published a story on the non-disparagement provisions, OpenAI walked back the policy, saying it would not claw back equity from employees who criticized the company.

In a statement, OpenAI agreed that public debate around advanced AI is essential. “We are proud of our track record of delivering the most capable and safest AI systems, and we believe in our scientific approach to addressing risk,” OpenAI spokesperson Lindsey Held told the New York Times. “We agree that rigorous debate is crucial given the importance of this technology, and we will continue to collaborate with governments, civil society and other communities around the world.” OpenAI declined to provide TIME with further comment on the claims in this story; Google DeepMind has not publicly commented on the open letter and did not respond to TIME’s request for comment.

But in interviews with TIME, two former OpenAI employees (Kokotajlo, who worked on the company’s governance team, and Saunders, a researcher on its superalignment team) said that even beyond non-disparagement agreements, the broadly worded confidentiality agreements common at leading AI labs make it risky for employees to speak publicly about their concerns. Both said they expect the capabilities of AI systems to increase dramatically in the coming years, and that these changes will have fundamental repercussions for society. “The risks are incredibly high,” says Kokotajlo.

In regulated industries such as finance, whistleblowers enjoy U.S. government protections for reporting various violations of the law, and can even expect to receive a share of some successful fines. But because there are no specific laws governing advanced AI development, whistleblowers in the AI industry have no such protections, and may themselves be exposed to legal risk for violating non-disclosure or non-disparagement agreements. “Pre-existing whistleblower protections don’t apply here because this industry isn’t really regulated, so there aren’t rules about a lot of the potentially dangerous things companies could be doing,” says Kokotajlo.

“AGI labs are not accountable to anyone,” says Saunders. “Accountability requires that if an organization does something wrong, information about it can be shared. And right now, that’s not the case.”

When he joined OpenAI in 2021, Saunders says, he expected to find the company grappling with a difficult set of questions: “If there is a machine that can do the economically valuable work that you do, how much power do you really have in society? Can we have a democratic society if there is an alternative to working people?” But after the launch of ChatGPT in November 2022, OpenAI began to transform into a very different company. It was valued at tens of billions of dollars, with its executives racing to beat competitors. The issues Saunders expected OpenAI to tackle, he says, now appear to be “being put on hold and taking a backseat to launching the next shiny new product.”

In the open letter, Saunders, Kokotajlo and other current and former employees call on AI labs to stop asking employees to sign non-disparagement agreements; create a process for employees to raise concerns with board members, regulators, and watchdog groups; and promote a “culture of open criticism”. If a whistleblower goes public after unsuccessfully trying to raise concerns through these channels, the open letter says, AI companies should commit not to retaliate against them.

At least one current OpenAI employee criticized the open letter on social media, arguing that employees publicizing their safety fears would make it harder for labs to address highly sensitive issues. “If you want safety at OpenAI to work effectively, there needs to be a basic foundation of trust where everyone we work with has to know that we will keep their confidences,” wrote Joshua Achiam, a research scientist at OpenAI, in a post on X, the platform formerly known as Twitter. “This letter is a huge crack in that foundation.”

This line of argument, however, depends in part on the responsible behavior of OpenAI’s leadership, something that recent events have called into question. Saunders believes that Altman, the company’s CEO, is fundamentally resistant to accountability. “I think Sam Altman in particular is very uncomfortable with oversight and accountability,” he says. “I think it’s significant that for every group that could plausibly oversee him, including the board and the safety and security committee, Sam Altman feels the need to be personally present, and no one can say no to him.”

(After Altman was fired by OpenAI’s former board last November, the law firm WilmerHale conducted an investigation into the circumstances and found “that his conduct did not warrant removal.” OpenAI’s new board later expressed “full confidence” in Altman’s leadership of the company, and in March returned Altman to a seat on the board. “We have found Mr. Altman to be highly open on all relevant issues and consistently collegial with his management team,” Larry Summers and Bret Taylor, two new board members, wrote recently in the Economist.)

Read more: The Billion-Dollar Price of Building AI

For Saunders, accountability is crucial and cannot exist if AI companies act alone, without regulators and external institutions. “If you want an AGI developer who truly acts in the public interest and lives up to the ideal of building a safe and beneficial AGI, there must be systems of oversight and accountability that genuinely hold the organization to that ideal,” he says. “And so, in an ideal world, it wouldn’t matter who’s in the CEO’s chair. It’s very problematic if the world is trying to decide which of these AI company CEOs has the best moral character. This is not a great situation to be in.”

This story originally appeared on Time.com.
