News

OpenAI claims AI is “safe enough” as scandals raise concerns


Seattle:

OpenAI CEO Sam Altman has defended his company’s AI technology as safe for widespread use, as concerns grow about the potential risks and lack of adequate safeguards for ChatGPT-like AI systems.

Altman’s comments came at a Microsoft event in Seattle, where he spoke to developers as a new controversy erupted over an OpenAI AI voice that closely resembled that of actress Scarlett Johansson.

The CEO, who rose to global prominence after OpenAI launched ChatGPT in 2022, is also facing questions about the company’s AI security following the departure of the team responsible for mitigating long-term AI risks.

“My biggest piece of advice is that this is a special time and enjoy it,” Altman told the audience of developers looking to build new products using OpenAI technology.

“This is not the time to put off what you are planning to do or wait for the next thing,” he added.

OpenAI is a close partner of Microsoft and provides the core technology, primarily the GPT-4 large language model, for building AI tools.

Microsoft has joined the AI bandwagon, launching new products and encouraging users to embrace the generative capabilities of AI.

“We kind of take it for granted” that GPT-4, while “far from perfect… is generally considered robust and secure enough for a wide variety of uses,” Altman said.

Altman insisted that OpenAI has done “a lot of work” to ensure the security of its models.

“When you take a medicine, you want to know it will be safe, and with our models, you want to know they will reliably behave the way you want,” he added.

However, questions about OpenAI’s commitment to security resurfaced last week when the company disbanded its “superalignment” group, a team dedicated to mitigating the long-term dangers of AI.

In announcing his departure, team co-lead Jan Leike criticized OpenAI for prioritizing “shiny new products” over security in a series of posts on X (formerly Twitter).

“Over the past few months, my team has been sailing against the wind,” Leike said.

“These problems are very difficult to solve and I am concerned that we are not on the right path to get there.”

This controversy was quickly followed by a public statement from Johansson, who expressed outrage over the voice used by OpenAI’s ChatGPT that sounded similar to her voice in the 2013 film “Her.”

The voice in question, called “Sky,” was introduced last week at the launch of OpenAI’s more human-like GPT-4o model.

In a brief statement Tuesday, Altman apologized to Johansson but insisted the voice was not based on hers.

(Except the headline, this story has not been edited by NDTV staff and is published from a syndicated feed.)



This story originally appeared on NDTV.com.
