Scarlett Johansson has gone to war with OpenAI, and in the battle for public opinion, OpenAI is losing — big time.
Last week, OpenAI released an update to its AI chatbot, ChatGPT-4o, which featured a female voice talking to its users. Many people pointed out that the voice, which at times sounded flirtatious, was eerily similar to Scarlett Johansson’s in the 2013 dystopian sci-fi film Her. (Johansson voices a chatbot that falls in love with the film’s protagonist.) Sam Altman, OpenAI’s longtime CEO, has talked about how much the film inspired the company’s products, and he made the connection explicit last week by tweeting the movie’s title.
But on Monday, Johansson released a statement saying that OpenAI had asked her to be the voice of the chatbot, and that when she declined, the company found a similar-sounding voice anyway. Johansson said she was “shocked, angry and in disbelief” at the turn of events. OpenAI claimed that the voice was not based on hers and had been recorded by a different actor — but it still pulled the voice from the chatbot.
Read more: Scarlett Johansson ‘annoyed’ by ChatGPT voice that sounded ‘strangely’ like her
The social media backlash against Altman was intense, with users accusing him of acting unethically.
This is far from the first significant battle fought against OpenAI, though Johansson’s may be the most high-profile. The company has a history of cutting corners on permissions and copyrights and then dealing with the consequences. While this approach has helped OpenAI grow quickly, it has also drawn intense criticism.
If this is true, it makes OpenAI – and Sam Altman – look highly unethical.
Scarlett Johansson claims she was offered a deal to lend her voice to ChatGPT-4o and declined, but OpenAI proceeded without her consent anyway.
Sam Altman made it clear in a tweet that it was intentional. https://t.co/XM6hJselxR
-Gergely Orosz (@GergelyOrosz) May 21, 2024
Copyright-centered lawsuits
The question of whether artificial intelligence companies should be able to train their models on copyrighted material has been one of the most contentious battlegrounds of the industry’s growth. OpenAI has not denied that it does so: it told the UK House of Lords that “it would be impossible to train today’s leading AI models without using copyrighted materials.”
But many creators have fought back in court to protect their work and images. Sarah Silverman accused the company of stealing her work by training its models on her memoir The Bedwetter. George R.R. Martin and John Grisham joined a similar lawsuit accusing the company of “large-scale systematic theft.” And The New York Times filed its own lawsuit.
Johansson’s case is a little different, because the company didn’t train its model on her voice: it simply hired an actress who sounded like her. This type of dispute existed long before AI: singer Tom Waits, for example, was awarded US$2.5 million in damages after suing Frito-Lay in 1988, claiming the company had hired a singer to imitate his distinctive raspy voice in a Doritos commercial. But OpenAI’s use of a Johansson-like voice fits a broader pattern of the company pushing boundaries to strengthen its products.
OpenAI using a synthetic version of ScarJo’s voice would be a very telling (and very meta) example of how it considers likeness and IP in AI content.
If this is true for a celebrity with a recognizable voice and plenty of legal resources, how does that factor into the likeness of non-famous people? https://t.co/PvLjHaTxMo
-Marty Swant (@martyswant) May 21, 2024
Personal accusations against Sam Altman
Critics of OpenAI have also argued that the Johansson dispute fits a broader story of Altman acting dishonestly to get what he wants. Last year, sources told TIME that Altman had a history of being duplicitous. In October, OpenAI CTO Mira Murati accused him of manipulating executives to get what he wanted, and co-founder and chief scientist Ilya Sutskever compiled a list of 20 instances over the years in which he believed Altman had misled OpenAI executives. Their concerns led the company’s board to briefly oust Altman, but he quickly returned after an outpouring of support from within and outside OpenAI.
Since then, reports have surfaced of employees questioning Altman’s leadership style and accusing him of acting in psychologically abusive ways. And last week, Sutskever and executive Jan Leike left the company, with Leike tweeting that “over the past years, safety culture and processes have taken a backseat to shiny products.”
Encouraging regulation?
While much of the OpenAI drama has been confined to Silicon Valley circles, the outcry that followed Johansson’s statement shows that the public appetite for regulating AI companies is high. A Pew Research Center study last year found that 67% of those familiar with chatbots like ChatGPT were concerned that the government would not go far enough in regulating their use. In March, Tennessee became the first state to pass legislation combating unauthorized artificial-intelligence impersonation.
In her statement, Johansson called for “the passage of appropriate legislation to help ensure that individual rights are protected.” Hollywood union SAG-AFTRA is promoting the No AI Fraud Act, a bipartisan bill introduced in January that would restrict the creation of digital likenesses without consent. Hawaii Senator Brian Schatz responded to the incident on Twitter:
It’s alarming that an AI company appears to have gone ahead and lifted the voice of a real person without permission or compensation. The impunity is even more worrying for artists who are not yet well known. The right to one’s own image and voice must be protected.
-Brian Schatz (@brianschatz) May 20, 2024
This story originally appeared on Time.com.