
Celebrities are most at risk of AI ‘voice cloning’, experts say – tips to avoid falling for ‘vishing’ scams



CELEBRITIES are at greater risk of falling victim to impersonation attempts, thanks to the growing popularity of AI voice cloning software.

However, a lack of A-list status is not enough to protect the average person.

A survey of 1,000 Americans ranked public figures most at risk of AI voice cloning — and Gen Z respondents had a strong opinion of Donald Trump


Cybercriminals will find audio clips online and insert them into commercially available software to produce words and even complete sentences in someone’s voice.

This process is known as voice cloning, and the result is commonly called an audio deepfake.

The term “deepfake” was coined in 2017 to describe illicit images and videos that featured celebrities’ faces superimposed on other bodies.

And it looks like the rich and famous are in danger once again.

One new study from Podcastle, an AI-powered podcasting platform, surveyed 1,000 Americans for their opinions on the celebrities most at risk of voice cloning.

Respondents believed that Arnold Schwarzenegger was most at risk because he has the most distinctive voice.

A staggering 86% of respondents believe the former California governor’s “distinctive and instantly recognizable accent” puts him at risk.

Schwarzenegger was followed by Donald Trump, Kim Kardashian, Sylvester Stallone and Christopher Walken.

Nearly one in four (23%) reported that Kardashian has a “consistent pitch and tone,” which makes her voice easy to replicate.

Meanwhile, 39% said Trump’s voice is easy to replicate due to its familiarity from frequent media appearances.


Gen Z respondents considered Trump to be at greatest risk, an opinion likely shaped by his outsized presence in the media landscape.

Celebrities and politicians have emerged as the most common victims of deepfakes on social media.

A wave of manipulated images on X, formerly Twitter, led the platform to temporarily ban searches for Taylor Swift’s name in January.

And last week, Elon Musk posted a deepfake video of Democratic presidential candidate Kamala Harris on X.

Former California governor Arnold Schwarzenegger came in first place, with survey participants crediting his “distinctive and instantly recognizable accent.” Credit: Getty

Deepfakes are not a new phenomenon. The US Department of Homeland Security acknowledged as much in a 2019 report, noting that the risk came not from the technology “but from people’s natural inclination to believe what they see.”

As a result, the report continued, “deepfakes and synthetic media do not need to be particularly advanced or believable to be effective in spreading misinformation/disinformation.”

Although study respondents were not asked to comment on the potential misuse of AI voice cloning technology, company leaders expressed apprehension.

Podcastle CEO Artavazd Yeritsyan told The US Sun that he was well aware of the use of AI voice cloning technology by malicious actors.

Audio deepfakes are commonly used to portray celebrities saying something they didn’t say, but the public is at risk for a different reason.

“Whatever technology you present, there will always be people who use it for bad things and people who use it for good things,” Yeritsyan said.

Users can record and edit audio without leaving the Podcastle platform. This includes using AI to generate words or phrases that have not been recorded.

Yeritsyan says the platform’s goal is to “automate” the production process, rather than “replace a human being.”

The platform also has checks in place to prevent the creation of audio deepfakes.

A user must record specific phrases to confirm that a real person is speaking, rather than a cybercriminal inserting clips of another person’s voice into the system.

“Then this content is securely stored and encrypted so that no one else can access your voice,” explained Yeritsyan.

CEO Artavazd Yeritsyan believes voice cloning technology can play a role in accessibility and translation despite the dangers.

While they’re optimistic about possible future applications like text-to-speech accessibility functions, Podcastle’s top reps are clearly aware of the risks.

“I think the biggest threats are phishing motives, where a criminal asks for bank account information using the voice of a relative or friend,” Yeritsyan said, describing a phenomenon known as voice phishing.

All a cybercriminal needs is a few seconds of audio – commonly found on social media – to create a deepfake, which is then weaponized to trick unsuspecting victims into handing over their personal information over the phone.

Cybersecurity experts refer to the phenomenon as “voice phishing” or “vishing.”

Cybercriminals rely on AI voice cloning technology to target people in “voice phishing” attacks, posing as the victim’s friends or relatives to gain their trust. Credit: Getty

Successful defense against this emerging form of cyberattack begins with understanding the signs of fraud.

Criminals often ask their victims to act urgently to correct fraudulent charges or confirm personal information. A forceful approach should raise red flags.

Always be cautious, as caller ID alone is not enough to verify a caller’s identity.

Security experts recommend hanging up and calling the organization or individual directly if you receive a call that you suspect is fraudulent.

As a general tip, avoid giving out sensitive details such as passwords, credit card numbers or bank account information over the phone.

How are scammers finding my number?

Here, Mackenzie Tatananni, science and technology reporter for The US Sun, explains how a scammer can get your information.

Scammers often obtain phone numbers through data breaches, which occur when a hacker accesses a private database – usually those maintained by companies such as contractors and employers.

This information can be shared and disseminated online, including on the dark web, where there are forums dedicated to sharing leaked information.

Another common technique called wardialing employs an automated system that targets specific area codes.

A recorded message will instruct the listener to enter sensitive information such as a card number and PIN.

There’s also a much more distressing possibility: your phone number could be listed online without your knowledge.

Data brokers are eager to buy and sell your information. These companies collect information from a variety of public online sources, including social media and public records.

Their main purpose is to build databases of people and use this information for personalized advertising and marketing.

Much of this information ends up on public records websites, which display information like phone number, email, home address and date of birth for anyone to see.

In the United States, these sites are legally required to remove your information if you ask them to.

Locate your profile and follow the opt-out instructions, but be warned: these sites don’t make things easy and aim to frustrate you as you complete the deregistration process.

To simplify the process, you can also use a tool that scrubs your information from the internet.

Norton offers one of these services. Called Privacy Monitor Assistant, the tool finds information online and requests removal on your behalf.

It’s also possible that your phone number is linked to a social media account and is publicly displayed on your profile – this happens quite often on Facebook.

Be sure to review your privacy settings and confirm that this information is hidden from prying eyes.

Podcastle representatives predict that voice cloning technology will increasingly be used to boost productivity and automate tedious processes.

However, they understand that much of the responsibility for stopping bad actors falls on them.

“We want to get to a stage where we just don’t give people the ability to use it for bad reasons,” Yeritsyan explained.

“I think most products should be regulated so that this kind of thing doesn’t happen.”



This story originally appeared on The-sun.com, where you can read the full story.

