
AI Expert Warns That Deepfakes Are Now the “Biggest Evolving Threat” and Reveals Two Ways They Are Being Used Against You

DEEPFAKES are now the “biggest evolving threat” when it comes to cybercrime.

That’s what a leading cyber expert told The US Sun, in a stark warning about the dangers of face-spoofing technology.

Convincing deepfakes can be created very quickly – and require less and less technical knowledge. Credit: Getty

Deepfakes are fraudulent videos that appear to show a person doing (and possibly saying) things they have never done.

The technology uses artificial intelligence software to clone the characteristics of a person – and map them onto something else.

Of course, AI is being used for many sinister purposes – including making scams generally quicker to create and execute – but deepfakes are one of the most serious threats.

The US Sun spoke with Adam Pilton, a UK-based cybersecurity consultant at CyberSmart and former detective sergeant who investigated cybercrime, about the threats we face.

“AI can generate highly convincing phishing emails with ease and that means unskilled cybercriminals are making money while the sun shines,” Adam told us.

“The National Cyber Security Centre warned us in its latest annual report that cybercriminals are already using AI to develop increasingly sophisticated phishing and scam emails.

“The threat will continue to grow as technology develops and the skills of those involved increase.”

“Without a doubt, the biggest evolving threat is deepfakes,” he continued.

“Deepfake technology can create realistic impersonations of people in video and audio.”

There are two main ways criminals use deepfakes, Adam explained.

SCAM SCHEMES

The first sinister use of deepfakes is to trick you into making some kind of security mistake.

This could be as simple as a criminal using a deepfake to pretend to be a loved one – and convincing the victim to hand over money.

Deepfakes – what are they and how do they work?

Here’s what you need to know…

  • Deepfakes are fake videos of people that look perfectly real
  • They are made using computers to generate convincing representations of events that never happened
  • Often this involves swapping one person’s face with another’s or making them say whatever you want.
  • The process begins by feeding an AI with hundreds or even thousands of photos of the victim
  • A machine learning algorithm swaps certain parts frame by frame until it generates a realistic but fake photo or video – a stripped-down sketch of this step appears after this list
  • In a famous deepfake clip, comedian Jordan Peele created a realistic video of Barack Obama in which the former president called Donald Trump an “imbecile.”
  • In another, Will Smith’s face is pasted onto the character Neo in the action film The Matrix. Smith turned down the role to star in the failed film Wild Wild West, while the Matrix role went to Keanu Reeves
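
For readers curious about the mechanics, the face-mapping step described in the list above can be illustrated with a deliberately crude sketch: detect a face in one picture and paste it over the face in another. This is a toy demonstration only – real deepfake tools train neural networks on thousands of frames – and it assumes Python with OpenCV installed; the detector choice and file names here are placeholder assumptions, not part of any real tool.

```python
# Toy illustration of the "map one face onto another" idea - NOT a real
# deepfake pipeline, which trains neural networks on thousands of frames.
# Assumes OpenCV is installed (pip install opencv-python); file names are placeholders.
import cv2

# Haar-cascade face detector that ships with OpenCV
detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
)

def first_face(img):
    """Return (x, y, w, h) of the largest detected face, or None."""
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    faces = detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    if len(faces) == 0:
        return None
    return max(faces, key=lambda f: f[2] * f[3])

source = cv2.imread("source_face.jpg")   # face to copy (placeholder path)
target = cv2.imread("target_scene.jpg")  # image to paste it into (placeholder path)

src_box, dst_box = first_face(source), first_face(target)
if src_box is not None and dst_box is not None:
    sx, sy, sw, sh = src_box
    dx, dy, dw, dh = dst_box
    # Resize the copied face to fit the target face region
    face = cv2.resize(source[sy:sy + sh, sx:sx + sw], (dw, dh))
    # Blend it into the target region so the edges are slightly less obvious
    target[dy:dy + dh, dx:dx + dw] = cv2.addWeighted(
        target[dy:dy + dh, dx:dx + dw], 0.3, face, 0.7, 0
    )
    cv2.imwrite("swapped.jpg", target)
```

Even this cut-and-paste version produces an obviously fake result; the point is simply to show why the real thing – which learns a person’s expressions frame by frame – is so much harder to spot.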

Or they may impersonate a colleague or even your boss to get money or information from you.

“Scammers use deepfakes to create fake, convincing videos or audio messages to manipulate victims into taking actions they normally wouldn’t take, and deepfake scams are already being used successfully in social engineering attacks,” Adam told us.

“Earlier this year, we saw cybercriminals use CFO impersonation to trick an employee into transferring $25 million to them.

“This was initiated by a phishing email, which the employee was skeptical about.

“However, when the employee participated in a virtual meeting and saw and heard the CFO as well as other people he recognized, all suspicions disappeared.

“We are also seeing phone calls used to provoke highly emotional responses, in which our loved ones appear to speak to us, claiming they have been kidnapped.

“In late 2023, there was an apparent increase in reports of such calls in the US.

“It wasn’t just the familiar voices that created the emotional response.

“The voices of the alleged attackers could also be heard, aggressively instructing the victim’s loved one to do as they were told before the kidnapper spoke directly to the victim and made the ransom demand.”

DEFENSE AGAINST DEEPFAKES

Here’s what Sean Keach, head of technology and science at The Sun and The US Sun, has to say…

The rise of deepfakes is one of the most worrying trends in online security.

Deepfake technology can create videos of you even from a single photo – so almost no one is safe.

But as bleak as it seems, the rapid rise of deepfakes does have some upsides.

For starters, there is much greater awareness about deepfakes now.

That means people are more likely to look for signs that a video has been faked.

Likewise, technology companies are investing time and money in software that can detect fake AI content.

This means social media will be able to flag false content for you with greater confidence – and more frequently.

As the quality of deepfakes increases, you will likely have difficulty spotting visual errors – especially a few years from now.

So your best defense is your common sense: apply thorough scrutiny to everything you watch online.

Ask yourself whether the video is something someone would have a reason to fake – and who benefits from you seeing that clip.

If you’ve been told something alarming, a person is saying something that seems strange, or you’re being led into a hasty action, there’s a chance you’re watching a fraudulent clip.

BAD NEWS

The second way deepfakes are being used for nefarious purposes is to spread fake news.

This is particularly worrying as voters head to the polls for the upcoming elections in the US and UK.

“The World Economic Forum has ranked misinformation and disinformation as the biggest global risk over the next two years,” Adam told The US Sun.

“With a series of elections approaching for democracies around the world, it is easy to understand why.

“We continue to see AI’s ability to generate fake news articles, social media posts, and other content that spreads misinformation.”



This story originally appeared on The-sun.com.
