Experts warn UK politicians about the risk of audio deepfakes that could disrupt the general election | Politics News



As AI deepfakes wreak havoc during other elections, experts warn that UK politicians must be prepared.

“Just tell me what you had for breakfast,” says Mike Narouei of ControlAI, recording on his laptop. I talk for about 15 seconds, about my toast, coffee and trip to their offices.

Within seconds, I hear my own voice saying something entirely different.

In this case, the words I wrote: “Deepfakes can be extremely realistic and have the ability to disrupt our politics and undermine our trust in the democratic process.”

Image:
Tamara Cohen's voice being turned into a deepfake

We used free software, it required no advanced technical skills, and the whole process took next to no time.

This is an audio deepfake – video fakes take more effort to produce – and, beyond their use by scammers of all kinds, there is deep concern about their impact on elections in a year in which around two billion people go to the polls, in the US, India and dozens of other countries, including the UK.

Sir Keir Starmer was the victim of one at last year's Labour Party conference, with fake audio purporting to show him swearing at staff. It was quickly debunked as false, but the identity of whoever made it was never discovered.

London Mayor Sadiq Khan was also targeted this year, with fake audio of him making inflammatory remarks about Remembrance weekend and calling for pro-Palestine marches going viral at a tense time for communities. He has said new laws are needed to stop them.

Ciaran Martin, former director of the UK's National Cyber Security Centre (NCSC), told Sky News that expensively produced video fakes can be less effective and easier to debunk than audio ones.

“I’m particularly concerned now about audio, because audio deepfakes are spectacularly easy to make, disturbingly easy,” he said. “And if they are deployed intelligently, they can have an impact.”

The most harmful so far, in his view, was a deepfake audio of President Biden sent to voters during the New Hampshire primary in January this year.

A “robocall” with the president’s voice told voters to stay home and “save” their votes for the November presidential election. A political consultant later took responsibility and was charged and fined $6 million (£4.7 million).

Read more:
The digital election in India
Time is running out for regulators to tackle the threat of AI
Biden to Unveil Comprehensive AI Regulations

Image:
Ciaran Martin, the former director of the NCSC

Martin, now a professor at Oxford University’s Blavatnik School of Government, said: “It was a very credible imitation of his voice and anecdotal evidence suggests that some people were fooled by it.

“Also because it wasn’t an email that they could forward to someone else to look at, or on TV where many people were watching. It was a call to their home that they more or less had to judge on their own.

“Targeted audio in particular is probably the biggest threat right now, and there is no blanket solution, there is no button that you can just press and solve this problem if you are prepared to pay for it or pass the right laws.

“What you need, and the US did this very well in 2020, is a series of responsible, knowledgeable eyes and ears in different parts of the electoral system to limit and mitigate the damage.”

He says there is a risk of exaggerating the threat of deepfakes when they have not yet caused mass electoral damage.

A fake Ukrainian TV broadcast made in Russia, he said, showing a Ukrainian official claiming responsibility for a terrorist attack in Moscow, was simply “not believed”, despite being expensively produced.

The UK government has passed the National Security Act, which creates new offences of foreign interference in the country's democratic processes.

The Online Safety Act requires technology companies to take down such content, and meetings are regularly held with social media companies during the pre-election period.

Democracy advocates are concerned that deepfakes could be used not only by hostile foreign actors or lone individuals looking to disrupt the process, but also by political parties themselves.

Polly Curtis is chief executive of the think tank Demos, which has called on the parties to agree a set of guidelines for the use of AI in elections.

Image:
Polly Curtis, chief executive of Demos

She said: “The risk is that there are foreign actors, political parties, ordinary people on the streets creating content and just stirring the pot of what is true and what is not.

"We want them to come together and agree how they will use these tools during the election. We want them to agree not to create deepfake content or amplify it, and to label it when it is used.

“This technology is so new, and there are so many elections going on, that there could be a major disinformation event in an election campaign that starts to affect people’s confidence in the information they have.”

Deepfakes have already played a part in major elections.

Last year, just hours before polls closed in Slovakia's parliamentary elections, fake audio of one of the candidates claiming to have rigged the vote went viral. He was heavily defeated and his pro-Russian opponent won.

The UK government created a Joint Election Security Preparedness Unit earlier this year – with Whitehall officials working with police and security agencies – to respond to threats as they emerge.

A UK government spokesperson said: “Security is paramount and we are well prepared to ensure the integrity of elections with robust systems in place to protect against any potential interference.

"The National Security Act contains tools to combat false election threats, and social media platforms should also take proactive action against state-sponsored content that seeks to interfere in elections."

A Labour spokesperson said: "Our democracy is strong, and we cannot and will not allow any attempts to undermine the integrity of our elections.

"However, the rapid pace of AI development means the government must stay one step ahead of malign actors who want to use deepfakes and disinformation to undermine trust in our democratic system.

"Labour will be relentless in combating these threats."



This story originally appeared on News.sky.com, where the full story can be read.
