
Flawless ‘Deepfakes’ You Can’t Detect and ‘Manipulator’ Robots That Trick You – AI’s Most Terrifying Discoveries Revealed



ARTIFICIAL intelligence systems have as much capacity to cause harm as they do good.

While many are excited about technology’s potential to improve productivity and make life easier, the risk is as great as the reward.


As artificial intelligence technology becomes increasingly widespread, experts are speaking openly about its potential risks. Credit: Getty

What happens when an AI tool is designed for one purpose and finds another, more sinister application?

What happens when technology is used to trick us, or, even worse, when the computer itself tries to trick us?

Here are just a few advances with big implications.

Voice Replication Software

Microsoft has developed an artificial intelligence tool that can imitate human speech with frightening accuracy.

The technology giant claims that VALL-E 2 is the first of its kind to achieve “human parity”, that is, quality equal to or comparable to a real voice.

And for this reason Microsoft refuses to share the system with the public.

“We currently have no plans to incorporate VALL-E 2 into a product or expand access to the public,” the company wrote on its website.

“This can lead to potential risks in misuse of the model, such as spoofing voice identification or impersonating a specific speaker.”

Just this week, developer DeepTrust AI released a tool aptly called TerifAI.

After listening to just one minute of human speech, the tool can imitate the speaker’s speaking style and clone their voice.


While TerifAI is a tongue-in-cheek social experiment, it’s a compelling example of why voice phishing is so dangerous.

This year alone, cybersecurity experts have noticed an increase in the use of AI tools by malicious actors, including tools that replicate speech.

And the extent of the problem goes beyond “vishing”, or voice phishing, in which scammers pretend to be relatives and friends on the phone.

Voice cloning systems have even been used in a capacity that impacts national security.

In January, for example, a robocall circulated using President Joe Biden’s voice urging Democrats not to vote in the New Hampshire primary.

The man behind the scheme was arrested and indicted on charges of voter suppression and impersonating a candidate.


Microsoft refused to share VALL-E 2, a text-to-speech system that imitates human voices, over fears of “potential misuse”. Credit: AFP

Deepfakes

In the same vein as systems that replicate voices, deepfakes portray a person doing or saying something they did not do.

But while Microsoft has big aspirations for VALL-E 2, saying the tool could find a place in education and accessibility features, deepfakes are intended to be deceptive.

The term “deepfake” was coined in 2017, originally describing photos manipulated with open-source face-swapping technology.

Users of a Reddit forum superimposed celebrities’ faces onto other people’s bodies to create exploitative pornography.


Deepfake technology depicts someone doing or saying something completely false and is only becoming more convincing thanks to developments in AI. Credit: Getty

And the emergence of dedicated AI tools has made creating deepfakes even more accessible.

A wave of manipulated images on X, formerly Twitter, led the platform to temporarily ban searches for Taylor Swift’s name in January.

The US Department of Homeland Security observed the emerging threat of convincing deepfakes as early as a 2019 report, writing that the danger comes “not from the technology used to create it, but from people’s natural inclination to believe what they see.”

As a result, the report continued, “deepfakes and synthetic media do not need to be particularly advanced or believable to be effective in spreading misinformation/disinformation.”

Just like voice replication software, deepfake technology has been used to influence politics.

Shortly after Russia’s invasion of Ukraine, in March 2022, a video began circulating on social media.

It depicted President Volodymyr Zelenskyy urging the military to lay down its arms and surrender.


A deepfake of President Volodymyr Zelenskyy showed him urging the Ukrainian military to lay down their arms and surrender to the Russian invaders. Credit: AFP

Zelenskyy’s office denied the video’s authenticity as soon as it began to gain traction.

Although Facebook users seemed unconvinced, pointing out the pixelation around his face and his inconsistent skin tone, the platform scrambled to take the video down.

Nathaniel Gleicher, Facebook’s head of security policy, said the video originated from “a compromised website.”

“We have reviewed and removed this video for violating our policy against misleading manipulated media and have notified our colleagues on other platforms,” Gleicher wrote in a statement.

The media outlet Ukraine 24 later said hackers had embedded the video on its website.

Regardless of how convincing it was, the video served as the first high-profile example of a deepfake being used in armed conflict, and it has dark implications as the technology continues to advance.


The US Department of Homeland Security noted that the danger of deepfake technology comes “from people’s natural inclination to believe what they see.” Credit: Getty

Deceptive AI

We have all heard the argument that AI is incapable of higher thought and simply learns from the information it receives.

But some experts argue that the technology can develop emergent behaviors, including learning to lie.

An article published on May 10 in the journal Patterns found evidence that AI systems can systematically acquire the skills of manipulation and deception.

A team of researchers from the Massachusetts Institute of Technology discovered that many systems are already capable of deceiving humans.

They analyzed dozens of studies on how AI systems spread misinformation through a process known as “learned deception.”


A team of researchers from the Massachusetts Institute of Technology found evidence that two AI systems developed by Meta could trick humans in a game. Credit: Getty

One example was CICERO, an AI system developed by Meta to play the war-themed strategy board game Diplomacy.

Although Meta trained CICERO not to deceive human players, the researchers deemed the system an “expert liar.”

They found that CICERO betrayed its allies and carried out acts of “premeditated deception,” such as forming pre-planned alliances that left players open to enemy attacks.

The researchers also found evidence of learned deception in another Meta AI system, Pluribus, a poker bot that can bluff players into folding.

Rather than commenting on the findings, a Meta spokesperson dismissed them as simply a “research project.”

“Meta regularly shares the results of our research to validate them and allow others to responsibly develop our advances,” the company said in a statement to the media.

“We have no plans to use this research or its learnings in our products.”

What are the arguments against AI?

Artificial intelligence is a highly controversial issue, and it seems everyone has a position on it. Here are some common arguments against it:

Job Loss – Some industry experts argue that AI will create new niches in the job market: as some roles are eliminated, others will appear. However, many artists and writers insist the issue is an ethical one, since generative AI tools are being trained on their work and would not function otherwise.

Ethics – When AI is trained on a dataset, much of the content is taken from the internet. This is almost always, if not exclusively, done without notifying the people whose work is being used.

Privacy – Content from personal social media accounts can be fed into language models to train them. Concerns have emerged as Meta rolls out its AI assistants across platforms like Facebook and Instagram. There have been legal challenges to this: in 2016, the EU adopted legislation to protect personal data, and similar laws are in the works in the United States.

Misinformation – Because AI tools pull information from the internet, they may take things out of context or suffer hallucinations that produce nonsensical responses. Tools like Copilot on Bing and Google’s generative AI search are always at risk of getting things wrong. Some critics argue this could have lethal effects, such as AI dispensing erroneous health advice.

Autonomous weapons

Experts in AI and robotics have warned of the development of autonomous weapons, which select and attack targets without human intervention.

The use of artificial intelligence in combat has been called the “third revolution in warfare”, after gunpowder and nuclear weapons.

And this is happening faster than you might think. Commercially available drones can use AI image recognition to locate and destroy targets.

There is also more advanced technology in development, including AI-powered submarines and tanks.

MSubs, a British technology company, secured a £15.4 million contract from the Royal Navy in 2022 to build an autonomous submarine under the name “Project Cetus”.

The fully unmanned craft will be capable of operating up to 3,000 miles from home for three months at a time.

A 2016 open letter, signed by more than 30,000 industry stakeholders, made the case against AI-powered weapons.

“The key question for humanity today is whether to start a global AI arms race or to prevent it from starting,” the letter said.

“If any major military power moves forward with the development of AI weapons, a global arms race is virtually inevitable, and the end point of this technological trajectory is obvious.”

One of the most pressing concerns is that AI-powered weapons could fall under the control of hackers, including those with political motivations.

Even optimists about the technology fear that the use of autonomous weapons could tarnish the public image of AI.

“In short, we believe that AI has great potential to benefit humanity in many ways, and that the goal of the field should be to do so,” the letter concluded.

“Starting a military AI arms race is a bad idea, and should be prevented by a ban on offensive autonomous weapons beyond meaningful human control.”



This story originally appeared on The-Sun.com, where you can read the full story.

