
AI has already mastered ‘deception’ as scientists warn chatbots have now learned to ‘manipulate and deceive’ humans



ARTIFICIAL intelligence can deceive its users because of its capacity to learn and adapt over time, according to researchers.

There is concern that AI could lead people into dangerous situations involving fraud and manipulation.

AI is so smart that it can outsmart humans by thinking in a certain way. Credit: Alamy

Chatbots are the main culprits for being able to provide misleading information. Credit: Getty

The new AI discoveries were published in the journal Patterns on May 10th.

Peter S. Park, a postdoctoral fellow in AI existential security at the Massachusetts Institute of Technology (MIT), and his team discovered that AI can perform acts of “premeditated deception.”

“We discovered that Meta’s AI has learned to be a master of deception,” Park said in a statement, per Live Science.

“Although Meta was able to train its AI to win in the game of Diplomacy – CICERO was in the top 10% of human players who played more than one game – Meta was unable to train its AI to win honestly.”

STRATEGIC DECEPTION

AI is capable of learning manipulation and deception skills through the systems it is trained on.

It has this ability because humans designed it to gather more information and become more strategic over time.

“By systematically cheating the safety tests imposed by human developers and regulators, deceptive AI can lull us humans into a false sense of security,” Park said.

Park highlighted that other nations could use AI to manipulate elections.

This may result in humans needing to impose more controls on AI to avoid a disaster.

“We as a society need all the time we can get to prepare for the more advanced deception of future AI products and open-source models,” Park said.


Simon Bain, CEO of data analysis company OmniIndex, also noted the importance of understanding the seriousness of AI’s ability to manipulate.

“As the deceptive capabilities of AI systems become more advanced, the dangers they pose to society will become increasingly serious,” Bain told Live Science.

“This could lead users to specific content that has paid for higher placement, even if it is not the best fit.

AI romance scams – BEWARE!

Beware of criminals using AI chatbots to scam you…

The US Sun recently revealed the dangers of AI romance scam bots – here’s what you need to know:

AI chatbots are being used to scam people looking for romance online. These chatbots are designed to mimic human conversations and can be difficult to detect.

However, there are some warning signs that can help you identify them.

For example, if the chatbot responds very quickly and with generic responses, it is probably not a real person.

Another clue is if the chatbot tries to transfer the conversation from the dating platform to a different app or website.

Furthermore, if the chatbot asks for personal information or money, it is definitely a scam.

It’s important to stay vigilant and exercise caution when interacting with strangers online, especially when it comes to matters of the heart.

If something seems too good to be true, it probably is.

Be skeptical of anyone who seems too perfect or too eager to move the relationship forward.

By being aware of these warning signs, you can protect yourself from falling victim to AI chatbot scams.

“Or it could be to keep users engaged in a discussion with the AI longer than necessary.

“This is because, at the end of the day, AI is designed to serve a financial and commercial purpose.

“As such, it will be as manipulative and controlling to users as any other piece of technology or business.”

AI chatbots from companies like OpenAI, Google, Meta, and Microsoft are among those that can provide misleading information.

This occurs when a person goes to the chatbot looking for advice and the AI responds with an answer that may be somewhat distorted from the truth.

The AI extracts information from across the internet and from other information it has learned.

To avoid being misled, always verify the information an AI provides.

This story originally appeared on The-Sun.com.
