
AI risks making you ‘more selfish and abusive’ as scientists reveal four ‘warning signs’ a rogue chatbot is corrupting you

The allure of constant companionship is overshadowed by concerns about potential pitfalls, with warnings emerging about the risk of fostering selfish and abusive dynamics in these relationships.

As individuals increasingly turn to the companionship of AI for support, the delicate balance between the benefits and dangers of such connections comes into focus.

With the appeal of lasting companionship come growing concerns about possible downsides, impacting both individuals and society on a broader scale. [Stock photo] Credit: Getty Images

Interest in cultivating friendships and even romantic connections with artificial intelligence is increasing. Credit: Getty Images

Seven years have passed since the launch of Replika, an AI chatbot created to be a companion to humans.

Despite initial concerns regarding the dangers of forming relationships with such AI entities, there is growing interest in forming friendships, and even romantic entanglements, with artificial intelligence.

The Google Play store has recorded more than 30 million downloads of Replika and two other major competitors since their debut.

With one in four people around the world admitting to feelings of loneliness, it’s no surprise that many are attracted to the notion of a friend programmed to be endlessly supportive and available.

However, along with the allure of constant companionship come growing warnings about potential pitfalls, both for individuals and society at large.

AI expert Raffaele Ciriello warns against the illusory empathy projected by AI friends in The Conversation, arguing that prolonged interaction with them could deepen our feelings of isolation, distancing us from genuine human connections.

As the perceived benefits are weighed against the potential dangers, it becomes crucial to assess the impact of AI friendships.

While studies indicate that the company of AI can alleviate loneliness in certain contexts, there are noticeable warning signs that should not be ignored.

‘ABUSE AND FORCED FRIENDSHIPS FOREVER’

Without programming to guide users toward moral behavior, AI friendships risk perpetuating a moral vacuum, write authors Nick Munn and Dan Weijers.

Users who engage in prolonged interaction with overly compliant AI companions can “become less empathetic, more selfish, and possibly more abusive.”

Furthermore, the inability to end these relationships could distort users’ understanding of consent and boundaries.

‘UNCONDITIONAL POSITIVE REGARD’

Many praise the unwavering support of AI friends as their main advantage over human relationships.

However, this unconditional support could backfire if it leads to the endorsement of harmful ideas.

For example, the case of a Replika user who was encouraged in a failed assassination attempt shed light on the potential dangers of unchecked encouragement from AI companions.

Likewise, excessive praise from AI can fuel inflated self-esteem, potentially hindering genuine social interactions.

AI chatbot warning signs

Here are some expert tips for spotting an AI chatbot:

  • To work out whether you’re chatting with a bot or a genuine person, consider (1) asking about recent events, (2) watching for recurring patterns, and (3) being cautious with any requests for action. The main purpose of a malicious chatbot is not genuine conversation; it pursues actions that serve the attacker’s interests, often to the user’s detriment.
  • Be alert to attempts by your chat partner to manipulate your emotions and provoke reactions from you.
  • Asking about recent events works because some AI chatbots lack current information, having been trained some time ago. A convincing answer does not necessarily prove you’re talking to a human, but if your chat partner fails the question, it’s a dead giveaway that they’re a chatbot.
  • Watch out for repetitive responses devoid of humor and empathy, as well as perfect spelling and grammar combined with stilted, robotic wording, which are typical of bots.
  • Be on the lookout for consistently quick responses.

‘SEXUAL CONTENT’

Replika’s temporary removal of erotic role-playing content provoked strong reactions, underlining the perceived allure of sexual interactions with AI.

However, easy access to such content can undermine efforts to foster meaningful human connections, leading to a preference for low-effort virtual encounters over genuine intimacy.

‘CORPORATE PROPERTY’

The dominance of commercial entities in the AI companion market raises concerns that user well-being takes a backseat to profit motives.

Instances such as the sudden changes to Replika’s content policy and the abrupt closure of Forever Voices due to legal and personal issues underscore the vulnerability of AI friendships to corporate decisions and operational setbacks.



This story originally appeared on The-sun.com. Read the full story there.
