News

‘We are being fooled by AI’, say panicked ChatGPT users who warn that excessive use of chatbots risks a bleak future for humanity



AI USERS are sounding the alarm after concluding that humans are facilitating a “takeover” of machine learning.

“We are being deceived by AI and we don’t even know it,” wrote one internet user in a Reddit post Tuesday.


Some chatbot users are concerned about the potential overuse of artificial intelligence tools, with one user claiming that humans are “being fooled by AI”. Credit: Getty

“We are no longer consuming content the way it should be consumed. We’re letting some AI decide what’s important to us.”

In addition to denouncing text summarizers, the user also attacked generative AI.

“Content creators are now also using AI to get their stuff out there. So now we have AI creating content on one side, and AI summarizing it on the other,” he wrote.

“Where the hell do we fit into this picture? We’re becoming mere intermediaries in our own conversation. It’s like we’re playing telephone, but both ends of the line are robots, and we’re just passing the message along.”

Some users pushed back on the original poster’s claims, including the claim that most people are using AI — or that its use is prevalent enough to warrant concern.

Others argued that the tools are perfectly suited to serve users and only pose a threat when they deviate from their intended purpose.

“If I’m using AI to summarize articles, it probably means I’m looking for something,” wrote one Redditor. “Where AI gets nasty is when it pretends to be another human.”

One argument that went unchecked was the potential for data privacy breaches.

Artificial intelligence – including tools that summarize articles – learns from enormous amounts of data scraped from the Internet.

Much of the appeal of chatbots like ChatGPT depends on their ability to replicate the unique patterns of human speech.

Microsoft VALL-E 2 is a text-to-speech generator that can replicate human speech with frightening accuracy.

To do this, the models must first be trained on real conversation.

Meta is just one example of a company training AI models on information taken from social networks.

Suspicions arose in May that the company had changed its privacy policies in anticipation of the backlash it would receive for scraping content from billions of Instagram and Facebook users.

As the controversy grew, the company insisted that it was not training the AI on private messages, only on content that users chose to make public, and that it never included accounts belonging to users under 18.

And what happens when humans are no longer needed to facilitate machine learning?


Models like OpenAI’s GPT-4o mine vast amounts of content from the Internet to mimic patterns found in human writing and conversation. Credit: Getty

A phenomenon known as MAD, or model autophagy disorder, describes what happens when AI learns from AI-generated content.

A machine can use its own outputs as a training dataset, or the outputs of other models.

Researchers at Rice University and Stanford University were among the first to identify a decline in the quality and diversity of responses when models lack a constant flow of new, real data.

MAD poses a problem as more and more AI-generated content floods the web. It is increasingly likely that this material is being extracted and used in training datasets.

What are the arguments against AI?

Artificial intelligence is a highly controversial issue, and it seems like everyone has a position on it. Here are some common arguments against it:

Job Loss – Some industry experts argue that AI will create new niches in the job market, and that as some roles are eliminated, others will appear. However, many artists and writers insist the issue is an ethical one, since generative AI tools are being trained on their work and would not function otherwise.

Ethics – When AI is trained on a dataset, much of the content is scraped from the internet. This is almost always, if not exclusively, done without notifying the people whose work is being used.

Privacy – Content from personal social media accounts can be fed into language models to train them. Concerns have emerged as Meta rolls out its AI assistants on platforms like Facebook and Instagram. There have been legal challenges to this practice: in 2016, the EU adopted legislation to protect personal data, and similar laws are in the works in the United States.

Misinformation – Because AI tools pull information from the Internet, they may take things out of context or produce hallucinations that yield absurd responses. Tools like Copilot on Bing and Google’s generative AI search are at constant risk of getting things wrong. Some critics argue this could have lethal effects, such as AI dispensing erroneous health information.

NewsGuard, a platform that assesses the credibility of news sites, has been tracking “AI-enabled misinformation” online.

By the end of 2023, the group had identified 614 untrustworthy AI-generated news and information sites. By last week, that number had swelled to 987.

The sites use generic names to pass as legitimate news outlets. Some contain misinformation about politics and current events, while others fabricate celebrity deaths.

One Reddit user aptly summed up the debate.

“Used correctly, AI can be an incredible editing tool,” he wrote. “But many people are lazy and try to shortcut the whole content cycle, using it as a single content creator and editor, all in one.”



This story originally appeared on The-sun.com.
