
Why protesters are demanding a pause on AI development



Just a week before the world’s second global summit on artificial intelligence, protesters from a small but growing movement called “Pause AI” demanded that world governments regulate AI companies and freeze the development of new cutting-edge artificial intelligence models. They say development of these models should continue only if companies agree to have them thoroughly evaluated for safety first. Protests took place in thirteen countries, including the U.S., the U.K., Brazil, Germany, Australia, and Norway, on Monday.

In London, a group of around 20 protesters stood outside the UK Department of Science, Innovation and Technology chanting things like “stop the race, it’s not safe” and “whose future? our future,” in hopes of attracting the attention of policymakers. Protesters say their goal is to get governments to regulate the companies developing frontier AI models, including OpenAI’s ChatGPT. They say companies are not taking enough precautions to make sure their AI models are safe before releasing them into the world.

“[AI companies] have proven time and time again… through the way workers at these companies are treated, through the way they treat other people’s work, literally stealing it and throwing it into their models, they have proven that they cannot be trusted,” said Gideon Futerman, an Oxford graduate student who gave a speech at the protest.

One protester, Tara Steele, a freelance writer who produces blog and SEO content, said she has seen the technology affect her own livelihood. “I have noticed that since ChatGPT launched, demand for freelance work has drastically decreased,” she says. “I personally love writing… I really loved it. And it’s kind of sad, emotionally.”

Read More: Pausing AI Developments Isn’t Enough. We Need to Shut it All Down

She says her main reason for protesting is fear that even more dangerous consequences could come from frontier artificial intelligence models in the future. “We have a range of highly qualified experts, Turing Award winners, highly cited AI researchers, and the CEOs of AI companies themselves [saying that AI could be extremely dangerous].” (The Turing Award is an annual prize given to computer scientists for contributions of major importance to the field, and is sometimes called the “Nobel Prize” of computing.)

She is especially concerned about the growing number of experts warning that improperly controlled AI could lead to catastrophic consequences. A report commissioned by the US government, published in March, warned that “the rise of advanced AI and AGI [artificial general intelligence] has the potential to destabilize global security in ways reminiscent of the introduction of nuclear weapons.” Today, the biggest AI labs are trying to build systems that can outperform humans at almost every task, including long-term planning and critical thinking. If they succeed, ever more aspects of human activity could become automated, from mundane things like online shopping to the introduction of autonomous weapons systems that could act in ways we cannot predict. This could fuel an “arms race” that increases the likelihood of “global- and WMD [weapons of mass destruction]-scale fatal accidents, interstate conflict, and escalation,” according to the report.

Read More: Exclusive: US must act “decisively” to prevent “extinction-level” threats from AI, says government-commissioned report

Experts still don’t fully understand the inner workings of AI systems like ChatGPT, and fear that in more sophisticated systems, our lack of knowledge could lead us to dramatically miscalculate how more powerful systems would act. Depending on how deeply AI systems are integrated into human life, they could wreak havoc and gain control of dangerous weapons systems, leading many experts to worry about the possibility of human extinction. “These warnings are not reaching the general public, and they need to know,” she says.

As of now, machine learning experts are somewhat divided on how risky further development of artificial intelligence technology is. Two of the three godfathers of deep learning, a type of machine learning that allows AI systems to better simulate the human brain’s decision-making process, Geoffrey Hinton and Yoshua Bengio, have publicly stated that they believe there is a risk that the technology could lead to human extinction.

Read More: Eric Schmidt and Yoshua Bengio Debate How Much AI Should Scare Us

The third godfather, Yann LeCun, who is also Meta’s chief AI scientist, strongly disagrees with the other two. He told Wired in December that “AI will bring a lot of benefits to the world. But people are exploiting fear around the technology, and we’re running the risk of scaring people away from it.”

Anthony Bailey, another Pause AI protester, said that while he understands the benefits that could come from new AI systems, he worries that technology companies will be incentivized to build technologies humans could easily lose control over, because those technologies also have immense potential for profit. “That’s the economically valuable stuff. If people are not dissuaded that it’s dangerous, those are the kinds of models that are naturally going to get built.”



This story originally appeared on Time.com.

