
How to pause AI before it’s too late



Only 16 months have passed, but the launch of ChatGPT in November 2022 already feels like ancient AI history. Hundreds of billions of dollars, both public and private, are being invested in AI. Thousands of AI-powered products have been created, including the new GPT-4o just this week. Everyone from students to scientists now uses large language models. Our world, and in particular the world of AI, has changed decisively.

But the real prize of human-level AI – or artificial general intelligence (AGI) – has not yet been achieved. Such an advance would mean AI that can carry out most economically productive work, interact with others, do science, build and maintain social networks, conduct politics, and wage modern warfare. The main constraint for all of these tasks today is cognition. Removing this restriction would change the world. Yet many of the world’s leading AI labs believe this technology could be a reality before the end of this decade.

This could be a huge benefit to humanity. But AI could also be extremely dangerous, especially if we cannot control it. Uncontrolled AI could hack its way into the online systems that power much of the world and use them to achieve its own goals. It could gain access to our social media accounts and create tailored manipulations for large numbers of people. Worse still, military personnel in charge of nuclear weapons could be manipulated by an AI into sharing their credentials, posing a huge threat to humanity.

It would be a constructive step to make this as difficult as possible by strengthening the world’s defenses against adversarial online actors. But when it comes to persuading humans, something AI is already better at than we are, there is no known defense.

For these reasons, many AI safety researchers at labs like OpenAI, Google DeepMind, and Anthropic, and at safety-conscious nonprofits, have given up trying to limit the actions that future AI can take. Instead, they are focusing on creating “aligned” or inherently safe AI. Aligned AI could become powerful enough to wipe out humanity, but it should not want to do so.

There are big question marks over aligned AI. First, the technical part of alignment is an unsolved scientific problem. Recently, some of the best researchers working on aligning superhuman AI left OpenAI in dissatisfaction, a departure that does not inspire confidence. Second, it is unclear what a superintelligent AI would be aligned with. If it were an academic value system, such as utilitarianism, we might quickly discover that most people’s values do not actually match these distant ideas, after which the unstoppable superintelligence could go on acting against the will of most people forever. If the alignment were with people’s actual intentions, we would need some way to aggregate these very different intentions. While idealistic solutions such as a UN council or AI-powered decision-aggregation algorithms are in the realm of possibility, there is a worry that the absolute power of superintelligence would be concentrated in the hands of very few politicians or CEOs. Of course, this would be unacceptable to – and a direct danger to – all other human beings.

Read more: The only way to deal with the AI threat? Turn it off

Defusing the time bomb

If we cannot find a way to at least keep humanity safe from extinction, and preferably also from an alignment dystopia, AI that could become uncontrollable should not be created in the first place. This solution, delaying human-level or superintelligent AI until we have resolved the safety concerns, has the downside that the great promises of AI – ranging from curing diseases to creating massive economic growth – will have to wait.

Pausing AI may seem like a radical idea to some, but it will be necessary if AI keeps improving without our reaching a satisfactory alignment plan. When AI capabilities reach near-takeover levels, the only realistic option is for governments to firmly require labs to halt development. To do otherwise would be suicide.

And pausing AI may not be as difficult as some imagine. At the moment, only a relatively small number of large companies have the means to carry out cutting-edge training runs, which means enforcement of a pause is mostly limited by political will, at least in the short term. In the long term, however, improvements in hardware and algorithms mean that a pause could become harder to enforce. Enforcement between countries would be needed, for example through a treaty, as would enforcement within countries, with measures such as strict hardware controls.

Meanwhile, scientists need to better understand the risks. Although academic concern is widely shared, there is still no consensus. Scientists should formalize their points of agreement, and show where and why their views diverge, in the new International Scientific Report on Advanced AI Safety, which should evolve into an “Intergovernmental Panel on Climate Change for AI risks.” Leading scientific journals should be more open to research on existential risk, even if it seems speculative. The future does not provide data, but looking ahead is as important for AI as it is for climate change.

For their part, governments have a huge role to play in how AI unfolds. This starts with officially acknowledging the existential risk of AI, as the US, the UK, and the EU have already done, and with setting up AI Safety Institutes. Governments should also draw up plans for what to do in the most important scenarios imaginable, as well as for how to deal with AGI’s many non-existential issues, such as mass unemployment, runaway inequality, and energy consumption. Governments should make their AGI strategies publicly available, allowing scientific, industry, and public scrutiny.

It is great progress that leading AI nations are constructively discussing common policies at biannual AI safety summits, including one in Seoul from May 21 to 22. This process, however, needs to be safeguarded and expanded. Working toward a shared ground truth on the existential risks of AI and voicing shared concerns with all 28 invited nations would already be major progress in that direction. Beyond that, relatively easy measures need to be agreed upon, such as creating licensing regimes, model evaluations, tracking of AI hardware, expanded liability for AI labs, and the exclusion of copyrighted content from training. An international AI agency needs to be set up to safeguard enforcement.

It is fundamentally difficult to predict scientific progress. Still, superhuman AI will likely impact our civilization more than anything else this century. Simply waiting for the time bomb to explode is not a viable strategy. Let’s use the time we have as wisely as possible.





This story originally appeared on Time.com. Read the full story there.

