
Ray Kurzweil: The Promise and Peril of AI



In early 2023, after an international conference that included dialogue with China, the United States released a “Political Declaration on the Responsible Military Use of Artificial Intelligence and Autonomy,” urging states to adopt sensible policies that include ensuring ultimate human control over nuclear weapons. However, the very notion of “human control” is more nebulous than it might seem. If humans authorized a future AI system to “stop a nuclear attack,” how much discretion should it have over how to do so? The challenge is that an AI capable enough to successfully thwart such an attack could also be used for offensive purposes.

We need to recognize that AI technologies are inherently dual-use. This is true even of systems already deployed. For example, the same drone that delivers medicine to a hospital unreachable by road during the rainy season could later carry an explosive to that same hospital. Keep in mind that for over a decade, military operations have used drones precise enough to send a missile through a specific window that is literally on the other side of the earth from their operators.

We also have to ask whether we really want our side to observe a ban on lethal autonomous weapons (LAWs) if hostile militaries will not. What if an enemy nation sent a contingent of advanced AI-controlled war machines to threaten your security? Wouldn’t you want your side to have an even smarter capability to defeat them and keep you safe? This is the main reason the Campaign to Stop Killer Robots has failed to gain much traction. As of 2024, all major military powers have declined to support the campaign, with the notable exception of China, which did so in 2018 but later clarified that it supported a ban only on use, not development. Even this was probably for strategic and political rather than moral reasons, since the autonomous weapons used by the United States and its allies could harm Beijing militarily.

Furthermore, what will “human” mean in the context of control when, from the 2030s onward, we augment our own decision-making with non-biological brain-computer interfaces? That non-biological component will grow exponentially, while our biological intelligence stays the same, and by the end of the 2030s our thinking will be largely non-biological. Where will human decision-making lie when our own thoughts largely run on non-biological systems?


Instead of pinning our hopes on the shaky distinction between humans and AI, we should focus on how to make AI systems safe and aligned with the well-being of humanity. In 2017, I attended the Asilomar Conference on Beneficial AI – a conference inspired by the successful biotechnology safety guidelines established at the 1975 Asilomar Conference on Recombinant DNA – to discuss how the world could safely use artificial intelligence. The result of those deliberations was the Asilomar AI Principles, some of which have already had great influence on AI labs and governments. For example, principle 7 (Failure transparency: “If an AI system causes harm, it should be possible to ascertain why”) and principle 8 (Judicial transparency: “Any involvement by an autonomous system in judicial decision-making should provide a satisfactory explanation auditable by a competent human authority”) are closely reflected both in the voluntary commitments secured from leading AI companies in July 2023 and in President Biden’s executive order several months later.

Efforts to make AI decisions more understandable are important, but the basic problem is that regardless of whatever explanation it provides, we simply won’t be able to fully understand most decisions made by a future superintelligent AI. If a Go-playing program far beyond the best human were capable of explaining its strategic decisions, for example, even the best player in the world (without the help of cybernetic enhancement) would not fully understand them. One promising line of research aimed at reducing the risks of opaque AI systems is “eliciting latent knowledge.” This project is trying to develop techniques to ensure that if we ask an AI a question, it gives us all the relevant information it knows, rather than just telling us what it thinks we want to hear – a risk that will only grow as machine-learning systems become more powerful.

The Asilomar principles also laudably promote non-competitive dynamics around AI development, notably principle 18 (AI arms race: “An arms race in lethal autonomous weapons should be avoided”) and principle 23 (Common good: “Superintelligence should only be developed in the service of widely shared ethical ideals, and for the benefit of all humanity rather than one state or organization”). However, because superintelligent AI could confer a decisive advantage in warfare and bring enormous economic benefits, military powers will have strong incentives to engage in an arms race over it. This not only exacerbates the risks of misuse but also increases the chances that safety precautions around AI alignment will be neglected.


It is very difficult to usefully constrain the development of any fundamental AI capability, especially since the basic idea behind general intelligence is so broad. However, there are encouraging signs that leading governments are now taking the challenge seriously. Following the 2023 AI Safety Summit in the UK, 28 countries signed the Bletchley Declaration committing to prioritize the safe development of AI. And in 2024, the European Union passed its landmark AI Act regulating high-risk systems, and the United Nations adopted a historic resolution “to promote safe, secure and trustworthy artificial intelligence.” Much will depend on how such initiatives are actually implemented. Any initial regulation will inevitably make mistakes; the key question is how quickly policymakers can learn and adapt.

One promising argument, grounded in free-market principles, is that every step toward superintelligence is subject to market acceptance. In other words, artificial general intelligence is being created by humans to solve real human problems, and there are strong incentives to optimize it for beneficial purposes. Because AI is emerging from a deeply integrated economic infrastructure, it will reflect our values, because in an important sense it will be us. We are already a human-machine civilization. Ultimately, the most important thing we can do to keep AI safe is to protect and strengthen our institutions of governance and our social fabric. The best way to avoid destructive conflict in the future is to continue advancing the ethical ideals that have already profoundly reduced violence in recent centuries and decades.

AI is the foundational technology that will enable us to address the pressing challenges we face, including overcoming disease, poverty, environmental degradation, and all of our human frailties. We have a moral imperative to realize the promise of these new technologies while mitigating their dangers. And it won’t be the first time we’ve managed to do so.

When I was a child, most people around me assumed that nuclear war was almost inevitable. The fact that our species has found the wisdom to refrain from using these terrible weapons stands as an example that the responsible use of emerging biotechnology, nanotechnology, and superintelligent AI is within our reach. We are not doomed to fail at controlling these dangers.

Overall, we should be cautiously optimistic. While AI is creating new technical threats, it will also radically improve our ability to deal with them. As for misuse, since these technologies amplify our intelligence regardless of our values, they can serve peril as well as promise. We must therefore work toward a world in which the powers of AI are widely distributed, so that their effects reflect the values of humanity as a whole.

Adapted from The Singularity Is Nearer: When We Merge with AI by Ray Kurzweil, published by Viking. Copyright © 2024 by Ray Kurzweil. Reprinted courtesy of Penguin Random House.


