The rise of artificial intelligence has progressed faster than expected, fueling fears that the technology could one day surpass humans.
Because of this, some scientists are studying the development of artificial superintelligence (ASI), a form of AI not subject to the limits of human learning speed.
However, others ask whether AI has hindered the development of other civilizations, undermining their chances of long-term survival.
A study by Michael Garrett of the University of Manchester suggests that AI could be a “great filter” of the universe.
Garrett argues that technology creates a limit so difficult to overcome that it “prevents most life from evolving into space civilizations.”
This may be why the search for extraterrestrial intelligence (SETI) has not yet found any proof of advanced societies in the galaxy.
The great filter theory has been proposed as the answer to the Fermi Paradox – the inconsistency between the lack of concrete evidence of advanced extraterrestrial life and the supposed high probability of its existence.
Simply put, if the universe is vast and old enough to support billions of possibly habitable planets, how come we haven’t found any signs of alien life?
The theory suggests that there are insurmountable barriers in the evolutionary timeline of civilizations that prevent them from becoming spacefaring.
“I believe the emergence of ASI could be a filter,” Garrett wrote.
“The rapid advancement of AI, potentially leading to ASI, may intersect with a critical phase in the development of a civilization.”
As AI is progressing faster than expected, it could undermine our ability to control it, or to sustainably explore and populate the Solar System.
The core problem with AI is that it can consistently improve itself at a rate faster than humans can comprehend.
“The potential for something to go wrong is enormous, leading to the downfall of biological and AI civilizations before they have the opportunity to become multiplanetary,” Garrett wrote.
“For example, if nations increasingly trust and cede power to autonomous AI systems that compete with each other, military capabilities could be used to kill and destroy on an unprecedented scale.”
Garrett hypothesizes that this could lead to the destruction of our civilization, including the AI systems themselves.
In this scenario, Garrett predicts that the longevity of a technological civilization could be less than 100 years.
“This is approximately the time between the ability to receive and transmit signals between stars (1960) and the estimated emergence of ASI (2040) on Earth.”
WAKE-UP CALL
Garrett’s research, published in Acta Astronautica, was not intended to be a cautionary tale, but to serve as a wake-up call for creating regulations regarding the development of AI.
“It’s not just about preventing the malevolent use of AI on Earth; it’s also about ensuring that the evolution of AI aligns with the long-term survival of our species,” he wrote.
“This suggests that we need to invest more resources into becoming a multi-planetary society as quickly as possible.”
This ambition dates back to the early days of NASA’s Apollo program, though recent advances have come from private companies.
Prominent leaders in this field have called for a moratorium on AI development due to the implications of autonomous decision-making.
“But even if all countries agreed to abide by strict rules and regulations, it would be difficult to control rogue organizations.”
The integration of AI into military defense systems is also a cause for concern, as Garrett suggests there is already evidence that humans will cede power to systems increasingly capable of performing useful tasks at a faster rate.
Because of this, governments tend to be reluctant to regulate AI, given the strategic advancements it can represent.
“This means we are already dangerously close to a precipice where autonomous weapons operate beyond ethical boundaries and sidestep international law,” Garrett wrote.
“In such a world, handing over power to AI systems to gain a tactical advantage could inadvertently trigger a chain of highly destructive and rapidly escalating events. In the blink of an eye, the collective intelligence of our planet could be destroyed.”
This story originally appeared on The-sun.com.