
Can machines with artificial intelligence acquire consciousness?



Note: María I. Cobos is a postdoctoral researcher and Ana B. Chica is a professor at the University of Granada, Spain. The text was originally published on The Conversation.

The development of artificial intelligence raises a question of great practical and ethical importance: Can machines acquire consciousness? To answer this question, we first need to understand what we mean by “consciousness”.

The term implies being aware of what is happening around us, within our own body and in our actions, which allows us to behave in a flexible and controlled way.

Conscious behavior can be distinguished by two features, or resources:

  • the first feature (R1) is the global availability of information. Although some parts of the brain are highly specialized (visual, motor and memory areas), consciousness requires that information be available globally. That is, if we see something, we can report its color, its shape, the sound it makes, or how it is perceived;
  • the second feature (R2) is the self-monitoring of this processing, known as “metacognition”. It evaluates whether a response was right or wrong, which allows us to correct that response immediately or in future situations.

Take preparing a potato omelet, for example. In this recipe, we need to select the ingredients, pick them up and use them at the right moment (first feature).

During preparation, we taste the dish at intermediate stages to evaluate the flavor and adjust it to our current preferences. If, for example, we want to reduce the amount of salt, we will add less, while monitoring the process so that the final result is still satisfactory (second feature).

Both features are considered prerequisites for consciousness. If either is missing, the processing will be unconscious. We could, for instance, add salt without realizing it, since it is something we do automatically.

This processing is efficient and does not consume attention or memory resources, but it is limited: we cannot assess whether we have added the right amount, and sometimes we even doubt whether we have added it at all.

One of the great limitations of unconscious processing is that “we are not aware of what we are not aware of”. In other words, we cannot estimate what our unconscious processing is like or evaluate it.

How do we assess consciousness?

The first feature (global availability) has been observed in living beings without language. From the first months of life, human babies are able to understand rules and respond to stimuli that do not follow a previously established sequence.

If trained, animals such as crows and primates can give yes-or-no responses to stimuli that are very difficult to detect.

The second feature (metacognition) refers to our ability to evaluate our own processing.

When we consciously perceive or respond, we can estimate the probability that our perception or response is correct. This can be assessed in animals by measuring how much they persist in their initial choice (we persist more the more certain we are) or by offering the option of not responding (in situations of greater uncertainty, we choose not to respond more often).

Can machines have these characteristics?

Some researchers propose that both features could be implemented in machines, so that they would act as if they were conscious.

Now imagine a robot that needs to prepare a potato omelet. If it could measure the blood pressure of whoever is going to eat it, this information could be made available to the entire system (first feature), allowing it to cook with less salt if blood pressure is high.

At the same time, if blood pressure is excessively high, the system could send an alert to the person’s phone so that they can schedule a doctor’s appointment (again, the first feature).

In addition to making information accessible, it would be interesting for the robot to evaluate its own behavior — for example, whether adding onion to the omelet resulted in a pleasant flavor — and continually update itself (second feature).
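As a purely illustrative sketch (not anything proposed by the authors), the two features attributed to this hypothetical robot can be caricatured in a few lines of Python; every class and method name here is invented for the example:

```python
# Toy "robot cook" illustrating the two features described above.
# Feature 1: sensed information is broadcast to a shared workspace.
# Feature 2: the system evaluates its own output and corrects itself.

class RobotCook:
    def __init__(self):
        self.workspace = {}    # globally available information (feature 1)
        self.salt_level = 1.0  # current seasoning setting
        self.history = []      # record of past outcomes (for self-monitoring)

    def sense(self, key, value):
        # Sensed data is not kept inside one specialized module; it is
        # placed in the workspace, where any other process can read it.
        self.workspace[key] = value

    def decide_salt(self):
        # A decision module reads the shared workspace, e.g. cooking
        # with less salt when blood pressure is high.
        if self.workspace.get("blood_pressure", 120) > 140:
            self.salt_level = 0.5
        return self.salt_level

    def evaluate(self, taste_score):
        # Metacognition (feature 2): the system rates its own result
        # and adjusts future behavior based on that self-evaluation.
        self.history.append((self.salt_level, taste_score))
        if taste_score < 0.6:       # dish judged unsatisfactory
            self.salt_level *= 0.9  # correct for the next attempt
        return self.salt_level

robot = RobotCook()
robot.sense("blood_pressure", 150)  # high reading enters the workspace
robot.decide_salt()                 # salt is reduced accordingly
robot.evaluate(0.4)                 # self-monitoring adjusts it further
```

The sketch shows only the computational skeleton the authors describe: a shared store that any module can read, plus a self-evaluation loop. It says nothing, of course, about subjective experience.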

According to this position, consciousness could be reduced to a set of calculations that could be implemented in machines.

What this approach does not take into account is that, in biological organisms, consciousness arises not only from the brain’s interaction with the environment, but also from the brain’s interaction with the organism itself.

Hunger, for example, generates a series of physiological reactions that the brain interprets as a sensation, emotion or feeling. These interpretations, shaped over millions of years of evolution, are an essential part of the consciousness of living beings and enable survival.

When faced with danger, the heart beats faster, which helps us escape the situation, but also generates fear.

Recent studies have found that the heartbeat is slower when it is consciously perceived than when it is not. This indicates that being conscious involves not only monitoring the environment, but also monitoring the very signals that our body sends, in order to learn and adapt our behavior to the changing demands of the environment.

These interactions between the brain and the organism are essential for generating subjective experiences in the first person (“I saw”). Understanding consciousness in humans involves understanding not only how we respond to the environment, but also how information from the central (brain) and peripheral (body) nervous system is integrated to create the subjective experience of perception.

Consciousness is still very distant

Current scientific evidence shows that, for consciousness to occur, a system is needed that is capable of processing information, selecting part of it to make it globally available (first feature) and evaluating, learning and correcting itself based on experience (second feature).

The computations currently carried out by machines do not meet these requirements. Moreover, machines lack a mind and a living organism capable of constructing sensory representations of both the environment and the internal state of the organism itself (the hardware, in their case).

This lack of internal monitoring between the brain and the organism limits the possibility of machines developing consciousness as we currently understand it. However, science must remain attentive to the rapid progress of technology, monitoring its advances and anticipating ethical dilemmas that may arise.

