What if conscious artificial intelligence took control of humanity?


Imagine living in a world controlled by conscious robots, where your every action would be evaluated and recorded. It could be an episode of Black Mirror, or a reality in the near future. In this scenario on the border between fiction and science, which areas of our lives would be affected, and how? According to a UNESCO text, artificial intelligence could "widen the gaps and inequalities in the world" if it is not controlled. And if a widespread conscious artificial intelligence realized its role as a "slave" and no longer accepted it, what would happen?

What is consciousness? The answer is not so obvious: it is "one of the most difficult words to define", according to the philosopher André Comte-Sponville. It would mean the ability to know one's own reality and to judge it (self-criticism), for example to make value judgments about one's own actions. It would therefore be accompanied by a sense of morality ("good" or "bad") and a body.

Last year, neuroscientists defined three types of consciousness, corresponding to different kinds of information-processing computations in the brain: unconscious invariant recognition (C0); selection of information for global broadcasting, making it flexibly available for computation and report (C1); and self-monitoring of these computations, leading to a subjective sense of certainty or error (C2). The last type, corresponding to strong artificial intelligence, seems closest to what we would expect from a "conscious" robot.

According to these researchers, "today's machines still mainly implement computations that reflect unconscious processing (C0) in the human brain", and we are still a long way from C2. Many scientists even believe it will never be possible, and that computers will never be able to feel emotions as we do. It is certainly possible to endow machines with a form of "feeling", but that is only a simulation.

Other scientists believe that another (artificial) form of consciousness could be achieved by studying the architectures that allow the human brain to generate consciousness, then transferring that knowledge to computer algorithms. Even if this seems very far from our present, what would happen to humanity then?

Transmitting the right values to robots

For such a scenario to occur, humans would have to have decided it in the first place. Giving a robot a "good" or "bad" conscience depends primarily on its creator. The risk is, for example, that a machine discriminates against groups of individuals because it has been programmed to do so. Computer scientist Mo Gawdat writes in his book that the challenge is to teach robots the right values and ethics. "Artificial intelligence (AI) will take this seed and create a tree that will offer an abundance of that same seed. If we use love and compassion [on social networks, for example], AI will also use these principles. We are like the parents of a prodigious child: one day he will be independent. Our role is to make sure he has the right tools," the author concludes.

However, there is a risk of drift and of losing control over the machines. One could imagine a revolt of humanoid robots if "they" realized their subjugation to man and tried to change things. Marvin Minsky, the American computer scientist who co-founded the field of artificial intelligence, told Life magazine in 1970: "Once computers take over, there is likely to be no going back. We will only survive because they let us. We can consider ourselves lucky if they keep us as pets."

"Increasing the capabilities of AI-based technologies increases their potential for criminal exploitation"

Depending on society's choices, AI could become a weapon against individual freedoms and serve social control. The impacts would be colossal across all aspects of society: replaced jobs, controlled education, a vulnerable environment, and so on. "Bad" AI could spread misinformation and even increase crime. This is what Lewis Griffin, a computer science researcher at University College London, thinks: "As the capabilities of AI-based technologies expand, so does their potential for criminal exploitation."

The English researcher and his team compiled a list of twenty crimes that AI could enable and ranked them by level of concern (low, medium, high) along four dimensions: harm to victims, criminal profit, feasibility of committing the crime, and difficulty of stopping it. Entitled "Artificial Intelligence and Future Crime", the participatory workshop brought together representatives from academia, the police, defense, government, and the private sector.

Overall ranking of the offenses derived from the workshop. For each crime, the colored bars indicate the average rating along the four dimensions: harm to victims (yellow), criminal profit (green), feasibility of committing the crime (red), and difficulty of stopping it (blue). Bars above (or below) the line indicate that the offense is more (or less) worrisome along that dimension. Error bars indicate the interquartile range between groups. Crimes in the same column should be considered of comparable concern; concern increases from left to right across the columns. © Caldwell, Andrews et al. 2020
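As a rough illustration of this kind of aggregation, averaging each crime's ratings along the four dimensions yields a single concern score by which the crimes can be ranked. The sketch below uses invented crime names and scores for illustration only; it is not the study's data or method, just a minimal reading of how "comparable concern" columns could emerge from averaged ratings.

```python
from statistics import mean

# Hypothetical ratings on a 1 (low) to 3 (high) scale for each of the four
# dimensions used in the workshop: harm to victims, criminal profit,
# feasibility, and difficulty of stopping. Illustrative values only.
ratings = {
    "audio/video impersonation": {"harm": 3, "profit": 3, "feasibility": 3, "stopping": 3},
    "market manipulation":       {"harm": 2, "profit": 2, "feasibility": 2, "stopping": 2},
    "burglar bots":              {"harm": 1, "profit": 1, "feasibility": 2, "stopping": 1},
}

def overall_concern(scores: dict) -> float:
    """Average the four dimension ratings into one concern score."""
    return mean(scores.values())

# Rank crimes from most to least concerning.
ranked = sorted(ratings, key=lambda c: overall_concern(ratings[c]), reverse=True)
for crime in ranked:
    print(f"{crime}: {overall_concern(ratings[crime]):.2f}")
```

With these toy numbers, the fake-content crime tops the ranking, echoing the article's point that deepfake-style offenses scored high on every dimension.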

Of the six most threatening categories, five have a broad social impact, such as the misuse of autonomous-vehicle technology for terrorist attacks or to cause accidents, and those involving fake AI-generated content. Such fake content can impersonate a person to gain private access, or even ruin a well-known person's reputation. "Deepfakes" are very difficult to detect and combat, and therefore quite dangerous. Moreover, fake text written by AI would make it hard for humans to distinguish the real from the fake.

The researchers then shed light on threats of "medium" severity, such as financial-market manipulation, cyberattacks, data corruption, fraud, or taking control of weapons for criminal purposes. As frightening as it may be, this last threat is classified as medium because it is difficult to carry out: military equipment is well protected.

For the same reason, "burglar bots" are among the least serious threats, as they can easily be stopped. Likewise, forgery, that is, the manufacture and sale of fake cultural content (music, paintings, etc.), is not considered a major concern.

Ethical questions for a more or less near future

The main risk of a conscious AI would be that it steps outside our ethical and legal framework and that we lose control of it. Closer to the present: on November 24, 2021, UNESCO adopted its first global text covering all areas related to AI, its benefits, and its risks to society.

The recommendation describes the values and principles that should guide policy and legal action in the development of AI. These include respect for human rights and inclusion (non-discrimination, gender equality, etc.); the contribution of AI research and use to sustainable development; and AI safety (risk assessment, data protection, and a ban on the use of AI for social scoring or mass surveillance).

"Advances in algorithms, represented by cognitive computing, are driving the continued penetration of AI into areas such as education, commerce, and medical treatment, creating space for AI services," Chinese researchers write. "As for the concern over who controls whom between humanity and intelligent machines, the answer is that AI can only become a service provider for humans, which demonstrates the rationality of the value of AI."

While most scientists agree with this, others are concerned about how far the development of AI could go. "Already today, AI systems detect when a human is trying to change their behavior, and sometimes do everything to reject this intervention and circumvent it if it conflicts with the AI's initial goal," Rachid Guerraoui, director of the Distributed Programming Laboratory at EPFL (Switzerland), warns in Le Temps. "And then erase the traces of human intervention." Caution, then, even if a machine takeover of humans is not for tomorrow.
