Interview conducted in 2021.
Should we be afraid of AI?
LJ: Absolutely not! It is just a tool that helps us perform certain tasks. It's like a hammer, designed to drive nails, until one day someone used it to hit the person next to them on the head. There is no need to be afraid of the hammer – we simply need regulation to specify its uses. Moreover, the term "artificial intelligence" is misleading: it has been used since 1956 to sum up the ambition of the researchers of the time, but it in no way reflects reality.
Artificial intelligence is nevertheless a very powerful tool, and it raises questions, starting with respect for privacy… What is your opinion?
LJ: It is essential that we citizens are educated and informed about what AI does. Knowing, for example, that we can be monitored through facial recognition, as in China… It is then up to us to decide, knowingly, what private information we choose to share, or not, and with whom. I, for example, agree to have my fingerprints registered to access my gym, because it makes my life easier. Scientists, citizens and governments must come together to define rules designed to keep things from going too far, just as was done for the control of nuclear weapons. At the risk, of course, of over-regulating and slowing down innovation. The right balance has to be found.
AI is also a formidable weapon of misinformation …
LJ: You mean deepfakes on social media? Fake videos are fun, but that is low-level AI. You have to learn to doubt what you see, to develop your critical thinking. And it is nothing new: the falsification of video has existed since the invention of cinema. It is worth remembering that it is not the algorithm that makes social media what they are, but the people who feed them.
Should we be afraid of killer robots?
LJ: In fact, I can create a robot that shoots people. And it’s not even very complicated. But again, it is up to us, at the societal level, to decide whether or not to accept it. The community must regulate the machines.
>> Read also: Robot soldiers, the temptation of a license to kill
Beyond the rules, these machines are not infallible: autonomous cars have accidents, for example…
LJ: The truly autonomous car will probably never exist, because it is impossible to adapt to every situation. When we humans have an accident, our reaction is simple: we try to save ourselves. And very often we have no time to react, and chance decides. That is unthinkable for the car! In the event of an accident, we will demand explanations. Yet AI is not transparent, and its decisions are difficult to understand. But it is not impossible, contrary to what we sometimes hear: we know very well how these systems work and what variables influence their decisions. It is just that there are so many interactions that untangling them takes a great deal of effort. The search for solutions is ongoing…
>> Read also: Autonomous car: a new simulation tool to differentiate driver behavior
Should we also fear the impact of AI on employment?
LJ: Obviously there are trades that will disappear… and new ones will appear. But the jobs that disappear will often be arduous or tedious ones. Most of the time, it will be better for us: AI will help us, relieve us and leave us more time to devote to others, to personal activities, to hobbies…
Could artificial intelligence one day surpass us and escape our control?
LJ: You have to understand that, contrary to what some people say – usually people who do not know the subject well – AI does nothing on its own: its actions can never escape our control. Let's be realistic: what we know how to do today is completely stupid. Certainly, the machine can beat us in areas for which it has been trained: chess, the game of go… It can even find winning moves that had never been considered. But this is not intelligence: it invented nothing; it can simply calculate thousands of times per second. In the end, it does little better than the Pascaline, the calculating machine that Pascal built in the 17th century. Creating intelligence comparable to ours would require developing a myriad of such artificial intelligences. And even supposing we knew how to do that, the main problem would not be it taking control of us.
What would be the main danger with AI?
LJ: The real danger is this technology's power consumption. We are committed to a broad trend toward dematerialization, accelerated by artificial intelligence. But we do not realize that, at the current pace, we are headed straight into a wall. To play go, DeepMind, Google's AI, consumes about 440,000 watt-hours, the equivalent of a small data center. The human brain runs on a mere 20 watts… and does many other things besides playing! The digital economy, with the Internet, networks, data storage and blockchain technologies, already accounts for almost 20% of global electricity consumption. And that is while only about 50% of humanity has Internet access today.
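As an order-of-magnitude illustration of the gap quoted above (the figures are approximate, and the game length is an assumption introduced here for the comparison):

```python
# Rough comparison of the energy figures cited in the interview.
ai_wh = 440_000   # ~440 kWh reportedly consumed by the AI system for go
brain_w = 20      # ~20 W: approximate power draw of the human brain

# Assume a game lasts about 4 hours (illustrative assumption, not from the interview).
game_hours = 4
brain_wh = brain_w * game_hours   # energy the brain expends over the same span: 80 Wh

ratio = ai_wh / brain_wh
print(f"AI energy / brain energy over one game: ~{ratio:,.0f}x")  # ~5,500x
```

Even under generous assumptions about game length, the machine uses several thousand times more energy than the player sitting across from it.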
For future development to be sustainable, we need to move quickly from big data to small data. Instead of centralizing everything in huge data centers, which consume half their energy on cooling alone, we must try to decentralize them, make them consume less energy, and produce that energy locally. And we must change the way we design algorithms, so that they always manage to do more while consuming less. In the meantime, choices may have to be made. Running AI to detect cancers and save lives: yes. An explosion of energy consumption to play go or StarCraft: no. AI must be at the service of humans and relieve them in their daily lives, not lock them into a virtual universe or mortgage their future.
>> Read also: Streaming, bitcoin, AI … energy delirium!