With artificial intelligence increasingly present in the technological and media sphere today, what should we expect? What are its promises? Its threats? These questions were the focus of the debate held at Le Cercle on 27 April with speakers Dominique CARDON, sociologist and professor at SciencesPo Paris, and Jean-Gabriel GANASCIA, computer science professor at UPMC and president of the CNRS ethics committee.

The answer to these (relatively) simple questions is complex because, as the two experts reminded us, while Artificial Intelligence (AI) is an idea that has been around for over 60 years, it returns to the forefront on a regular basis. Jean-Gabriel GANASCIA described it as “a weave of time that appears and disappears”, while Dominique CARDON prefers to speak of “augmented intelligence”. The idea underpinning AI is to install in an automaton (which may be, but does not have to be, a robot) a set of rules reproducing human reasoning. Over the years, with growing computing power and increasingly sophisticated algorithms, these “machines” have become more and more technically efficient, even outperforming humans at certain specific tasks (the game of Go, for example).
But should we fear the next step? Could these machines become more intelligent than humans, set their own goals and philosophy, to the point of deploying themselves autonomously and taking over the world? This type of reasoning originates in science-fiction novels, has been taken up by renowned scientists and is being exploited by certain major technology corporations. According to Jean-Gabriel GANASCIA, who calls it the “myth of Singularity”, this is “pure fiction” channelled by “pyromaniac firefighters” who develop AI technologies while simultaneously warning us against them. “But there are no sensible reasons to believe this could happen.” Dominique CARDON is just as critical and condemns this double talk between selling a dream (or a nightmare) and what is actually happening in the labs of these large groups.

Now, whether this exposure given to AI could hide less palatable aims is open to debate. Backed by their financial and technological power, by Big Data and by AI, companies are taking over from governments in certain areas (statistics, in particular), and we are seeing a true transfer of sovereignty to private players who have “become the decryptors of a certain type of intelligence”. That is likely the greatest danger. Both speakers call for caution and a form of control, through algorithm audits and software certification, for example.