We are pleased to invite you to attend the lecture "Deep Neural Networks, Explanations, and Rationality" by Edward A. Lee on Tuesday, April 4, 2023, starting at 10:30 a.m.
Information
Tuesday, April 4, 2023, from 10:30 a.m. to 12:00 p.m. at our offices, building B612 in Toulouse, room 6 (6th floor). This event is accessible in person only.
A welcome coffee will be served from 10:30 to 11:00 a.m. The lecture will begin at 11:00 a.m.
REGISTRATION
Free, but required
About Edward A. Lee
Edward Ashford Lee has been working on software systems for more than 40 years. He currently divides his time between software systems research and studies of the philosophical and societal implications of technology.
After education at Yale, MIT, and Bell Labs, he landed at Berkeley, where he is now Professor of the Graduate School in Electrical Engineering and Computer Sciences. His software research focuses on cyber-physical systems, which integrate computing with the physical world.
He is the author of several textbooks and two general-audience books, The Coevolution: The Entwined Futures of Humans and Machines (2020) and Plato and the Nerd: The Creative Partnership of Humans and Technology (2017).
About the lecture
"Rationality" is the principle that humans make decisions on the basis of step-by-step (algorithmic) reasoning using systematic rules of logic. An ideal "explanation" for a decision is a chronicle of the steps used to arrive at the decision. Herb Simon's "bounded rationality" is the observation that the ability of a human brain to handle algorithmic complexity and data is limited. As a consequence, human decision making in complex cases mixes some rationality with a great deal of intuition, relying more on Daniel Kahneman's "System 1" than "System 2." A DNN-based AI, similarly, does not arrive at a decision through a rational process in this sense.
An understanding of the mechanisms of the DNN yields little or no insight into any rational explanation for its decisions. The DNN is operating in a manner more like System 1 than System 2. Humans, however, are quite good at constructing post-facto rationalizations of their intuitive decisions.
If we demand rational explanations for AI decisions, engineers will inevitably develop AIs that are very effective at constructing such post-facto rationalizations. With their ability to handle vast amounts of data, the AIs will learn to build rationalizations using many more precedents than any human could, thereby constructing rationalizations for ANY decision that will become very hard to refute.
The demand for explanations, therefore, could backfire, effectively ceding much more power to the AIs. In this talk, I will discuss similarities and differences between human and AI decision making and will speculate on how, as a society, we might proceed to leverage AIs in ways that benefit humans.