MODELS 2020
Fri 16 - Fri 23 October 2020
Wed 21 Oct 2020 09:00 - 10:30 at Room A - Keynote by Yoshua Bengio Chair(s): Houari Sahraoui

Deep learning has been very successful at capturing what mostly corresponds to the unverbalizable knowledge humans possess in specific application domains. The research described here aims to extend deep learning towards representing and reasoning with the high-level semantic variables that form the basis of natural language communication and of algorithmic knowledge expressed in software. The aspects of the world that natural language captures with such high-level semantic variables often play a causal role (referring to agents, objects, and actions or intentions). These high-level variables also seem to satisfy rather peculiar characteristics that low-level data (such as images or sounds) do not share, and it is worth making these characteristics explicit as priors that can guide the design of machine learning systems benefiting from these assumptions. Since these priors concern not just the joint distribution over the semantic variables (e.g., that it has a sparse factor graph, corresponding to a modular decomposition of knowledge) but also how that distribution changes (typically through causal interventions), this analysis may also help build machine learning systems that generalize better out-of-distribution. Introducing such assumptions is necessary to even begin having a theory of out-of-distribution generalization. There are also fascinating connections between these priors and what is hypothesized about conscious processing in the brain, with conscious processing allowing us to reason (i.e., perform chains of inferences about the past and the future, as well as credit assignment) at the level of these high-level variables. This involves attention mechanisms and short-term memory that form a bottleneck through which information is broadcast between different parts of the brain as we focus on different high-level variables and some of their interactions. The presentation summarizes a few recent results using some of these ideas for discovering causal structure and for modularizing recurrent neural networks with attention mechanisms, in order to obtain better out-of-distribution generalization and move deep learning towards capturing some of the functions associated with conscious processing over high-level semantic variables.
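As a rough illustration of the "modular recurrent networks with attention" idea mentioned in the abstract (in the spirit of the Recurrent Independent Mechanisms line of work from Bengio's group), here is a minimal NumPy sketch of an attention bottleneck over competing modules. All names, sizes, and the update rule below are hypothetical choices for the example, not the architecture presented in the talk:

```python
# Illustrative sketch only: several small recurrent modules compete via
# attention for the current input, and only the top-k winners update their
# state -- a toy version of the "attention bottleneck" over modules.
import numpy as np

rng = np.random.default_rng(0)

N_MODULES, TOP_K = 4, 2      # modules competing for the bottleneck
D_IN, D_H = 8, 16            # input and per-module hidden sizes

# Per-module parameters: an attention query from each module's state, a
# shared key/value projection of the input, and a simple recurrent update
# (a stand-in for each module's own dynamics).
W_q = rng.normal(0, 0.1, (N_MODULES, D_H, D_H))
W_k = rng.normal(0, 0.1, (D_IN, D_H))
W_v = rng.normal(0, 0.1, (D_IN, D_H))
W_h = rng.normal(0, 0.1, (N_MODULES, 2 * D_H, D_H))

def step(states, x):
    """One time step: attention scores decide which modules read the input."""
    k, v = x @ W_k, x @ W_v                      # shared key/value of the input
    queries = np.einsum('mh,mhd->md', states, W_q)
    scores = queries @ k / np.sqrt(D_H)          # one relevance score per module
    winners = np.argsort(scores)[-TOP_K:]        # bottleneck: top-k modules only
    new_states = states.copy()
    for m in winners:                            # only winners update their state
        inp = np.concatenate([states[m], v])
        new_states[m] = np.tanh(inp @ W_h[m])
    return new_states

states = np.zeros((N_MODULES, D_H))
for t in range(5):                               # run on a toy input sequence
    states = step(states, rng.normal(size=D_IN))
print(states.shape)                              # (4, 16): 4 modules, 16-dim each
```

The top-k selection is where the bottleneck lives: at each step only a few modules get to read the input and broadcast an updated state, loosely mirroring the hypothesis that conscious processing focuses on a handful of high-level variables and their interactions at a time.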

Yoshua Bengio is recognized as one of the world's leading experts in artificial intelligence and a pioneer of deep learning, notably for his role in the rebirth of neural networks. A professor at the Université de Montréal since 1993, he is also the founder and scientific director of Mila – Quebec Artificial Intelligence Institute, the world's largest university-based research group in deep learning. In addition, he holds a Canada CIFAR AI Chair, co-directs the Learning in Machines and Brains program of the Canadian Institute for Advanced Research (CIFAR) as a Senior Fellow, and acts as scientific director of IVADO. In 2018, Yoshua Bengio ranked as the computer scientist with the most new citations worldwide, thanks to his many high-impact contributions. He subsequently earned the prestigious Killam Prize and, in the same period, received the ACM A.M. Turing Award, "the Nobel Prize of Computing", jointly with Geoffrey Hinton and Yann LeCun for conceptual and engineering breakthroughs that have made deep neural networks a critical component of computing. He is a Fellow of both the Royal Society of London and the Royal Society of Canada. Concerned about the social impact of AI, Yoshua Bengio actively contributed to the Montreal Declaration for the Responsible Development of Artificial Intelligence.

Wed 21 Oct

Displayed time zone: Eastern Time (US & Canada)

09:00 - 10:30: Keynote by Yoshua Bengio (Keynotes) at Room A
Chair(s): Houari Sahraoui (Université de Montréal)

09:00 | 90m | Keynote: Priors for deep learning of semantic representations
Yoshua Bengio (Université de Montréal)