Abstract: Deep learning has driven rapid progress in machine learning and artificial intelligence, dramatically improving solutions to many challenging problems such as image understanding, speech recognition, and automatic game playing. Despite these remarkable successes, researchers have observed some intriguing and troubling aspects of the behaviour of these models. A case in point is the presence of adversarial examples, which make learning-based systems fail in unexpected ways. Such behaviour, together with the difficulty of interpreting what neural networks have learned, is a serious hindrance to the deployment of these models in safety-critical applications. In this talk, I will review the challenges in developing models that are robust and explainable, and discuss the opportunities for collaboration between the formal methods and machine learning communities.
Speaker: Pushmeet Kohli, Head of Research, Science Team; Research Lead, Secure Machine Learning, Google DeepMind
Bio: Pushmeet Kohli is the lead scientist for the AI for Science and Robust Machine Learning groups at DeepMind. His research revolves around intelligent systems and computational sciences. His current research interests include safe and verified AI, probabilistic programming, and interpretable and verifiable knowledge representations of deep models. In terms of application domains, he is interested in goal-directed conversation agents, machine learning systems for healthcare, and 3D reconstruction and tracking for augmented and virtual reality. Pushmeet's papers have appeared in conferences in the fields of machine learning, computer vision, game theory, and human-computer interaction, and have won awards at CVPR, WWW, CHI, ECCV, ISMAR, TVX, and ICVGIP. His research has also been the subject of a number of articles in popular media outlets such as Forbes, Wired, BBC, New Scientist, and MIT Technology Review. Pushmeet is on the editorial boards of PAMI, JMLR, IJCV, and CVIU.