| Time: | July 28, 2021, 2:00 p.m. (CEST) |
| --- | --- |
| Link: | https://unistuttgart.webex.com/unistuttgart/j.php?MTID=mc4d6de7ac852f7314784a7fc49268987 |
With this ML Session series, we intend to provide individual, independent lecture sessions on ML-related topics. This time, Zeynep Akata will talk about "Explainable Visual Recognition with Minimal Supervision".
Clearly explaining a rationale for a classification decision to an end-user can be as important as the decision itself. Existing approaches for deep visual recognition are generally opaque and do not output any justification text; contemporary vision-language models can describe image content but fail to take into account the class-discriminative image properties that justify visual predictions. In this talk, I will present my past and current work on Explainable Machine Learning combining vision and language, where we show (1) how to learn simple and compositional representations of images that focus on discriminating properties of the visible object, jointly predicting a class label and explaining why, or why not, the predicted label is chosen for the image, (2) how to evaluate the effectiveness of these explanations on the zero-shot learning task, and (3) how to improve the explainability of deep models via conversations.
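To make point (1) of the abstract concrete, the sketch below illustrates one generic way such a model could be wired up: a classification head predicts the label, and a text decoder conditioned on both the image features and the predicted class generates a justification. This is only an illustrative assumption, not the speaker's actual architecture; all names (`JointClassifierExplainer`, the dimensions, the vocabulary size) are hypothetical placeholders.

```python
# Illustrative sketch only: a model that jointly predicts a class label and
# generates a textual explanation of that prediction. Not the speaker's method.
import torch
import torch.nn as nn

class JointClassifierExplainer(nn.Module):
    def __init__(self, feat_dim=2048, num_classes=200, vocab_size=5000, hidden=512):
        super().__init__()
        # Classification head: predicts the class label from image features.
        self.classifier = nn.Linear(feat_dim, num_classes)
        # Explanation decoder: conditioned on image features and class scores,
        # so the generated text can refer to class-discriminative properties.
        self.embed = nn.Embedding(vocab_size, hidden)
        self.init_h = nn.Linear(feat_dim + num_classes, hidden)
        self.decoder = nn.LSTM(hidden, hidden, batch_first=True)
        self.vocab_out = nn.Linear(hidden, vocab_size)

    def forward(self, img_feats, explanation_tokens):
        logits = self.classifier(img_feats)                      # class prediction
        cond = torch.cat([img_feats, logits.softmax(-1)], dim=-1)
        h0 = torch.tanh(self.init_h(cond)).unsqueeze(0)          # condition the decoder
        c0 = torch.zeros_like(h0)
        emb = self.embed(explanation_tokens)
        out, _ = self.decoder(emb, (h0, c0))
        word_logits = self.vocab_out(out)                        # next-word scores
        return logits, word_logits

# Usage: training would combine a classification loss on `logits` with a
# language-modeling loss on `word_logits` against ground-truth explanations.
model = JointClassifierExplainer()
feats = torch.randn(4, 2048)              # e.g. precomputed CNN image features
tokens = torch.randint(0, 5000, (4, 12))  # tokenized explanation sentences
class_logits, word_logits = model(feats, tokens)
```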
Zeynep Akata is a professor of Computer Science (W3) within the Cluster of Excellence Machine Learning at the University of Tübingen. After completing her PhD at INRIA Rhône-Alpes with Prof. Cordelia Schmid (2014), she worked as a post-doctoral researcher at the Max Planck Institute for Informatics with Prof. Bernt Schiele (2014-17) and at the University of California, Berkeley with Prof. Trevor Darrell (2016-17). Before moving to Tübingen in October 2019, she was an assistant professor at the University of Amsterdam with Prof. Max Welling (2017-19). She received a Lise Meitner Award for Excellent Women in Computer Science from the Max Planck Society in 2014, a young scientist honour from the Werner-von-Siemens-Ring Foundation in 2019, and an ERC Starting Grant from the European Commission in 2019. Her research interests include multimodal learning and explainable AI.