Interpretable and explainable cognitive inspired machine learning systems

PN 6-5 (II)

Project description

Our project is inspired by simulation modeling of human cognitive processes and investigates deep learning methods for modeling working and long-term memory in the interplay between vision and language. We model working memory as a scene graph and long-term memory as a vision and language foundation model (a minimal scene-graph sketch follows the research questions below). We aim to create systems that are interpretable and explainable and can communicate their decisions to humans via natural language, one of the most intuitive human communication modalities. We will mainly contribute to RQ3 within PN 6 by extending its scope to support natural language communication, targeting humans in the loop for human-machine teaming. This will make simulation and visualization pervasive (FC6). To achieve this, we will explore methods to encode and reason over graphs and develop methods for models to learn from small datasets, thereby bridging data-poor and data-rich regimes (FC3). The project addresses three main research questions:

1. How can interpretable machine learning systems be built to identify evidence supporting their decisions?
2. How do we formulate such supporting evidence in natural language for human communication?
3. How do natural language explanations, in combination with visual explanations, influence users’ acceptance and trust?
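
To make the working-memory notion concrete, below is a minimal Python sketch of a scene graph, together with a naive verbalization step that turns graph edges into English statements. The SceneObject and SceneGraph classes, their fields, and the verbalize routine are illustrative assumptions for this page, not the project's actual implementation.

    from dataclasses import dataclass, field

    @dataclass
    class SceneObject:
        # A node in the scene graph: one detected object with its attributes.
        name: str
        attributes: list[str] = field(default_factory=list)

    @dataclass
    class SceneGraph:
        # Working-memory sketch: objects as nodes,
        # (subject, predicate, object) triples as directed edges.
        objects: dict[str, SceneObject] = field(default_factory=dict)
        relations: list[tuple[str, str, str]] = field(default_factory=list)

        def add_object(self, obj_id: str, name: str, attributes=()):
            self.objects[obj_id] = SceneObject(name, list(attributes))

        def add_relation(self, subj_id: str, predicate: str, obj_id: str):
            self.relations.append((subj_id, predicate, obj_id))

        def verbalize(self) -> list[str]:
            # Turn each edge into a simple English sentence, a crude stand-in
            # for the natural language evidence a model could surface to users.
            sentences = []
            for subj_id, predicate, obj_id in self.relations:
                subj, obj = self.objects[subj_id], self.objects[obj_id]
                subj_desc = " ".join(subj.attributes + [subj.name])
                obj_desc = " ".join(obj.attributes + [obj.name])
                sentences.append(f"The {subj_desc} {predicate} the {obj_desc}.")
            return sentences

    # Usage: a two-object scene supporting the answer "the cat is on the mat".
    graph = SceneGraph()
    graph.add_object("o1", "cat", ["black"])
    graph.add_object("o2", "mat")
    graph.add_relation("o1", "is on", "o2")
    print(graph.verbalize())  # ['The black cat is on the mat.']

Representing the scene as such relation triples is a common choice because it keeps the evidence both machine-readable (for graph encoding and reasoning) and directly translatable into natural language for the user.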

Project information

Project title: Interpretable and explainable cognitive inspired machine learning systems
Project leaders: Ngoc Thang Vu (Daniel Weiskopf, Mathias Niepert)
Project staff: Pascal Tilli, doctoral researcher
Project duration: May 2022 - November 2025
Project number: PN 6-5 (II)

Publications: PN 6-5 and PN 6-5 (II)