Visual analytics for deep learning

PN 6-4

Project description

This project addresses the research problem of visualization for machine learning (Vis4ML). We aim to open the black box of machine learning for the case of deep learning (DL), making DL models more transparent and controllable. To this end, we develop and evaluate visual analytics methods that help users understand, improve, and control DL models. We plan both to show the learned features and to make the learning and decision processes more transparent and controllable by giving humans access to the internal information of DL models, so that they can understand model decisions and improve the final performance. A key challenge is the complexity of DL models: direct visualization of the models themselves is only partially possible and useful. Therefore, we leverage the strengths of visual analytics, relying on automatic data analysis as much as possible and combining it with interactive visualization. For the automatic analysis, we investigate both unsupervised machine learning (e.g., dimension reduction) and supervised machine learning methods (not necessarily DL). In this sense, we apply machine learning (as a data analysis approach) to the (interactive) visualization of machine learning (as the learning problem in the application areas): ML4Vis4ML. We investigate a representative set of application domains, including fluid mechanics and cognition-inspired learning related to visual attention and natural language processing.
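To make the ML4Vis4ML idea concrete, the following minimal sketch projects the hidden activations of a small classifier into 2D with an unsupervised dimension-reduction method so that the model's internal state can be explored visually. The tiny MLP, the digits dataset, and the choice of t-SNE are illustrative assumptions for this example only, not the models or tools developed in the project.

```python
# Minimal sketch (illustrative only): use unsupervised dimension reduction
# to embed a model's internal activations in 2D for interactive inspection.
# The stand-in MLP, the digits data, and t-SNE are assumptions for this
# example, not the project's actual pipeline.
import numpy as np
from sklearn.datasets import load_digits
from sklearn.neural_network import MLPClassifier
from sklearn.manifold import TSNE

X, y = load_digits(return_X_y=True)

# Train a small stand-in "deep" model.
model = MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=300, random_state=0)
model.fit(X, y)

def hidden_activations(mlp, data):
    """Recompute the activations of the last hidden layer (ReLU is the default)."""
    a = data
    for W, b in zip(mlp.coefs_[:-1], mlp.intercepts_[:-1]):
        a = np.maximum(a @ W + b, 0.0)
    return a

H = hidden_activations(model, X)

# Unsupervised dimension reduction: embed the activations in 2D.
embedding = TSNE(n_components=2, random_state=0).fit_transform(H)
print(embedding.shape)  # (n_samples, 2), ready for an interactive scatter plot
```

In a visual analytics system, such an embedding would be shown as a linked, interactive scatter plot (colored by predicted class, for example) so that users can relate clusters of internal states to model decisions.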

Project information

Project title: Visual analytics for deep learning
Project leaders: Daniel Weiskopf (Ngoc Thang Vu)
Project duration: March 2019 – August 2022
Project number: PN 6-4

Publications of PN 6-4 and PN 6-4 (II)

  1. 2023

    1. T. Munz-Körner, S. Künzel, and D. Weiskopf, “Supplemental Material for ‘Visual-Explainable AI: The Use Case of Language Models.’” 2023. doi: 10.18419/darus-3456.
    2. N. Schäfer et al., “Visual Analysis of Scene-Graph-Based Visual Question Answering,” in Proceedings of the 16th International Symposium on Visual Information Communication and Interaction. Guangzhou, China: Association for Computing Machinery, Oct. 2023, pp. 1–8. doi: 10.1145/3615522.3615547.
  2. 2021

    1. T. Munz, D. Väth, P. Kuznecov, N. T. Vu, and D. Weiskopf, “Visual-Interactive Neural Machine Translation,” in Graphics Interface 2021, 2021. [Online]. Available: https://openreview.net/forum?id=DQHaCvN9xd
    2. R. Garcia, T. Munz, and D. Weiskopf, “Visual analytics tool for the interpretation of hidden states in recurrent neural networks,” Visual Computing for Industry, Biomedicine, and Art, vol. 4, Art. no. 24, Sep. 2021. doi: 10.1186/s42492-021-00090-0.
  3. 2020

    1. T. Munz, N. Schaefer, T. Blascheck, K. Kurzhals, E. Zhang, and D. Weiskopf, “Demo of a Visual Gaze Analysis System for Virtual Board Games,” in ACM Symposium on Eye Tracking Research and Applications. Stuttgart, Germany: Association for Computing Machinery, 2020. doi: 10.1145/3379157.3391985.
    2. F. Heyen et al., “ClaVis: An Interactive Visual Comparison System for Classifiers,” in Proceedings of the International Conference on Advanced Visual Interfaces. Salerno, Italy: Association for Computing Machinery, 2020. doi: 10.1145/3399715.3399814.
    3. T. Munz, N. Schäfer, T. Blascheck, K. Kurzhals, E. Zhang, and D. Weiskopf, “Comparative Visual Gaze Analysis for Virtual Board Games,” in Proceedings of the 13th International Symposium on Visual Information Communication and Interaction (VINCI 2020), 2020. doi: 10.1145/3430036.3430038.
  4. 2019

    1. T. Munz, M. Burch, T. van Benthem, Y. Poels, F. Beck, and D. Weiskopf, “Overlap-Free Drawing of Generalized Pythagoras Trees for Hierarchy Visualization,” in 2019 IEEE Visualization Conference (VIS), Oct. 2019, pp. 251–255. doi: 10.1109/VISUAL.2019.8933606.
    2. T. Munz, L. L. Chuang, S. Pannasch, and D. Weiskopf, “VisME: Visual microsaccades explorer,” Journal of Eye Movement Research, vol. 12, no. 6, Dec. 2019. doi: 10.16910/jemr.12.6.5.
