Data-integrated simulation of human perception and cognition

PN 7-5

Project description

The “Digital Human Model”, one of the three visionary examples of the EXC, is key for future applications, for example in personalised healthcare. Previous efforts within EXC SimTech have achieved significant advances towards realising this vision by developing data-integrated simulation methods for biomechanics and systems biology. However, humans are much more than their biomechanical system, and the vision can only be fully achieved if central processes of human perception and cognition are also taken into account and integrated into a holistic digital human model. The goal of this project is to fill this gap and to simulate human cognition by combining cognitive architectures with data-driven learning methods. More specifically, the project will investigate data-integrated methods to simulate the complex interplay between human visual attention, working and long-term memory, and interactive behaviour, with and without the inclusion of physiological data such as human gaze. We will demonstrate and empirically evaluate the newly developed methods in an interactive pervasive simulation (augmented reality) application.
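As an illustration of what "data-integrated" can mean in this setting, the sketch below shows one possible way to couple a data-driven attention model with recorded human gaze: the model's attention distribution is regularised towards the human fixation density during training. This is only a minimal, hypothetical example in Python/PyTorch; the function name, tensor shapes, and loss weight are assumptions chosen for illustration and do not reproduce the project's actual models.

# Minimal sketch (illustrative assumption, not the project's implementation):
# nudge a model's attention distribution towards recorded human gaze densities.
import torch
import torch.nn.functional as F


def gaze_attention_loss(model_attention: torch.Tensor,
                        gaze_heatmap: torch.Tensor,
                        eps: float = 1e-8) -> torch.Tensor:
    """KL divergence between model attention weights and a normalised human
    gaze fixation density over the same tokens/regions, both shaped [batch, n]."""
    p_gaze = gaze_heatmap / (gaze_heatmap.sum(dim=-1, keepdim=True) + eps)
    log_p_model = torch.log(model_attention + eps)
    return F.kl_div(log_p_model, p_gaze, reduction="batchmean")


# Usage: combine the gaze term with the task objective so attention becomes
# more human-like while the task loss is still optimised.
task_loss = torch.tensor(0.0)                           # placeholder, e.g. cross-entropy
attention = torch.softmax(torch.randn(4, 10), dim=-1)   # model attention (dummy values)
gaze = torch.rand(4, 10)                                # recorded fixation densities (dummy values)
total_loss = task_loss + 0.1 * gaze_attention_loss(attention, gaze)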

Project information

Project title: Data-integrated simulation of human perception and cognition
Project leaders: Andreas Bulling (Ngoc Thang Vu)
Project duration: June 2019 – November 2022
Project number: PN 7-5

Publications PN 7-5

  1. 2024

    1. F. Zermiani, P. Dhar, E. Sood, F. Kögel, A. Bulling, and M. Wirzberger, “InteRead: An Eye Tracking Dataset of Interrupted Reading,” in Proc. 31st Joint International Conference on Computational Linguistics, Language Resources and Evaluation (LREC-COLING), 2024, pp. 9154–9169. [Online]. Available: https://aclanthology.org/2024.lrec-main.802/
  2. 2023

    1. E. Sood, L. Shi, M. Bortoletto, Y. Wang, P. Müller, and A. Bulling, “Improving Neural Saliency Prediction with a Cognitive Model of Human Visual Attention,” in Proc. 45th Annual Meeting of the Cognitive Science Society (CogSci), Jul. 2023, pp. 3639–3646.
    2. P. Elagroudy et al., “Impact of Privacy Protection Methods of Lifelogs on Remembered Memories,” in Proc. ACM SIGCHI Conference on Human Factors in Computing Systems (CHI), 2023, pp. 1–10. doi: 10.1145/3544548.3581565.
    3. F. Strohm, E. Sood, D. Thomas, M. Bâce, and A. Bulling, “Facial Composite Generation with Iterative Human Feedback,” in Proc. 1st Gaze Meets ML Workshop, I. Lourentzou, J. Wu, S. Kashyap, A. Karargyris, L. A. Celi, B. Kawas, and S. Talathi, Eds., vol. 210, PMLR, 2023, pp. 165–183. [Online]. Available: https://proceedings.mlr.press/v210/strohm23a.html
  3. 2022

    1. A. Abdou, E. Sood, P. Müller, and A. Bulling, “Gaze-enhanced Crossmodal Embeddings for Emotion Recognition,” in Proc. International Symposium on Eye Tracking Research and Applications (ETRA), vol. 6, 2022, pp. 1–18. doi: 10.1145/3530879.
    2. A. Abdessaied, E. Sood, and A. Bulling, “Video Language Co-Attention with Multimodal Fast-Learning Feature Fusion for VideoQA,” in Proc. 7th Workshop on Representation Learning for NLP, Association for Computational Linguistics, 2022, pp. 143–155. doi: 10.18653/v1/2022.repl4nlp-1.15.
  4. 2021

    1. F. Strohm, E. Sood, S. Mayer, P. Müller, M. Bâce, and A. Bulling, “Neural Photofit: Gaze-based Mental Image Reconstruction,” in Proc. IEEE/CVF International Conference on Computer Vision (ICCV), IEEE, 2021, pp. 245–254. doi: 10.1109/ICCV48922.2021.00031.
    2. E. Sood, F. Kögel, F. Strohm, P. Dhar, and A. Bulling, “VQA-MHUG: A Gaze Dataset to Study Multimodal Neural Attention in VQA,” in Proc. ACL SIGNLL Conference on Computational Natural Language Learning (CoNLL), Association for Computational Linguistics, Nov. 2021, pp. 27–43. doi: 10.18653/v1/2021.conll-1.3.
  5. 2020

    1. P. Müller, E. Sood, and A. Bulling, “Anticipating Averted Gaze in Dyadic Interactions,” in Proc. ACM Symposium on Eye Tracking Research and Applications, Stuttgart, Germany: Association for Computing Machinery, Jun. 2020, pp. 1–10. doi: 10.1145/3379155.3391332.
    2. E. Sood, S. Tannert, D. Frassinelli, A. Bulling, and N. T. Vu, “Interpreting Attention Models with Human Visual Attention in Machine Reading Comprehension,” in Proc. 24th Conference on Computational Natural Language Learning, Online: Association for Computational Linguistics, Nov. 2020, pp. 12–25. doi: 10.18653/v1/2020.conll-1.2.
    3. E. Sood, S. Tannert, P. Mueller, and A. Bulling, “Improving Natural Language Processing Tasks with Human Gaze-Guided Neural Attention,” in Advances in Neural Information Processing Systems, H. Larochelle, M. Ranzato, R. Hadsell, M. F. Balcan, and H. Lin, Eds., vol. 33, Curran Associates, Inc., 2020, pp. 6327–6341. [Online]. Available: https://proceedings.neurips.cc/paper/2020/file/460191c72f67e90150a093b4585e7eb4-Paper.pdf