Two papers by Ekta Sood accepted at top AI and NLP conferences

October 16, 2020 / Sabine Sämisch

Papers by SimTech PhD student Ekta Sood have been accepted at two top-tier conferences. In her work on “Improving Natural Language Processing Tasks with Human Gaze-Guided Neural Attention”, accepted for publication at NeurIPS 2020, the thirty-fourth Conference on Neural Information Processing Systems, Ekta Sood and her co-authors make two distinct contributions. First, they build a robust hybrid text saliency model (TSM) that, for the first time, combines a cognitive model with a data-driven approach. Second, they propose a joint modelling approach that allows the TSM to be flexibly adapted to different NLP tasks, with which they show state-of-the-art performance on two different NLP tasks. Taken together, their findings not only demonstrate the feasibility and significant potential of combining cognitive and data-driven models for natural language processing but also show how saliency predictions can be effectively integrated into the attention layer of task-specific neural network architectures to improve performance.

The paper “Interpreting Attention Models with Human Visual Attention in Machine Reading Comprehension” was accepted for publication at CoNLL, a yearly conference organized by SIGNLL (ACL's Special Interest Group on Natural Language Learning). Its core contribution is a new method that leverages eye-tracking data to investigate the relationship between human visual attention and neural attention in machine reading comprehension. In addition, Ekta Sood and her co-authors extend the MovieQA dataset with eye-tracking data, release it as open source, and present an attentive reading visualization tool that helps users gain insights when comparing human and neural attention.

“These are my first first-author papers, both accepted at esteemed conferences. Firstly, the prestigious machine learning conference NeurIPS, where we focused on building a text saliency model that bridges the gap between cognitive models and neural networks. This is a huge accomplishment and I am so grateful to have the support of my co-authors and supervisor. My other first-author paper was accepted at a top NLP conference, CoNLL, where we focused on cognitively inspired methods to analyze and interpret neural attention models in machine reading comprehension. Both acceptances are very exciting and I am so happy that our hard work paid off. I would like to give a big thank you to my co-authors for their support and team effort :) Also to my colleagues for their feedback and suggestions,” says Ekta Sood on the occasion of her papers being accepted.

Congratulations!


Abstract
Improving Natural Language Processing Tasks with Human Gaze-Guided Neural Attention
(Ekta Sood, Simon Tannert, Philipp Müller, Andreas Bulling)

A lack of corpora has so far limited advances in integrating human gaze data as a supervisory signal in neural attention mechanisms for natural language processing (NLP). The authors propose a novel hybrid text saliency model (TSM) that, for the first time, combines a cognitive model of reading with explicit human gaze supervision in a single machine learning framework. They show on four different corpora that the hybrid TSM's duration predictions are highly correlated with human gaze ground truth. They further propose a novel joint modelling approach to integrate the predictions of the TSM into the attention layer of a network designed for a specific upstream task, without the need for task-specific human gaze data. The authors demonstrate that their joint model outperforms the state of the art in paraphrase generation on the Quora Question Pairs corpus by more than 10% in BLEU-4 and achieves state-of-the-art performance for sentence compression on the challenging Google Sentence Compression corpus. As such, their work introduces a practical approach for bridging data-driven and cognitive models and demonstrates a new way to integrate human gaze-guided neural attention into NLP tasks.
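The exact joint modelling architecture is described in the paper itself; as a rough illustration of the general idea, the sketch below shows one simple way to bias an attention layer with token-level saliency scores predicted by a text saliency model. This is a minimal sketch, assuming PyTorch; the function name, the convex-combination mixing scheme, and the lambda_mix parameter are illustrative assumptions, not the authors' actual method.

```python
# Minimal sketch (NOT the authors' implementation): biasing scaled
# dot-product attention with per-token saliency scores from a TSM.
import torch
import torch.nn.functional as F

def saliency_biased_attention(query, keys, values, saliency, lambda_mix=0.5):
    """Mix learned attention weights with gaze-derived saliency.

    query:    (batch, d)        current decoder/query state
    keys:     (batch, seq, d)   encoder states
    values:   (batch, seq, d)   encoder states
    saliency: (batch, seq)      per-token saliency, e.g. from a TSM
    """
    d = query.size(-1)
    # Standard scaled dot-product attention logits and weights.
    logits = torch.einsum("bd,bsd->bs", query, keys) / d ** 0.5
    attn = F.softmax(logits, dim=-1)
    # Turn saliency scores into a distribution over tokens.
    sal = F.softmax(saliency, dim=-1)
    # Convex combination of the two distributions (already sums to 1).
    mixed = (1.0 - lambda_mix) * attn + lambda_mix * sal
    # Weighted sum of values under the mixed attention distribution.
    return torch.einsum("bs,bsd->bd", mixed, values)

# Toy usage with random tensors standing in for real encoder states.
q = torch.randn(2, 64)
k = torch.randn(2, 10, 64)
v = torch.randn(2, 10, 64)
s = torch.rand(2, 10)  # would come from the saliency model in practice
context = saliency_biased_attention(q, k, v, s)
print(context.shape)  # torch.Size([2, 64])
```

In a real system, a fixed mixing weight like lambda_mix would likely be replaced by a task-specific, learned combination, as the joint modelling approach in the paper adapts the TSM to each task.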

Abstract
Interpreting Attention Models with Human Visual Attention in Machine Reading Comprehension
(Ekta Sood, Simon Tannert, Diego Frassinelli, Andreas Bulling and Ngoc Thang Vu)

While neural networks with attention mechanisms have achieved superior performance on many natural language processing tasks, it remains unclear to what extent learned attention resembles human visual attention. In the paper “Interpreting Attention Models with Human Visual Attention in Machine Reading Comprehension”, Sood and her co-authors propose a new method that leverages eye-tracking data to investigate the relationship between human visual attention and neural attention in machine reading comprehension. To this end, they introduce a novel 23-participant eye-tracking dataset, MQA-RC, in which participants read movie plots and answered pre-defined questions. The authors compare state-of-the-art networks based on long short-term memory (LSTM), convolutional neural network (CNN), and XLNet Transformer architectures. For the LSTM and CNN models, higher similarity to human attention significantly correlates with better performance. However, they show that this relationship does not hold for the XLNet models, even though XLNet performs best on this challenging task. The results suggest that different architectures learn rather different neural attention strategies, and that similarity between neural and human attention does not guarantee the best performance.
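The paper defines its own comparison methodology; as a loose illustration of how such a similarity could be quantified, the sketch below computes a rank correlation between a model's attention weights and human fixation durations over the tokens of one passage. This is a minimal sketch, assuming NumPy and SciPy; the function name and the toy data are illustrative assumptions, not taken from the paper.

```python
# Illustrative sketch (NOT the paper's exact protocol): quantifying how
# similar a model's attention over tokens is to human gaze, here via the
# Spearman rank correlation between attention weights and per-token
# fixation durations. All names and numbers below are fabricated.
import numpy as np
from scipy.stats import spearmanr

def attention_gaze_similarity(attn_weights, fixation_durations):
    """Spearman rho between model attention and human fixation durations.

    Both arguments are 1-D sequences of length n_tokens for one passage.
    Rank correlation is invariant to monotonic rescaling, so neither
    vector needs to be normalized first.
    """
    attn = np.asarray(attn_weights, dtype=float)
    gaze = np.asarray(fixation_durations, dtype=float)
    rho, p_value = spearmanr(attn, gaze)
    return rho, p_value

# Toy example: 8 tokens with fabricated attention weights and
# fixation durations in milliseconds.
model_attn = [0.02, 0.10, 0.30, 0.05, 0.25, 0.08, 0.15, 0.05]
human_fix = [120, 310, 540, 90, 480, 150, 260, 100]
rho, p = attention_gaze_similarity(model_attn, human_fix)
print(f"Spearman rho = {rho:.2f} (p = {p:.3f})")
```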
