Emotion Annotations: Understanding Annotators' Disagreements

Enrica Troiano

Institute for NLP, University of Stuttgart

Emotion analysis aims to automatically understand the emotional content of text. This covers a range of phenomena, from basic, discrete emotions to more fine-grained affective information, such as intensity. Like most ML-based tasks, emotion analysis relies on manually annotated data and therefore faces the problem of annotation subjectivity: it is particularly challenging to achieve substantial agreement on emotions.

In this presentation, I will address two annotation tasks, each of which faces distinct issues leading to disagreement. In one setting, human judges infer emotion intensities; in the other, they annotate specific emotion components (cognitive appraisals). I will show that intensity annotations correlate both with annotators' confidence and with their agreement. For cognitive appraisal annotations, I will discuss how reconstructing emotion components from descriptions of events is particularly challenging when annotators are not given additional emotional information.

This is joint work with Jan Hofmann, Roman Klinger, and Sebastian Padó.

Week 19 2020/2021

Thursday 11th March 2021
1:00-2:00pm

Online: join mailing list or contact organisers to receive link