A holistic measure of inter-annotation agreement with continuous data

Via Zoom

Rachid Riad (École Normale Supérieure - Inria - Inserm)

Abstract: Inter-rater reliability (agreement) measures the degree of agreement among raters who describe, code, or assess the same phenomenon. Most coefficients measuring such agreement in psychology and the natural sciences (e.g., α, κ) focus on the categorization of events. Yet, annotations of speech, and especially of conversational spontaneous […]
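As background for the categorical coefficients the abstract mentions, here is a minimal sketch of Cohen's κ for two raters assigning categorical labels (an illustrative stdlib-only implementation; the labels and data below are invented for the example, and this is not the method presented in the talk):

```python
from collections import Counter

def cohen_kappa(rater_a, rater_b):
    """Cohen's kappa for two raters labelling the same items."""
    assert len(rater_a) == len(rater_b)
    n = len(rater_a)
    # Observed agreement: fraction of items both raters labelled identically.
    p_o = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Expected chance agreement, from each rater's label marginals.
    counts_a, counts_b = Counter(rater_a), Counter(rater_b)
    p_e = sum(counts_a[c] * counts_b[c] for c in counts_a) / n**2
    # Kappa: observed agreement corrected for chance.
    return (p_o - p_e) / (1 - p_e)

# Hypothetical annotations of six items by two raters.
a = ["yes", "yes", "no", "yes", "no", "no"]
b = ["yes", "no", "no", "yes", "no", "yes"]
print(round(cohen_kappa(a, b), 3))  # → 0.333
```

Coefficients of this family assume discrete categories, which is precisely the limitation for continuous data that the talk addresses.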