BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//ILCB - ECPv6.15.20//NONSGML v1.0//EN
CALSCALE:GREGORIAN
METHOD:PUBLISH
X-ORIGINAL-URL:https://www.ilcb.fr
X-WR-CALDESC:Events for ILCB
REFRESH-INTERVAL;VALUE=DURATION:PT1H
X-Robots-Tag:noindex
X-PUBLISHED-TTL:PT1H
BEGIN:VTIMEZONE
TZID:Europe/Paris
BEGIN:DAYLIGHT
TZOFFSETFROM:+0100
TZOFFSETTO:+0200
TZNAME:CEST
DTSTART:20150329T010000
END:DAYLIGHT
BEGIN:STANDARD
TZOFFSETFROM:+0200
TZOFFSETTO:+0100
TZNAME:CET
DTSTART:20151025T010000
END:STANDARD
BEGIN:DAYLIGHT
TZOFFSETFROM:+0100
TZOFFSETTO:+0200
TZNAME:CEST
DTSTART:20160327T010000
END:DAYLIGHT
BEGIN:STANDARD
TZOFFSETFROM:+0200
TZOFFSETTO:+0100
TZNAME:CET
DTSTART:20161030T010000
END:STANDARD
BEGIN:DAYLIGHT
TZOFFSETFROM:+0100
TZOFFSETTO:+0200
TZNAME:CEST
DTSTART:20170326T010000
END:DAYLIGHT
BEGIN:STANDARD
TZOFFSETFROM:+0200
TZOFFSETTO:+0100
TZNAME:CET
DTSTART:20171029T010000
END:STANDARD
END:VTIMEZONE
BEGIN:VEVENT
DTSTART;VALUE=DATE:20160610
DTEND;VALUE=DATE:20160611
DTSTAMP:20260423T132601Z
CREATED:20190212T172340Z
LAST-MODIFIED:20190212T172343Z
UID:2265-1465516800-1465603199@www.ilcb.fr
SUMMARY:Annotating Information Structure in Authentic Data: From Expert Annotation to Crowd Sourcing Experiments by Detmar Meurers\, Kordula De Kuthy (University of Tübingen)
DESCRIPTION:Annotating Information Structure in Authentic Data: From Expert Annotation to Crowd Sourcing Experiments by Detmar Meurers\, Kordula De Kuthy (University of Tübingen)\nWhile the formal pragmatic concepts in information structure\, such as the focus of an utterance\, are precisely defined in theoretical linguistics and potentially very useful in conceptual and practical terms\, it has turned out to be difficult to reliably annotate such notions in corpus data (Ritz et al.\, 2008; Calhoun et al.\, 2010). We present a large-scale focus annotation effort designed to overcome this problem. Our annotation study is based on the task-based corpus CREG (Ott et al.\, 2012)\, which consists of answers to explicitly given reading comprehension questions. We compare focus annotation by trained annotators with a crowd-sourcing setup making use of untrained native speakers. Given the task context and an annotation process incrementally making the question form and answer type explicit\, the trained annotators reach substantial agreement for focus annotation. Interestingly\, the crowd-sourcing setup also supports high-quality annotation for specific subtypes of data. To refine the crowd-sourcing setup\, we introduce the Consensus Cost as a measure of agreement within the crowd. We investigate the usefulness of Consensus Cost as a measure of crowd annotation quality both intrinsically\, in relation to the expert gold standard\, and extrinsically\, by integrating focus annotation information into a system performing Short Answer Assessment taking into account the Consensus Cost. Finally\, we turn to the question of whether the relevance of focus annotation can be extrinsically evaluated. We show that automatic short-answer assessment indeed significantly improves for focus-annotated data.
URL:https://www.ilcb.fr/event/annotating-information-structure-in-authentic-data-from-expert-annotation-to-crowd-sourcing-experiments-by-detmar-meurers-kordula-de-kuthy-university-of-tubingen/
LOCATION:Salle de conférences\, 5 avenue Pasteur\, Aix-en-Provence\, 13100\, France
CATEGORIES:Seminars
END:VEVENT
END:VCALENDAR