BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//ILCB - ECPv6.15.20//NONSGML v1.0//EN
CALSCALE:GREGORIAN
METHOD:PUBLISH
X-ORIGINAL-URL:https://www.ilcb.fr
X-WR-CALDESC:Events for ILCB
REFRESH-INTERVAL;VALUE=DURATION:PT1H
X-Robots-Tag:noindex
X-PUBLISHED-TTL:PT1H
BEGIN:VTIMEZONE
TZID:Europe/Paris
BEGIN:DAYLIGHT
TZOFFSETFROM:+0100
TZOFFSETTO:+0200
TZNAME:CEST
DTSTART:20220327T010000
END:DAYLIGHT
BEGIN:STANDARD
TZOFFSETFROM:+0200
TZOFFSETTO:+0100
TZNAME:CET
DTSTART:20221030T010000
END:STANDARD
BEGIN:DAYLIGHT
TZOFFSETFROM:+0100
TZOFFSETTO:+0200
TZNAME:CEST
DTSTART:20230326T010000
END:DAYLIGHT
BEGIN:STANDARD
TZOFFSETFROM:+0200
TZOFFSETTO:+0100
TZNAME:CET
DTSTART:20231029T010000
END:STANDARD
BEGIN:DAYLIGHT
TZOFFSETFROM:+0100
TZOFFSETTO:+0200
TZNAME:CEST
DTSTART:20240331T010000
END:DAYLIGHT
BEGIN:STANDARD
TZOFFSETFROM:+0200
TZOFFSETTO:+0100
TZNAME:CET
DTSTART:20241027T010000
END:STANDARD
END:VTIMEZONE
BEGIN:VEVENT
DTSTART;TZID=Europe/Paris:20230609T090000
DTEND;TZID=Europe/Paris:20230609T110000
DTSTAMP:20260410T175857Z
CREATED:20230516T111611Z
LAST-MODIFIED:20230516T111719Z
UID:33748-1686301200-1686308400@www.ilcb.fr
SUMMARY:Word learning in honor of Lila Gleitman: Perception of structure from language and world
DESCRIPTION:John Trueswell (Dept of Psychology\, University of Pennsylvania) \nAbstract: It is tempting to conclude that children learn the meanings of words by observing their circumstances of use (e.g.\, observing that the word “dog” often co-occurs with dog-sightings). If this is the case though\, how do children ever learn the vast majority of the words that they know? Consider most of the words in this abstract\, many of which a 3-year-old produces and understands: like “what”\, “not”\, “language”\, “do”\, “think”\, “learn.” Can these words be learned by observation of their circumstances of use? There are no what-sightings that go with “what”\, and no not-sightings that go with “not”; thinking-sightings often look like sleeping-sightings and sitting-sightings. How do children go about learning these “hard words” despite no explicit instruction? I will present research\, some of which was done with my longtime collaborator Lila Gleitman\, that is designed to answer these questions. I’ll focus on the unexpected role that word-to-world pairings nevertheless play in the learning of hard words. I’ll propose a framework for word-to-world mapping in which perception of the referent world itself offers us significant structure\, and the syntactic structure we gather from the language is connected to these representations. This connection\, and the structural representations on both sides of the word-to-world coin\, allow us to see what we shouldn’t be able to see\, and hear what we shouldn’t be able to hear. I’ll offer experimental evidence that our perception of the world includes rapid extraction of event structure\, and hypothesize that this allows access to abstract relational meaning even in young children. These representations play an important role in understanding how situational contexts permit children to learn even the most abstract of terms\, such as symmetrical predicates (e.g.\, the meaning of “equal”) and truth-functional negation (e.g.\, the meaning of “not”).
URL:https://www.ilcb.fr/event/word-learning-in-honor-of-lila-gleitman-perception-of-structure-from-language-and-world/
LOCATION:Salle 9-050\, Université Aix-Marseille Campus St Charles 3 Pl. Victor Hugo\, Marseille\, France
CATEGORIES:Seminars
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=Europe/Paris:20230621T140000
DTEND;TZID=Europe/Paris:20230621T150000
DTSTAMP:20260410T175857Z
CREATED:20230606T152634Z
LAST-MODIFIED:20230606T152634Z
UID:33806-1687356000-1687359600@www.ilcb.fr
SUMMARY:Interactive Robot Learning
DESCRIPTION:Summary: In this talk\, we focus on the main methods and models enabling humans to teach embodied social agents\, such as social robots\, using natural interaction. Humans guide the learning process of such agents by providing various teaching signals\, which can take the form of feedback\, demonstrations\, and instructions. This overview describes how human teaching strategies are incorporated within machine learning models. We detail the approaches by providing definitions\, technical descriptions\, examples\, and discussions of limitations. We also address natural human biases during teaching. We then present applications such as interactive task learning\, robot behavior learning\, and socially assistive robotics. Finally\, we discuss research opportunities and challenges of interactive robot learning. \nBio: Prof. Mohamed Chetouani is a Full Professor in signal processing and machine learning for human-machine interaction. He is affiliated with the PIRoS (Perception\, Interaction et Robotique Sociales) research team at the Institute for Intelligent Systems and Robotics (CNRS UMR 7222)\, Sorbonne University (formerly Pierre and Marie Curie University). His activities cover social signal processing\, social robotics\, and interactive machine learning\, with applications in psychiatry\, psychology\, social neuroscience\, and education. He was the coordinator of the ANIMATAS H2020 Marie Sklodowska-Curie European Training Network (2018-2022). Since 2019\, he has been the President of the Sorbonne University Ethics Committee. He has been involved in several educational activities\, including the organization of summer schools. He is a member of the EU Network of Human-Centered AI. He is General Chair of ACM ICMI 2023. He is in charge of the inclusion of Students with Disabilities for the Faculty of Science and Engineering of Sorbonne University.
URL:https://www.ilcb.fr/event/interactive-robot-learning/
LOCATION:FRUMAM\, 3 place Victor Hugo\, Marseille\, 13001\, France
CATEGORIES:Seminars
END:VEVENT
END:VCALENDAR