Special ILCB Workshop “Speech, Language and the Brain”
Salle des Voutes – Faculté St Charles Marseille
Sonja Kotz : Dept. of Neuropsychology and Psychopharmacology, Maastricht University
Ken Pugh : President and Director of Research at Haskins Laboratories
Anne-Lise Giraud : University of Geneva
Karsten Steinhauer : School of Communication Sciences and Disorders, Faculty of Medicine, McGill University
- 9h – 9h45 Prof. Sonja Kotz : Cortico-subcortico-cortical circuitry involvement in perception and speech
- 9h45 – 10h30 Prof. Kenneth R. Pugh : Building the literate brain: How learning to read depends upon, and changes, brain organization for spoken language.
- 11h – 11h45 Prof. Anne-Lise Giraud : Speech processing with (and without) cortical oscillations
- 11h45 – 12h30 Prof. Karsten Steinhauer : Eliciting ERP components for morphosyntactic agreement mismatches in grammatical sentences
- 12h30 – 14h Lunch
Speech processing with (and without) cortical oscillations
Perception of connected speech relies on accurate syllabic segmentation and phonemic encoding. These processes are essential because they determine the building blocks that we can manipulate mentally to understand and produce speech. Speech segmentation and encoding might be underpinned by specific interactions between the acoustic rhythms of speech and coupled neural oscillations in the theta and low-gamma bands. To address how neural oscillations interact with speech and intervene in phonological processing, we use neurocomputational models, and establish that recognition performance is generally better with than without neural oscillations. Based on these models, we hypothesize that if low-gamma oscillations are disrupted, speech perception might cause difficulties in mapping phonemic onto graphemic representations when learning to read. Using MEG, EEG combined with fMRI, and neurostimulation, we found that dyslexia is associated with a specific deficit in low-gamma activity in auditory cortex, and that this deficit alone accounts for several facets of the disorder. We further show that boosting 30 Hz neural activity in left auditory cortex using transcranial alternating current stimulation selectively improves phonological performance and reading efficiency. Altogether, these results suggest a causal role of oscillatory processes in speech perception.
Cortico-subcortico-cortical circuitry involvement in perception and speech
While the cerebellum's role in motor control is well documented, less is known about its contributions to perception and speech. Considering temporo-cerebellar-thalamo-cortical circuitry and its respective connectivity patterns, cerebellar contributions should be further explored across domains, as they (i) simulate cortical information processing and (ii) compare expected and actual outcomes of that simulation, leading to adaptation in cortical target areas. I will discuss frameworks and present empirical evidence encompassing action, perception, and speech in support of this idea.
Building the literate brain: How learning to read depends upon, and changes, brain organization for spoken language.
Kenneth R. Pugh
The development of skilled reading involves a major re-organization of language systems in the brain. We will present ongoing research from our lab on the genetic and neurobiological foundations of learning to read across writing systems, with particular focus on bi-directional dependencies between brain pathways that are critical in linking spoken and written language. Our research suggests that print/speech convergence in language cortex accounts for individual differences in reading outcomes in high- and low-risk learners. New longitudinal findings from our lab using computational models to better understand critical gene-brain-behavior connections in early language and speech motor development and reading are discussed in detail, including new findings with magnetic resonance spectroscopy and multimodal brain imaging that reveal how excitatory and inhibitory neurochemistry moderates language and reading development in high-risk children. Finally, we discuss recent studies that extend this brain research into second language learning.
Eliciting ERP components for morphosyntactic agreement mismatches in grammatical sentences
French subject-verb agreement has specific properties relevant to the study of agreement processing, which have not been systematically studied in the ERP literature. Furthermore, there is increasing interest in ERP methods that do not rely on violation paradigms. Therefore, we examined whether the auditory presentation of a grammatical sentence in French, combined with a picture that does not match its morphosyntactic features, would elicit the same ERP components as in classic error-based paradigms. We created various cross-modal number (singular/plural) mismatches to elicit LAN-P600 responses to agreement mismatches (Molinaro et al., 2011; Royle et al., 2013) and semantic verb/action (rather than noun/object) mismatches to elicit N400s (Royle et al., 2013; Willems et al., 2008).
Twenty-eight French-speaking adults listened to sentences describing scenes depicted while their EEG was recorded. We varied the type and amount of number cues available in each sentence using two manipulations. First, we manipulated the verb type, using either verbs whose number cue was audible through subject (clitic) pronoun liaison (LIAISon verbs: e.g., elle/s aime/nt [ɛlɛm]/[ɛlzɛm] ‘she/they love’), or verbs whose number cue was audible on the verb ending (CONSonant-final verbs, e.g., il/s rugi-t/-ssent [ilʁyʒi]/[ilʁyʒɪs] ‘he/they roars/roar’). Second, we manipulated the sentence-initial context: each sentence was preceded either by a neutral context (e.g., In the evening) providing no number cue, or by a subject noun phrase (NP, e.g., Les lions [lelijɔ̃] ‘The lions’) containing a subject number cue on the determiner. Number mismatches were created through mismatches between the number of visually-presented agents and morphosyntactic number cues in the auditory stimuli.
Accuracy for acceptability judgments was nearly at ceiling across conditions (86.5% to 97.6%). As expected, the semantic action/verb mismatch elicited classic N400s followed by additional negativities. Number mismatches in sentence-initial contexts elicited broadly distributed N400s followed by a P600, suggesting that non-linguistic visual information can be used immediately (in less than 500 ms) to make strong predictions about appropriate linguistic representations. For number mismatches disambiguated on LIAIS verbs, we observed an early-onset sustained anterior negativity (eAN), followed by a centro-parietal N400 and a P600, indicating that eANs are not specific to phrase structure violations (Hasting & Kotz, 2008; contra Friederici, 2002, 2011). CONS verbs elicited an eAN which faded due to an overlapping P600 and reappeared after the P600, a pattern previously described for various syntactic violations in auditory ERP studies (Steinhauer & Drury, 2012). Thus, the eAN and P600 temporarily cancelled each other out. The fact that the frontal negativity lasted beyond the P600 duration (as in previous auditory agreement studies, e.g., Hasting & Kotz, 2008) suggests that the P600 does not always reflect the final stage of sentence evaluation processes.
The present study demonstrates for the first time that perfectly grammatical sentences can elicit classic ERP components usually found in morphosyntactic violation paradigms. We discuss how distinct psycholinguistic processes modulated the ERPs as a function of (1) number (singular vs. plural mismatch) and (2) type of mismatch disambiguation (determiner, LIAIS verb, and CONS verb). Possible applications of this new cross-modal paradigm in developmental research will also be addressed.