Dr. Corinne Fredouille is affiliated with the LIA, Laboratoire d’Informatique d’Avignon. She has worked for over fifteen years on the objective assessment of voice and speech disorders in collaboration with the LPL, Laboratoire Parole et Langage, and other national academic and clinical partners. She is currently involved in the RUGBI project. Corinne was promoted to the rank of professor at Avignon University in September 2021. Congratulations Corinne!
Accentual variation at the word level refers to the different ways of pronouncing the same lexical item (Figure A). This kind of variation does not have a contrastive role in French, so it should be processed non-linguistically by native French listeners. This hypothesis was tested in an ABX task in which the ear of presentation was manipulated and both a native (/balɔ̃/ – /baˈlɔ̃/) and a non-native (/ˈbalɔ̃/ – /baˈlɔ̃/) accentual pattern were presented.
(A) Phonemic and prosodic profile for the word /balɔ̃/ (“ball”) produced by a male voice (a) in its unaccented version /balɔ̃/, (b) with primary accent on its second syllable /baˈlɔ̃/, and (c) with primary accent on its first syllable /ˈbalɔ̃/. (B) Percentage of correct responses as a function of the ear of presentation, the type of mismatch, and the type of accentual pattern: (a) the native accentual pattern “unaccented /balɔ̃/ – accented 2nd syllable /baˈlɔ̃/” and (b) the non-native accentual pattern “accented 1st syllable /ˈbalɔ̃/ – accented 2nd syllable /baˈlɔ̃/”. Error bars represent the standard error.
The stimuli A and B varied either in accent (e.g., /ˈbalɔ̃/-/baˈlɔ̃/), in one phoneme (e.g., /baˈlo/-/baˈlɔ̃/), or in both accent and one phoneme (/ˈbalo/-/baˈlɔ̃/). The native pattern resulted in similar performance for accentual and phonemic differences when stimuli were presented to the left ear, but in worse performance for accentual differences than for phonemic differences when stimuli were presented to the right ear (Figure B-a). The non-native pattern resulted in persistent difficulty regardless of the ear of presentation, with worse performance for accentual differences than for phonemic differences (Figure B-b).
These results suggest that a native contrast is processed as a non-native contrast when the stimuli are presented to the right ear, and thus when processing is pushed into the left hemisphere. They also show that, for native French listeners, a non-native accentual contrast never reaches the performance level of a native contrast, regardless of the cerebral hemisphere that primarily processes this information. More generally, the study reveals a right-hemisphere advantage in the processing of accentual variation at the word level, and thus that this kind of information is processed as non-linguistic variation.
Dietrich Stout, Thierry Chaminade, Jan Apel, Ali Shafti, & A. Aldo Faisal
Human behaviors from toolmaking to language are thought to rely on a uniquely evolved capacity for hierarchical action sequencing. Testing this idea will require objective, generalizable methods for measuring the structural complexity of real-world behavior. Here we present a data-driven approach for extracting action grammars from basic ethograms, exemplified with respect to the evolutionarily relevant behavior of stone toolmaking. We analyzed sequences from the experimental replication of ~ 2.5 Mya Oldowan vs. ~ 0.5 Mya Acheulean tools, finding that, while using the same “alphabet” of elementary actions, Acheulean sequences are quantifiably more complex and Oldowan grammars are a subset of Acheulean grammars. We illustrate the utility of our complexity measures by re-analyzing data from an fMRI study of stone toolmaking to identify brain responses to structural complexity. Beyond specific implications regarding the co-evolution of language and technology, this exercise illustrates the general applicability of our method to investigate naturalistic human behavior and cognition.
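The abstract above does not detail the grammar-extraction method, but its core intuition, that more hierarchically structured action sequences admit richer grammars, can be sketched with a greedy Re-Pair-style reduction. Everything below is illustrative: the letters stand in for hypothetical ethogram codes (not the study's actual action alphabet), and the number of extracted rules is only a crude proxy for the complexity measures used in the paper.

```python
from collections import Counter

def repair_complexity(seq):
    """Greedy Re-Pair-style compression: repeatedly replace the most
    frequent adjacent symbol pair with a new nonterminal symbol.
    Returns (number of rules introduced, residual sequence); the rule
    count serves as a crude proxy for grammatical complexity."""
    seq = list(seq)
    rules = {}
    next_id = 0
    while True:
        pairs = Counter(zip(seq, seq[1:]))
        if not pairs:
            break
        pair, count = pairs.most_common(1)[0]
        if count < 2:  # no pair repeats: nothing left to compress
            break
        new_sym = f"R{next_id}"
        next_id += 1
        rules[new_sym] = pair
        out, i = [], 0
        while i < len(seq):  # rewrite the sequence left to right
            if i + 1 < len(seq) and (seq[i], seq[i + 1]) == pair:
                out.append(new_sym)
                i += 2
            else:
                out.append(seq[i])
                i += 1
        seq = out
    return len(rules), seq

# Hypothetical ethogram codes: a repetitive "Oldowan-like" sequence vs.
# a more hierarchically structured "Acheulean-like" one.
print(repair_complexity("ABABABABAB"))
print(repair_complexity("ABCABDABCABDEF"))
```

In this toy setting, the more structured sequence yields more grammar rules despite using a similar alphabet, mirroring (very loosely) the reported Oldowan-vs.-Acheulean contrast.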
René Westerhausen, Adrien Meguerditchian
The corpus callosum enables integration and coordination of cognitive processing between the cerebral hemispheres. In the aging human brain, these functions are affected by progressive axon and myelin deterioration, reflected as atrophy of the midsagittal corpus callosum in old age. In non-human primates, these degenerative processes appear less pronounced: previous morphometric studies on capuchin monkeys, rhesus monkeys, and chimpanzees did not find old-age callosal atrophy. In the present study, we extend these findings by studying callosal development of the olive baboon (Papio anubis) across the lifespan and comparing it to chimpanzee and human data. For this purpose, the total midsagittal area (relative to forebrain volume), subsectional areas, and regional thickness of the corpus callosum were assessed in 91 male and female baboons using non-invasive MRI-based morphometry. The studied age range was 2.5–26.6 years, and lifespan trajectories were fitted using general additive modelling. The relative area of the total and anterior corpus callosum showed a positive linear trajectory. That is, both measures increased slowly but continuously from childhood into old age, and no decline was observed in old age. Thus, comparable with all other non-human primates studied to date, baboons do not show callosal atrophy in old age. This observation lends support to the notion that atrophy of the corpus callosum is a unique characteristic of human brain aging.
Johanna Liebig, Eva Froehlich, Teresa Sylvester, Mario Braun, Hauke R. Heekeren, Johannes C. Ziegler, Arthur M. Jacobs
Alban Letanneux, Jean-Luc Velay, François Viallet, & Serge Pinto
Introduction: Although the motor signs of Parkinson’s disease (PD) are well defined, nonmotor symptoms, including higher-level language deficits, have also been shown to be frequent in patients with PD. In the present study, we used a lexical decision task (LDT) to find out whether access to the mental lexicon is impaired in patients with PD, and whether task performance is affected by bradykinesia. Materials and Methods: Participants were 34 nondemented patients with PD, either without (off) medication (n = 16) or under optimum (on) medication (n = 18). A total of 19 age-matched control volunteers were also recruited. We recorded reaction times (RTs) to the LDT and a simple RT (control) task. In each task, stimuli were either visual or auditory. Statistical analyses consisted of repeated-measures analyses of variance and Tukey’s HSD post hoc tests. Results: In the LDT, participants with PD both off and on medication exhibited intact access to the mental lexicon in both modalities. In the visual modality, patients off medication were just as fast as controls when identifying real words, but slower when identifying pseudowords. In the visual modality of the control task, RTs for pseudowords were significantly longer for PD patients off medication than for controls, revealing an unexpected but significant lexicality effect in patients that was not observed in the auditory modality. Performances of patients on medication did not differ from those of age-matched controls. Discussion: Motor execution was not slowed in patients with PD either off or on medication, in comparison with controls. Regarding lexical access, patients off medication seemed to (1) have difficulty inhibiting a cognitive-linguistic process (i.e., reading) when it was not required (simple reaction time task), and (2) exhibit a specific pseudoword processing deficit in the LDT, which may have been related to impaired lateral word inhibition within the mental lexicon. 
These deficits appeared to be compensated for by medication.
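The lexicality effect discussed above (slower responses to pseudowords than to real words) can be illustrated with a toy computation of the descriptive contrast per group. The RT values, group labels, and condition names below are all hypothetical; the study itself analyzed such data with repeated-measures ANOVAs and Tukey's HSD tests.

```python
import statistics

# Hypothetical reaction times (ms) from a visual lexical decision task,
# one list per (group, stimulus type) cell. Values are invented solely
# to mirror the qualitative pattern described in the abstract.
rts = {
    ("control", "word"): [520, 535, 510, 528],
    ("control", "pseudoword"): [540, 555, 530, 548],
    ("pd_off", "word"): [525, 540, 515, 533],
    ("pd_off", "pseudoword"): [610, 640, 595, 625],
}

def lexicality_effect(group):
    """Mean pseudoword RT minus mean word RT for one participant group."""
    return (statistics.mean(rts[(group, "pseudoword")])
            - statistics.mean(rts[(group, "word")]))

for group in ("control", "pd_off"):
    print(group, round(lexicality_effect(group), 1), "ms")
```

In this toy dataset, the patients-off-medication group shows a markedly larger pseudoword cost than controls, the descriptive signature of the pseudoword processing deficit the abstract describes.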
Michele Scaltritti, Jonathan Grainger, & Stéphane Dufau
We investigated the extent to which accuracy in word identification in foveal and parafoveal vision is determined by variations in the visibility of the component letters of words. To do so we measured word identification accuracy in displays of three three-letter words, one on fixation and the others to the left and right of the central word. We also measured accuracy in identifying the component letters of these words when presented at the same location in a context of three three-letter nonword sequences. In the word identification block, accuracy was highest for central targets and significantly greater for words to the right compared with words to the left. In the letter identification block, we found an extended W-shaped function across all nine letters, with greatest accuracy for the three central letters and for the first and last letter in the complete sequence. Further analyses revealed significant correlations between average letter identification per nonword position and word identification at the corresponding position. We conclude that letters are processed in parallel across a sequence of three three-letter words, hence enabling parallel word identification when letter identification accuracy is high enough.
Elin Runnqvist, Valérie Chanoine, Kristof Strijkers, Chotiga Pattamadilok, Mireille Bonnard, Bruno Nazarian, Julien Sein, Jean-Luc Anton, Lydia Dorokhova, Pascal Belin, & F.-Xavier Alario
An event-related functional magnetic resonance imaging study examined how speakers inspect their own speech for errors. Concretely, we sought to assess 1) the role of the temporal cortex in monitoring speech errors, linked with comprehension-based monitoring; 2) the involvement of the cerebellum in internal and external monitoring, linked with forward modeling; and 3) the role of the medial frontal cortex in internal monitoring, linked with conflict-based monitoring. In a word production task priming speech errors, we observed enhanced involvement of the right posterior cerebellum for trials that were correct, but on which participants were more likely to make a word as compared with a nonword error (contrast of internal monitoring). Furthermore, comparing errors to correct utterances (contrast of external monitoring), we observed increased activation of the same cerebellar region, of the superior medial cerebellum, and of regions in temporal and medial frontal cortex. The involvement of the cerebellum in both internal and external monitoring indicates the use of forward modeling across the planning and articulation of speech. Dissociations between internal and external monitoring in temporal and medial frontal cortex indicate that monitoring of overt errors is more reliant on vocal feedback control.
Amandine Michelas & Sophie Dufour
In two ABX experiments using natural and synthetic stimuli, we examined the ability of French listeners to perceive accentual variation by manipulating the ear of presentation. A native (/balɔ̃/-/baˈlɔ̃/) and a non-native (/ˈbalɔ̃/-/baˈlɔ̃/) accentual contrast were tested. The stimuli A and B varied in accent (/ˈbalɔ̃/-/baˈlɔ̃/), in one phoneme (/baˈlo/-/baˈlɔ̃/), or in both types of information (/ˈbalo/-/baˈlɔ̃/). For the non-native contrast, persistent difficulty was found regardless of the ear of presentation. For the native contrast, better performance was observed when the stimuli were presented to the left ear, and thus when the processing was pushed into the right hemisphere. Our findings also showed that the native contrast was processed as a non-native contrast when the processing was pushed into the left hemisphere. More generally, our study indicates that accentual variation at the word level in French is processed as non-linguistic variation.
This month’s figure is… a figure!
You were recently surveyed regarding the collaborative projects, past and current, in which you are involved. We received no fewer than sixty-six “project sheets”.
These project sheets have been very useful to start preparing the mid-term summary of the institute. The survey is still open, so you can still submit project summaries (please see URL in the emails).