Recent news

Multilevel Linguistic Features Constrain Speech Comprehension

Humans are experts at processing speech, but how this feat is accomplished remains a major open question. We investigated how speech comprehension is determined by seven different linguistic features, ranging from the acoustic modulation rate to contextual lexical information. All of these features independently impact the comprehension of accelerated speech, with a clear dominance of the syllabic rate (the orange data point in the accompanying figure). We also derived the channel capacity (i.e., the maximum rate at which information can be transmitted) associated with each linguistic feature. From these observations, we articulate an account of speech comprehension that unifies dynamical, informational and NLP frameworks.

Jérémy Giroud, Jacques Pesnot Lerousseau, François Pellegrino, and Benjamin Morillon. 2023.
The Channel Capacity of Multilevel Linguistic Features Constrains Speech Comprehension.
Cognition 232 (March): 105345. — @HAL
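
For readers less familiar with the information-theoretic term, here is a minimal sketch of the definition being invoked, Shannon's channel capacity; the paper's per-feature estimates are derived empirically, so this formula is background rather than the authors' exact computation:

C = \max_{p(x)} I(X; Y)

where the maximum is taken over input distributions p(x) and I(X; Y) is the mutual information between the transmitted signal X (e.g., the syllable stream) and the listener's decoded percept Y. Expressed per unit time, C upper-bounds the rate, in bits per second, at which a given linguistic feature can convey information.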

Jonathan Grainger

After beginning his career as a CNRS researcher in Experimental Psychology in Paris in 1988, Jonathan Grainger founded and directed the Laboratoire de Psychologie Cognitive (LPC – CNRS & Aix-Marseille University) from 2000 to 2012. During this period, the LPC joined two other CNRS labs to create the Pôle 3C (now the Fédération 3C). He has obtained two ERC Advanced Grants to pursue his work on orthographic processing, first in single-word reading and more recently in the reading of multiple words and sentences. He features in the latest podcasts of the Entretiens ILCB / Fed3C.

The Path of Voices in Our Brain

Benjamin Morillon, Luc H. Arnal, and Pascal Belin.

2022. PLOS Biology 20 (7): e3001742. — @HAL

Categorising voices is crucial for auditory-based social interactions. A recent study by Rupp and colleagues in PLOS Biology capitalises on human intracranial recordings to describe the spatiotemporal pattern of neural activity leading to voice-selective responses in associative auditory cortex.

On the Gestural Origins of Language: What Baboons’ Gestures and Brain Have Told Us after 15 Years of Research

Adrien Meguerditchian.

2022. Ethology Ecology & Evolution 34 (3): 288–302. — @HAL

Nonhuman primates communicate not only with a rich vocal repertoire but also with manual and body gestures. In contrast to great apes, this gestural communication system has been poorly investigated in monkeys. Over the last 15 years, the gestural research we have conducted in the baboon Papio anubis, an Old World monkey species, has revealed potential direct evolutionary continuities with key properties of language, such as intentionality, referentiality, and learning flexibility, as well as their underlying lateralization and hemispheric specialization in the brain. These collective findings, which are congruent with those reported in great apes, suggest that features of gestural communication shared by humans, great apes and baboons may have played a critical role in the phylogenetic roots of language, dating back not to the Hominidae but to their much older catarrhine common ancestor, 25–40 million years ago.

Space–Time Congruency Effects Using Eye Movements During Processing of Past- and Future-Related Words

Camille L. Grasso, Johannes C. Ziegler, Jennifer T. Coull, and Marie Montant.

2022. Experimental Psychology 69 (4): 210–17. — @HAL

In Western cultures, where people read and write from left to right, time is represented along a spatial continuum that runs from left (past) to right (future), known as the mental timeline (MTL). In language, the MTL is supported by space–time congruency effects: people are faster to make lexical decisions about words conveying past or future information when left/right manual responses are compatible with the MTL. Conversely, in cultures where people read from right to left, space–time congruency effects go in the opposite direction. Such cross-cultural differences suggest that the repeated directional movements of writing and reading are critically involved in the spatial representation of time. In most experiments on the space–time congruency effect, participants respond with their hands, effectors that are associated with the directionality of writing. To investigate the role of the directionality of reading in the space–time congruency effect, we asked participants to make lateralized eye movements (left or right saccades) to indicate whether stimuli were real words or not (lexical decision). Eye movement responses were slower and larger in amplitude when they were incompatible with the direction of the MTL. These results reinforce the claim that repeated directional reading and writing movements promote the embodiment of time-related words.

The Temporal Voice Areas Are Not ‘Just’ Speech Areas

Régis Trapeau, Etienne Thoret, and Pascal Belin.

2023. Frontiers in Neuroscience 16: 1075288. — @HAL

The Temporal Voice Areas (TVAs) respond more strongly to speech sounds than to non-speech vocal sounds, but does this make them Temporal “Speech” Areas? We provide a perspective on this issue by combining univariate, multivariate, and representational similarity analyses of fMRI activations to a balanced set of speech and non-speech vocal sounds. We find that while speech sounds activate the TVAs more than non-speech vocal sounds, likely because of their larger temporal modulations at the syllabic rate, they do not appear to activate additional areas, nor are they segregated from the non-speech vocal sounds once their higher activation level is controlled for. It seems safe, then, to continue calling these regions the Temporal Voice Areas.
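
For readers curious about one of the three methods mentioned, here is a minimal Python sketch of a representational similarity analysis (RSA); the simulated data, the two-category design, and all variable names are illustrative assumptions, not the authors' actual pipeline:

# Minimal RSA sketch: test whether a region's response geometry
# separates speech from non-speech vocal sounds (illustrative only).
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

rng = np.random.default_rng(0)

# Fake fMRI response patterns: n_sounds stimuli x n_voxels voxels.
n_sounds, n_voxels = 40, 200
patterns = rng.normal(size=(n_sounds, n_voxels))

# Representational dissimilarity matrix (RDM): pairwise correlation
# distance between the activation patterns evoked by each sound.
rdm = pdist(patterns, metric="correlation")

# Model RDM encoding a categorical speech / non-speech distinction
# (first half speech, second half non-speech vocal sounds).
labels = np.array([0] * (n_sounds // 2) + [1] * (n_sounds // 2), dtype=float)
model_rdm = pdist(labels[:, None], metric="cityblock")  # 1 if categories differ

# RSA statistic: rank correlation between brain and model RDMs. A high
# value would mean the region's geometry segregates speech from voice.
rho, p = spearmanr(rdm, model_rdm)
print(f"brain-model RDM correlation: rho={rho:.3f}, p={p:.3f}")

The design choice RSA embodies here is the one at stake in the paper: two sound categories can elicit different overall activation levels while still sharing the same representational geometry, and it is that dissociation the analyses test.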

Prof. Kate Watkins

Kate Watkins is a Professor of Cognitive Neuroscience at the Department of Experimental Psychology and St Anne’s College, University of Oxford. She is the current holder of the ILCB IMERA Chair and is based in Marseille until March 2023. Kate is interested in the brain processes involved in speech and language. Her research group uses brain stimulation and brain imaging to examine interactions between auditory and motor systems during speech production and perception. Some of the populations she studies have speech and language disorders, including adults with developmental stuttering and children with Developmental Language Disorder (DLD).

Communicative Feedback in Language Acquisition

Children communicate and use language in social interactions from a very young age. They experiment with their developing linguistic knowledge and receive valuable feedback from their interlocutors. We formalize a mechanism for language acquisition whereby children can improve their linguistic knowledge in conversation by leveraging explicit or implicit signals of communication success or failure. Examples of such communicative feedback signals are shown in the figure accompanying the article. Our review article envisions a more complete understanding of language acquisition within and through social interaction.

Mitja Nikolaus and Abdellah Fourtassi. 2023. Communicative Feedback in Language Acquisition. New Ideas in Psychology 68: 100985. — @HAL