Individual effects of sleep deprivation may be observed in vocal biomarkers

Spectral modulations (“timbre” – left panel) and temporal modulations (“linguistic rhythms” – right panel) characteristic of a sleepy voice. These markers of sleepiness were derived by applying explainability methods to AI models, specifically Support Vector Machines trained to recognize vocal samples from sleep-deprived individuals. This study underscores the importance of elucidating AI through […]
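As a minimal sketch of this kind of explainability (not the authors' actual pipeline, and with made-up toy data), the weight vector of a linear SVM can be read as a feature-importance profile: on simulated samples where only one hypothetical spectral-modulation feature separates the classes, the largest absolute weight lands on that feature.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy data: 200 voice samples x 3 features; only feature 0
# (standing in for a spectral-modulation summary) differs between classes.
n = 200
y = np.repeat([-1, 1], n // 2)            # -1 = alert, +1 = sleep-deprived
X = rng.normal(size=(n, 3))
X[:, 0] += 1.5 * y                        # make feature 0 informative

# Linear SVM trained by full-batch subgradient descent on the hinge loss.
w, b, lam, lr = np.zeros(3), 0.0, 1e-3, 0.01
for _ in range(500):
    active = y * (X @ w + b) < 1          # samples violating the margin
    grad_w = lam * w - (active[:, None] * y[:, None] * X).sum(axis=0) / n
    grad_b = -(active * y).sum() / n
    w -= lr * grad_w
    b -= lr * grad_b

acc = float((np.sign(X @ w + b) == y).mean())
importance = np.abs(w)                    # largest weight = most telling feature
```

In practice one would use a library SVM and dedicated explainability tools; the point here is only that a linear model's weights directly expose which vocal features drive the "sleepy" decision.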

Atypical Hemispheric Re-Organization of the Reading Network in High-Functioning Adults with Dyslexia

Participants read words while their neural activity was recorded in an fMRI scanner. The activity in the regions of interest (ROIs) highlighted in A was correlated with word-feature matrices defined by the degree of semantic similarity (SemModel) or orthographic similarity (OrthModel), shown in B. Representational similarity analysis (RSA) revealed atypical hemispheric organization of the reading […]
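The RSA logic can be sketched in a few lines (with made-up matrices, not the study's data): build a representational dissimilarity matrix (RDM) from the ROI activity patterns and from a model's word-feature matrix, then rank-correlate their upper triangles.

```python
import numpy as np

def rdm(patterns):
    """RDM: 1 - Pearson correlation between condition patterns (rows)."""
    return 1.0 - np.corrcoef(patterns)

def upper(m):
    """Vectorize the upper triangle (the unique pairwise dissimilarities)."""
    return m[np.triu_indices_from(m, k=1)]

def spearman(a, b):
    """Spearman correlation via ranks (no ties expected for continuous data)."""
    ra = np.argsort(np.argsort(a)) - (len(a) - 1) / 2.0
    rb = np.argsort(np.argsort(b)) - (len(b) - 1) / 2.0
    return float((ra @ rb) / np.sqrt((ra @ ra) * (rb @ rb)))

rng = np.random.default_rng(1)
feats = rng.normal(size=(8, 20))          # hypothetical word-feature matrix
# Simulated ROI patterns driven by the model features plus measurement noise.
roi = feats @ rng.normal(size=(20, 50)) + 0.1 * rng.normal(size=(8, 50))

rsa_score = spearman(upper(rdm(roi)), upper(rdm(feats)))
```

A high `rsa_score` means the ROI's pairwise geometry mirrors the model's (here SemModel or OrthModel would each supply their own feature matrix).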

Event-Related Variability is Modulated by Task and Development

“In much of the Event-Related Potential (ERP) literature, trial-by-trial signal variability is considered unwanted noise to be discarded through the averaging process. An alternative hypothesis is that this variability carries interpretable information. We quantified variability in sensor space by considering three terms: ‘flyby distance’ to a reference ERP template, distance between trials, and […]
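The first two distance terms named in the excerpt can be sketched directly on simulated trials (the variable names and simulation are hypothetical, not the authors' code):

```python
import numpy as np

rng = np.random.default_rng(2)
n_trials, n_sensors, n_times = 40, 32, 100

# Simulated single-condition data: a shared evoked waveform plus trial noise.
evoked = np.sin(np.linspace(0, 3 * np.pi, n_times))
trials = evoked + 0.1 * rng.normal(size=(n_trials, n_sensors, n_times))

template = trials.mean(axis=0)            # the classic averaged ERP

flat = trials.reshape(n_trials, -1)
# 'Flyby distance': per-trial Euclidean distance to the ERP template.
flyby = np.linalg.norm(flat - template.ravel(), axis=1)

# Between-trial distance: mean pairwise Euclidean distance in sensor space.
pair = np.linalg.norm(flat[:, None, :] - flat[None, :, :], axis=-1)
between = float(pair[np.triu_indices(n_trials, k=1)].mean())
```

Averaging collapses all of this into `template`; keeping `flyby` and `between` preserves the per-trial variability the study treats as signal rather than noise.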

Distinct neural mechanisms support inner speaking and inner hearing

A subjective distinction can be made between inner hearing—invoking speech representations from memory—and inner speaking—simulating the motor act of speech. A TMS study reveals that lip cortical excitability (motor-evoked potentials, “MEP”) increases more with inner speaking than with inner hearing, and that this effect is modulated by the phonetic content of what is produced mentally, hence […]

Highlights from ILCB summer school

A few of the distinguished speakers who taught at the 2023 ILCB summer school. From the upper left corner, clockwise: Philippe BLACHE, Patrick LEMAIRE, Chotiga PATTAMADILOK, Christian G. BÉNAR, Sophie ACHARD (Grenoble), Marie MONTANT, Robert (Bob) M. FRENCH (Dijon), Fenna POLETIEK (Leiden, Nijmegen, and IMERA). In the center, the central Nadéra BUREAU. All photos by […]

Upgraded audio stimulation system at the MEG center

The audio-stimulation system of the MEG platform has been upgraded! A DATAPixx is now used to deliver sound signals from outside the magnetically shielded room to the ears of the participant through an Etymotic ER-30 system, which we have modified to achieve a flat frequency response in the 0–6 kHz range — a significant portion of the human […]

Do stereotypes about the speaker affect the comprehension of irony? Evidence from neural oscillations

Social knowledge about a speaker can include stereotypes about their occupation (e.g., being more or less prone to sarcastic remarks). This knowledge constrains ironic interpretation early on, as revealed by the modulations of synchronization we observed in the upper gamma band in the 150–250 ms time window. There was greater synchronization in the ironic context compared […]

Enriched learning: behavior, brain, and computation

The presence of complementary information across multiple sensory or motor modalities during learning, referred to as multimodal enrichment, can markedly benefit learning outcomes. Why is this? Cognitive, neural, and computational theories of enrichment attribute the benefits of enriched learning to either multimodal or unimodal mechanisms. Figure and legend from Mathias, B., & von Kriegstein, K. […]

Intermediate acoustic-to-semantic representations link behavioral and neural responses to natural sounds

Visualisation with multidimensional scaling of sound representations in the computational models (top row) and in brain activity (bottom row). For each model, the ranked dissimilarity matrix is shown. A strong similarity is apparent between representations in Sound-to-event DNNs and in the post-primary auditory cortex (pSTG). Bruno L. Giordano, Michele Esposito, Giancarlo Valente, […]
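For readers unfamiliar with the method: classical (Torgerson) multidimensional scaling embeds a dissimilarity matrix into a low-dimensional space whose distances approximate the original dissimilarities. A minimal numpy sketch, on toy distances rather than the paper's matrices:

```python
import numpy as np

def classical_mds(d, k=2):
    """Embed an n x n dissimilarity matrix d into k dimensions."""
    n = d.shape[0]
    j = np.eye(n) - np.ones((n, n)) / n        # centering matrix
    b = -0.5 * j @ (d ** 2) @ j                # double-centered Gram matrix
    vals, vecs = np.linalg.eigh(b)
    order = np.argsort(vals)[::-1][:k]         # top-k eigenpairs
    return vecs[:, order] * np.sqrt(np.clip(vals[order], 0, None))

# Toy check: points on a line are recovered up to sign and shift.
pts = np.array([0.0, 1.0, 2.0, 4.0])
d = np.abs(pts[:, None] - pts[None, :])        # their pairwise distances
emb = classical_mds(d, k=1)[:, 0]
```

Applied to the ranked dissimilarity matrices in the figure, the same idea yields the 2-D maps in which model and cortical sound representations can be visually compared.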

A multimodal approach for modeling engagement in conversation

The engagement of participants varies considerably during a conversation, with direct consequences for the quality and success of the interaction. How is this engagement implemented? We propose a new model of engagement based on a multimodal description encompassing as many cues as possible from prosody, gestures, facial expressions, lexicon, and syntax. We used […]