Support for research projects
Supervisors: Richard Kronland-Martinet (PRISM) & Valentin Emiya (LIS) / Stéphane Ayache (LIS)
Collaborations: Bruno Torrésani (I2M)
Summary
Extracting meaning from sounds is a crucial ability underlying human communication.
Although the transformations of physical vibrations into neural activity carried out by the peripheral auditory system are reasonably well known, the non-linear transformations performed at higher cortical levels remain remarkably poorly understood.
Understanding these transformations is nevertheless essential to significantly improve our knowledge of auditory cognition, by linking the properties of sounds to their perceptual and behavioral outcomes.
Simultaneously, progress in machine learning now allows the training of deep neural networks that can reproduce complex cognitive tasks, such as musical genre classification or word recognition (Kell et al., 2018), and even generate realistic sounds such as those produced by a human (Van Den Oord et al., 2016).
These frameworks hence provide artificial auditory systems that compete with and sometimes outmatch human abilities.
However, interpreting the transformations carried out by these “black boxes” remains a major challenge, in particular for understanding which acoustic information they use to perform these tasks.
This post-doc project aims to capitalize on the unique expertise of three ILCB laboratories to address this challenge, with Richard Kronland-Martinet at the PRISM lab bringing expertise in auditory cognition and sound synthesis, and Valentin Emiya and Stéphane Ayache at the LIS bringing expertise in signal processing and machine learning, respectively.
In addition to this supervision, the post-doc will benefit from collaborations with other ILCB members, in particular with Bruno Torrésani at I2M, an expert on mathematical representations of sounds.
The project is organized around three tasks: (1) training deep networks and metrics to match human performance (LIS); (2) interpreting these computational frameworks in the light of neuromimetic mathematical representations of sounds (LIS/PRISM); (3) evaluating the perceptual plausibility of these representations through the production of sounds (PRISM).
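As an illustration of task (1), the following is a minimal sketch of the kind of deep audio classifier that could be trained to approach human performance on a categorization task; the architecture, shapes and class count are purely illustrative assumptions, not the project's actual design.

```python
# Minimal sketch of task (1): a small convolutional network classifying
# log-mel spectrograms. All names and sizes are illustrative assumptions.
import torch
import torch.nn as nn

class AudioClassifier(nn.Module):
    def __init__(self, n_classes: int = 10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),   # collapse the time-frequency plane
        )
        self.head = nn.Linear(32, n_classes)

    def forward(self, x):              # x: (batch, 1, n_mels, n_frames)
        return self.head(self.features(x).flatten(1))

model = AudioClassifier()
spec = torch.randn(8, 1, 64, 128)      # a batch of fake log-mel spectrograms
logits = model(spec)                   # (8, 10) class scores, comparable to human choices
```

Task (2) would then probe the internal activations of such a network against neuromimetic time-frequency representations of the same sounds.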
This project, at the intersection of computational auditory cognition, machine learning, and signal processing, will set the foundation for a systematic investigation of auditory representations by developing a methodology to train networks, probe their internal representations, and evaluate their perceptual relevance.
In this sense, the project will leverage new transdisciplinary synergies within the ILCB through the interlocking of complementary scientific methodologies.
Kell, A. J., Yamins, D. L., Shook, E. N., Norman-Haignere, S. V., & McDermott, J. H. (2018). A task-optimized neural network replicates human auditory behavior, predicts brain responses, and reveals a cortical processing hierarchy. Neuron, 98(3), 630-644.
Van Den Oord, A., Dieleman, S., Zen, H., Simonyan, K., Vinyals, O., Graves, A., ... & Kavukcuoglu, K. (2016). WaveNet: A generative model for raw audio. SSW, 125.
Post-doctoral project proposed under the supervision of Sophie Dufour (LPL) and Jean-Luc Schwartz (GIPSA-lab)
Duration: 2 years
Speech perception and production involve a series of cognitive processes that can be observed and functionally characterized through psycholinguistic experiments.
Curiously, however, these processes are most often studied independently.
The ambition of this project is to examine whether links exist between speech perception and production within a single phonological process (the manipulation of computational rules and the categorization of phonemes).
In continental French, there are two varieties that differ in their phonological systems.
Southern French (SF) does not contrast the phonemes /ɛ/ and /e/, whereas Northern French (NF) does. In SF, both variants exist, but they are allophones derived by a computational rule governed by syllable structure, with [ɛ] in CVC contexts (e.g., [fɛt] “fête”) and [e] in CV contexts (e.g., [fete] “fêter”).
Behavioral studies can bring to light possible response-time differences between phonological processes that slow down lexical access and, consequently, speech production and/or perception.
In parallel, brain-imaging approaches using electroencephalography (EEG) can qualify, confirm, or refine behavioral results. Most EEG studies have focused on speech perception.
Nevertheless, a few studies show that EEG paradigms can be adapted to speech production (Indefrey & Levelt, 2004; Sahin et al., 2009; Sato & Shiller, 2018).
This type of experiment requires substantial post-processing to remove the muscle-movement artifacts of speech production from the EEG signals (De Vos et al., 2010) and obtain clean ERPs.
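For illustration, here is a minimal sketch of this denoising step, assuming MNE-Python and ICA-based artifact rejection (one common approach to muscle-artifact removal, not necessarily the exact method of De Vos et al.); the file name and excluded component indices are hypothetical.

```python
# Illustrative EEG denoising for a speech production task: band-pass filter,
# fit ICA, drop components judged to reflect articulation-related muscle
# activity, and reconstruct clean signals for ERP analysis.
import mne

raw = mne.io.read_raw_fif("production_task_raw.fif", preload=True)  # hypothetical file
raw.filter(l_freq=1.0, h_freq=40.0)

ica = mne.preprocessing.ICA(n_components=20, random_state=0)
ica.fit(raw)
ica.exclude = [0, 3]           # components identified as muscular by inspection
clean = ica.apply(raw.copy())  # artifact-corrected EEG, ready for epoching/ERPs
```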
Two series of experiments have been carried out at LPL (Aix-en-Provence) and GIPSA-lab (Grenoble) in collaboration with Sophie Dufour, Noël Nguyen and Jean-Luc Schwartz, one presented at LabPhon 2018 (a behavioral experiment on production) and the other submitted to the journal Neuroscience Letters (EEG data on perception).
The results of this work prompt us to conduct further investigations centered on perception-production links, combining different approaches in the same participants to assess correlations across paradigms and to characterize their common principles and shared representations.
Indefrey, P., & Levelt, W. J. M. (2004). The spatial and temporal signatures of word production components. Cognition, 92(1), 101–144.
Sahin, N. T., Pinker, S., Cash, S. S., Schomer, D., & Halgren, E. (2009). Sequential processing of lexical, grammatical, and phonological information within Broca’s area. Science, 326(5951), 445–449.
Sato, M., & Shiller, D. M. (2018). Auditory prediction during speaking and listening. Brain & Language, 187, 92–103.
De Vos, M., Riès, S., Vanderperren, K., Vanrumste, B., Alario, F.-X., Van Huffel, S., & Burle, B. (2010). Removal of muscle artifacts from EEG recordings of spoken language production. Neuroinformatics, 8(2), 135–150.
Functional organization of the Visual Word Form Area and its communication with the spoken language system: evidence from fMRI and intra-cerebral EEG recording
Supervisors: Chotiga Pattamadilok (Laboratoire Parole et Langage, Aix-en-Provence), Agnès Trébuchon (Institut de Neurosciences des Systèmes; Timone Hospital, Marseille) and Anne-Sophie Dubarry (Laboratoire Parole et Langage, Aix-en-Provence).
Reading acquisition establishes functional and anatomical connections between the auditory and visual systems. Interestingly, this recurrent communication between the two systems also induces more profound changes in the activity and properties of neurons within each sensory system itself (Dehaene et al., 2010). Our recent study combining Transcranial Magnetic Stimulation with an adaptation protocol (Pattamadilok, Planton, & Bonnard, 2019) showed that the Visual Word Form Area (VWFA), i.e., the key area of the reading network, contains not only neurons that encode orthographic information, as currently assumed, but also neurons that encode phonological information. The emergence of these spoken-language-coding neurons in the ventral visual pathway can be considered a cortical reorganization subsequent to learning to read.
The present proposal aims to further investigate (1) the fine-scale spatial organization of different (functionally segregated) neuronal populations within the VWFA and (2) the temporal dynamics of the communication between this area and those belonging to the spoken language network. The first issue will be addressed in an fMRI study using a cross-modal activation protocol. Both univariate analyses and multivariate Representational Similarity Analysis will be applied to examine fine-grained patterns of activity within the VWFA. The second issue will be addressed using intracerebral EEG recordings in epileptic patients. Beyond the theoretical questions mentioned above, the project will contribute to the ongoing elaboration of a cerebral cartography for pre-surgical evaluations of epileptic patients and to the development of the MIA toolbox, software for the analysis of intracerebral EEG signals across multiple patients (https://github.com/annesodub/mia).
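As an illustration of the multivariate component, here is a minimal Representational Similarity Analysis sketch on simulated data; the condition count, voxel count and model features are illustrative assumptions, not the study's actual design.

```python
# Compare a neural RDM (pairwise dissimilarities of VWFA activity patterns)
# with a model RDM (e.g., derived from phonological or orthographic features).
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

rng = np.random.default_rng(0)
patterns = rng.standard_normal((12, 200))   # 12 conditions x 200 voxels (simulated)
features = rng.standard_normal((12, 5))     # hypothetical model features per condition

neural_rdm = pdist(patterns, metric="correlation")  # condensed dissimilarity vector
model_rdm = pdist(features, metric="euclidean")

rho, p = spearmanr(neural_rdm, model_rdm)   # rank correlation between the two RDMs
print(f"model-brain RDM correlation: rho={rho:.2f} (p={p:.3f})")
```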
We are looking for a candidate with a background in cognitive neuroscience, experience in functional MRI (experimental design, data acquisition, preprocessing, analysis) and relevant programming skills (e.g., Matlab). Experience with MEG or EEG is a plus.
Interested candidates can contact C. Pattamadilok via email (chotiga.pattamadilok@lpl-aix.fr). A CV with a complete list of publications, a letter of motivation (1-2 pages) and a letter of recommendation or the contact information of a potential referee will be requested at a later stage.
References
Dehaene, S., Pegado, F., Braga, L. W., Ventura, P., Nunes Filho, G., Jobert, A., … Cohen, L. (2010). How Learning to Read Changes the Cortical Networks for Vision and Language. Science, 330(6009), 1359–1364.
Pattamadilok, C., Planton, S., & Bonnard, M. (2019). Spoken language coding neurons in the Visual Word Form Area: Evidence from a TMS adaptation paradigm. NeuroImage, 186, 278–285.
Supervisors: Sophie Dufour & Amandine Michelas
In contrast to languages such as Spanish, in French the position of accent within a word does not change its meaning (e.g., Spanish /'bebe/ “s/he drinks” vs. /be'be/ “baby”, whereas in French both forms mean the same word, ‘baby’).
In French, the main accent, called primary accent, falls on the last syllable of a unit larger than the word, namely the accentual phrase.
For instance, the monosyllabic word chat “cat” receives primary accent in the phrase un petit 'chat “a little cat” because it is the last full syllable of the accentual phrase.
In contrast, it is unaccented in the phrase un chat 'triste “a sad cat” because it is not in final position within the accentual phrase.
To date, there are numerous demonstrations that, in French, accent is used in syntactic parsing and in the segmentation of continuous speech into words (Christophe et al., 2004; Spinelli et al., 2010).
However, the role of accent in spoken word recognition remains poorly documented.
Since French speakers are inevitably exposed to both the accented and unaccented versions of words, models assuming the storage of multiple variants (Connine, 2004; Goldinger, 1998) predict that accent in French could be represented in the mental lexicon. In this PhD project, using both EEG and behavioral experiments, we will examine how accent is represented in French and how it affects spoken word recognition.
PhD candidates will be expected to have a background in psycholinguistics and/or phonetics and to demonstrate an interest in word recognition and prosody.
Christophe, A., Peperkamp, S., Pallier, C., Block, E., & Mehler, J. (2004). Phonological phrase boundaries constrain lexical access I. Adult data. Journal of Memory and Language, 51, 523–547.
Connine, C. M. (2004). It’s not what you hear, but how often you hear it: On the neglected role of phonological variant frequency in auditory word recognition. Psychonomic Bulletin & Review, 11, 1084–1089.
Goldinger, S. D. (1998). Echoes of echoes? An episodic theory of lexical access. Psychological Review, 105, 251–279.
Spinelli, E., Grimault, N., Meunier, F., & Welby, P. (2010). An intonational cue to word segmentation in phonemically identical sequences. Attention, Perception, & Psychophysics, 72, 775–787.
Location: Institut de Neurosciences de la Timone, Marseille, France.
Principal Investigators: Dr. Bruno L. Giordano (Institut de Neurosciences de la Timone, Marseille);
Prof. Thierry Artières (Laboratoire d’Informatique et Systèmes, Marseille).
Collaborator: Dr. Christian G. Bénar (Institut de Neurosciences des Systèmes, Marseille).
We learn about the acoustical environment through a variety of tasks, such as discriminating, categorizing and identifying diverse sound sources across many domains (environmental sounds, music, voice, speech, etc.; Giordano et al., 2013, 2014).
The ability to perform many different tasks across multiple domains is a key aspect of behavioral flexibility, and is thought to rely on the function of the prefrontal cortex, a structure that subserves flexible inference and task representations for effective learning (Cao et al. 2019; Cole et al., 2013).
Computational models, however, often struggle with such flexibility (catastrophic forgetting in deep neural networks - DNNs; cf. Yang et al., 2019), and overlook prefrontal functions in the perception and learning of sound-generating sources (Kell et al., 2018; cf. Jiang et al., 2018).
As a consequence, it is currently unknown how the auditory system readily learns to perform multiple tasks in a short time, and how cerebral representations are formed to guide behavior (e.g., learning rate) throughout the learning process.
One candidate strategy to achieve this flexibility is that the brain represents multiple tasks as a small number of their underlying components and uses knowledge of this generalizable structure to facilitate learning (Reverberi et al., 2012).
To investigate this hypothesis, we will carry out a task-rich magnetoencephalography (MEG) study using sounds of natural sources (speakers, musical instruments, non-music non-living objects).
This project will in particular capitalize on the superior unsupervised learning ability of the auditory system (Goudbeek et al., 2017) to disentangle statistical and rule-based learning effects on the trial-by-trial evolution of cerebral and behavioral responses, and to examine the role of dissimilarity estimation as a core component of task representation in the brain (Ashby & Valentin, 2017).
Key aims include: (1) using MVPA to track the trial-by-trial evolution of task representations in source-localized MEG data (Cao et al., 2019; GIORDANO, BÉNAR); (2) developing novel neural-network models of task compositionality (e.g., DNNs trained with continual learning and unsupervised methods; Chen et al., 2018; Yang et al., 2019; ARTIÈRES); (3) using connectivity methods to pinpoint network hubs associated with task representations and task-modulated information transfer (Cole et al., 2013; Giordano et al., 2017; GIORDANO).
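As a sketch of the MVPA component of aim (1), assuming source-localized epochs arranged as trials x sources x time points, a time-resolved decoder could look like the following; shapes, labels and the classifier choice are illustrative assumptions, not the planned pipeline.

```python
# Time-resolved decoding of task identity from source-localized MEG epochs:
# a linear classifier is cross-validated independently at each time point.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.standard_normal((200, 50, 40))   # 200 trials x 50 sources x 40 time points (fake)
y = rng.integers(0, 2, size=200)         # task label for each trial

accuracy = np.array([
    cross_val_score(LogisticRegression(max_iter=1000), X[:, :, t], y, cv=5).mean()
    for t in range(X.shape[2])
])  # chance is 0.5; above-chance peaks show when task information emerges
```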
The post-doctoral fellow will lead the design, execution and analysis of this MEG study on task representation in the human auditory system. The ideal candidate will have a strong background in computational modelling of behavior as applied to the multivariate analysis of MEG data, be proficient in Matlab and Python, and show evidence of the ability to lead a scientific project under the supervision of multiple PIs and of a commitment to publishing in high-profile journals.
Candidates should send their CV, two reference letters and a motivation letter to:
Bruno L. Giordano (bruno.giordano@univ-amu.fr)
Thierry Artières (thierry.artieres@lis-lab.fr)
References
Ashby, G., & Valentin, V. (2017). Multiple systems of perceptual category learning: Theory and cognitive tests. In H. Cohen & C. Lefebvre (Eds.), Handbook of categorization in cognitive science (pp. 157-188). San Diego, CA, US: Elsevier Academic Press.
Cao, Y., Summerfield, C., Park, H., Giordano, B. L., & Kayser, C. (2019). Causal inference in the multisensory brain. Neuron, in press.
Chen, M., Denoyer, L., Artières, T. (2018). Multi-view data generation without view supervision. International Conference on Learning Representations (ICLR).
Cole, M. W., Reynolds, J. R., Power, J. D., Repovs, G., Anticevic, A., & Braver, T. S. (2013). Multi-task connectivity reveals flexible hubs for adaptive task control. Nature Neuroscience, 16(9), 1348.
Giordano, B. L., McAdams, S., Zatorre, R. J., Kriegeskorte, N., & Belin, P. (2013). Abstract encoding of auditory objects in cortical activity patterns. Cerebral Cortex, 23(9), 2025-2037.
Giordano, B. L., Pernet, C., Charest, I., Belizaire, G., Zatorre, R. J., & Belin, P. (2014). Automatic domain-general processing of sound source identity in the left posterior middle frontal gyrus. Cortex, 58, 170-185.
Giordano, B.L., Ince, R.A., Gross, J., Schyns, P.G., Panzeri, S. and Kayser, C. (2017). Contributions of local speech encoding and functional connectivity to audio-visual speech perception. Elife, 6, p.e24763.
Goudbeek, M., Smits, R., Cutler, A., & Swingley, D. (2017). Auditory and phonetic category formation. In Handbook of Categorization in Cognitive Science (pp. 687-708). Elsevier.
Jiang, X., Chevillet, M. A., Rauschecker, J. P., & Riesenhuber, M. (2018). Training humans to categorize monkey calls: auditory feature-and category-selective neural tuning changes. Neuron, 98(2), 405-416.
Kell, A. J., Yamins, D. L., Shook, E. N., Norman-Haignere, S. V., & McDermott, J. H. (2018). A task-optimized neural network replicates human auditory behavior, predicts brain responses, and reveals a cortical processing hierarchy. Neuron, 98(3), 630-644.
Reverberi, C., Görgen, K. & Haynes, J.-D. (2012). Compositionality of rule representations in human prefrontal cortex. Cerebral Cortex, 22, 1237–1246.
Yang, G. R., Joglekar, M. R., Song, H. F., Newsome, W. T., & Wang, X. J. (2019). Task representations in neural networks trained to perform many cognitive tasks. Nature Neuroscience, 22(2), 297.
Supervisors: Olivier Coulon (Institut de Neurosciences de la Timone),
Adrien Meguerditchian (Laboratoire de Psychologie Cognitive, AMU, Marseille, France), W. Hopkins (Neuroscience Institute and Language Research Center, Georgia State University, Atlanta USA)
Location: MeCA team, Institut de Neurosciences de la Timone, Marseille, France.
The MeCA team at INT has developed a model of human cortical organization that provides a statistical description of the relative position, orientation, and long-range alignment of cortical sulci on the surface of the cortex [1].
This model can be instantiated on the cortical surface of any individual (extracted from MR images), and provides inter-subject comparisons and cortical parcellation [2].
The goal of this project is to build new models for non-human primate species.
Starting from the human model, a nested sub-model can be developed for chimpanzees, from which in turn a model for baboons can be built, then again for macaques.
This series of nested models will define a hierarchy of cortical complexity and will provide the means to transport any cortical information (functional, anatomical, geometrical) from one species to another and to perform direct inter-species comparisons.
A proof of concept has already been proposed for humans and chimpanzees [3, Fig.1].
The post-doctoral fellow will develop complete models for chimpanzees, baboons, and macaques, and apply them to study local cortical expansions across species, as well as to compare the localization of known cortical asymmetries across species.
Models and associated tools will be made available to the neuroimaging community via the BrainVisa software platform.
The candidate will use existing tools and adapt them to new species.
MR image databases will be provided for each species.
Basic knowledge of programming languages such as Matlab or Python is expected, as well as a strong interest in neuroimaging and/or computational anatomy.
[1] Auzias G, Lefèvre J, Le Troter A, Fischer C, Perrot M, Régis J, Coulon O (2013). Model-driven Harmonic Parameterization of the Cortical Surface: HIP-HOP. IEEE Trans Med Imaging, 32(5):873-887.
[2] Auzias G, Coulon O, Brovelli A (2016). MarsAtlas: A cortical parcellation atlas for functional mapping. Human Brain Mapping, 37(4):1573-1592.
[3] Coulon O, Auzias G, Lemercier P, Hopkins W (2018). Nested cortical organization models for human and non-human primate inter-species comparisons. Int. Conference of the Organization for Human Brain Mapping.
Supervisors: Joël Fagot, Nicolas Claidière (Laboratoire de Psychologie Cognitive), Noel Nguyen (Laboratoire Parole et Langage)
Transverse question 1: “Precursors of Language”
Contact: Dr. J. Fagot, joel.fagot@univ-amu.fr. Webpage: https://lpc.univ-amu.fr/fr/profile/fagot-joel
Cumulative culture in nonhuman primates and the evolution of language
Children learn a language by being exposed to the speech of speakers of that language; they then become speakers themselves.
This process of iterated learning largely explains why languages evolve over time: in every generation, the changes introduced by new speakers are passed on to subsequent generations.
Experiments involving transmission chains can capture this process.
For instance, Kirby, Cornish, and Smith (2008) introduced a non-structured language (random associations between a set of visual stimuli and artificially constructed labels) as input to transmission chains and found that this language became progressively more structured and easier to learn. However, the importance of iterated learning in determining the structure of a language is difficult to evaluate in humans, because humans have necessarily already acquired a language before participating in experiments.
That first acquisition will inevitably guide the evolution of the experimental language according to the principles just described (participants will be biased by their first language).
Studies with non-human animals, such as baboons, can overcome this difficulty; the proposed project will explore the effect of iterated learning on language-like structures in the baboon, a nonhuman, nonlinguistic primate species.
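To make the transmission-chain logic concrete, here is a toy simulation of iterated learning in the spirit of Kirby et al. (2008); the meaning space, label alphabet and learning rule are entirely illustrative assumptions, not the planned experiment.

```python
# Toy iterated-learning chain: each "learner" memorizes a noisy subset of the
# previous generation's signal-meaning pairs and regularizes the rest, which
# pushes the lexicon toward compositional (prefix = shape, suffix = color) labels.
import random

random.seed(0)
meanings = [(shape, color) for shape in "ABC" for color in "xyz"]
language = {m: "".join(random.choices("dktg", k=4)) for m in meanings}  # random labels

def learn(lang, n_exposed=6):
    """Learner sees only some pairs; unseen meanings are rebuilt from fragments."""
    exposed = dict(random.sample(sorted(lang.items()), n_exposed))
    out = {}
    for shape, color in meanings:
        if (shape, color) in exposed:
            out[(shape, color)] = exposed[(shape, color)]
        else:  # generalize by recombining labels that share a feature
            same_shape = [w for (s, c), w in exposed.items() if s == shape]
            same_color = [w for (s, c), w in exposed.items() if c == color]
            prefix = same_shape[0][:2] if same_shape else "da"
            suffix = same_color[0][2:] if same_color else "ka"
            out[(shape, color)] = prefix + suffix
    return out

for generation in range(10):        # pass the language down the chain
    language = learn(language)

print(language)  # after a few generations, labels become more systematic
```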
The post-doc will be based at the CNRS primate station in Rousset (near Aix-en-Provence) and will work with a world-unique “primate cognition and behavior platform” where baboons interact freely with experiments presented on touch screens (for a range of experiments using this system, see https://www.youtube.com/watch?v=6Ofd8cHVCYM).
This platform has previously been used to present transmission-chain experiments to baboons.
In relation to this project, previous experiments have revealed that transmission chains promote the appearance of typically linguistic features (structure, systematicity and lineage specificity; see, e.g., Claidière et al., 2014).
The post-doc will explore this line of research further.
A major challenge will be to extend our previously used visual pattern reproduction task to sound patterns, which may lend themselves to the emergence of a combinatorial structure along the transmission chain.
We are looking for candidates who are highly motivated with a PhD in Biology or Psychology, preferably with a focus on either evolutionary mechanisms and/or language-related issues.
Candidates are also expected to have very good skills in programming and data analysis.
A previous experience with nonhuman primates would be a plus.
Candidates should contact Dr. Joël Fagot at joel.fagot@univ-amu.fr
References:
Claidière, N., Smith, K., Kirby, S., & Fagot, J. (2014). Cultural evolution of systematically structured behaviour in a non-human primate. Proc. R. Soc. B, 281, 20141541.
Kirby, S., Cornish, H., & Smith, K. (2008). Cumulative cultural evolution in the laboratory: an experimental approach to the origins of structure in human language. Proc. Natl Acad. Sci. USA, 105, 10681–10686. doi:10.1073/pnas.0707835105
Supervisors: Andrea Brovelli (Institut de Neurosciences de la Timone, www.andrea-brovelli.net), Demian Battaglia (Institut de Neurosciences des Systèmes, www.demian-battaglia.net), Frédéric Richard (Institut de Mathématiques de Marseille, www.latp.univ-mrs.fr/~richard/)
Scientific context and state-of-the-art
Language is a network process arising from the complex interaction of regions of the frontal and temporal lobes connected anatomically via the dorsal and ventral pathways (Friederici and Gierhan, 2013; Fedorenko and Thompson-Schill, 2014; Chai et al., 2016).
An open question is how these brain areas coordinate to support language. Functional Connectivity (FC) analysis can provide the methodological framework to address this question.
FC analysis encompasses various forms of statistical dependencies between neural signals, ranging from linear correlation to more sophisticated measures quantifying directional influences between brain regions, such as Granger causality (Brovelli et al., 2004, 2015). Recently, however, it has become clear that a time-resolved analysis of FC, also known as Functional Connectivity Dynamics (FCD), can yield a novel perspective on brain network dynamics (Hutchison et al., 2013; Allen et al., 2014). Indeed, we have shown that non-trivial resting-state FCD is expected to stem from complex dynamics in cortical networks (Hansen et al., 2015) and that the fluency of FCD correlates with cognitive performance at the single-subject level across the human lifespan (Battaglia et al., 2017). In task-related conditions, FCD analyses have shown that visuomotor transformations follow a schedule of recruitment of different networks over time intervals on the order of hundreds of milliseconds (Brovelli et al., 2017).
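For concreteness, here is a minimal sliding-window FCD sketch on simulated region time series; the window length, step size and data are arbitrary assumptions, not the project's actual pipeline.

```python
# Time-resolved functional connectivity (FCD) via sliding-window correlations.
import numpy as np

rng = np.random.default_rng(0)
signals = rng.standard_normal((10, 1000))   # 10 regions x 1000 time samples (fake)

win, step = 100, 20
fc_stream = np.stack([
    np.corrcoef(signals[:, t:t + win])       # one FC matrix per window
    for t in range(0, signals.shape[1] - win, step)
])                                           # (n_windows, 10, 10)

# FCD matrix: similarity between whole-brain FC patterns at different times
flat = fc_stream.reshape(len(fc_stream), -1)
fcd = np.corrcoef(flat)                      # (n_windows, n_windows)
```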
Objective of the research project
These recent advances open up the possibility of tackling one of the long-term objectives of the ILCB, which is to characterise how language-related brain regions communicate.
This challenge, however, is limited by the lack of knowledge about the underlying neurophysiological mechanisms.
The objective of the post-doc research project is to characterise the neural correlates that could be used to track information transfer between brain regions in task-related conditions. First, the post-doc researcher will optimise current tools for estimating source-level brain activity (both power and phase information of neural oscillations) from magnetoencephalographic (MEG) data using an atlas-based approach (Auzias et al., 2016). Information transfer between brain regions will be quantified by means of FC and FCD analyses based on different metrics, including multivariate spectral methods, directional influences such as Granger causality, and information-theoretic quantities, which can track information storage, sharing and transfer (Kirst et al., 2016).
These metrics will be applied to different potential correlates of brain communication, such as power-to-power correlations, phase-to-phase relations and phase-to-amplitude couplings.
The analysis of FC and FCD representations and extraction of functional modules will then be performed using graph theory and temporal network representations (Holme and Saramäki, 2012; Brovelli et al., 2017).
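To make the directional-influence idea concrete, here is a toy time-domain Granger causality computation on simulated signals; the coupling, model order and least-squares estimation are illustrative assumptions (real analyses would rely on dedicated toolboxes).

```python
# Toy bivariate Granger causality: does the past of y improve prediction of x?
import numpy as np

rng = np.random.default_rng(1)
n, p = 2000, 5
y = rng.standard_normal(n)
x = np.roll(y, 2) + 0.5 * rng.standard_normal(n)   # x depends on y's past (lag 2)

def lagged(s, p):
    """Columns are s[t-1], ..., s[t-p] for t = p..n-1."""
    return np.column_stack([s[p - k:len(s) - k] for k in range(1, p + 1)])

target = x[p:]
X_own = lagged(x, p)                        # restricted model: x's own past
X_full = np.hstack([X_own, lagged(y, p)])   # full model: x's and y's past

def resid_var(X, t):
    beta, *_ = np.linalg.lstsq(X, t, rcond=None)
    return np.var(t - X @ beta)

gc_y_to_x = np.log(resid_var(X_own, target) / resid_var(X_full, target))
print(f"Granger causality y->x: {gc_y_to_x:.3f}")   # > 0 indicates influence
```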
To do so, we will exploit two MEG datasets: a first dataset collected by Andrea Brovelli, in which participants were asked to perform finger movements in response to the presentation of numerical digits (a simple visuomotor task), and a second dataset collected by Xavier Alario, in which participants were required to name objects depicted on a screen (a naming task).
Profile of the Post-Doc candidate
The Post-Doc candidate will have a PhD in cognitive or computational neuroscience, bioengineering, physics or applied mathematics.
Proficient computational skills (Matlab and/or Python) and experience in the analysis of MEG data are required. Experience in the cognitive bases of language is welcome.
Contacts
Candidates should send their CV, 1 or 2 reference letters and a motivation letter to: Andrea Brovelli (andrea.brovelli@univ-amu.fr), Demian Battaglia (demian.battaglia@univ-amu.fr), Frédéric Richard (frederic.richard@univ-amu.fr).
References
Allen EA, Damaraju E, Plis SM, Erhardt EB, Eichele T, Calhoun VD (2014). Tracking whole-brain connectivity dynamics in the resting state. Cereb Cortex 24:663–676.
Auzias G, Coulon O, Brovelli A (2016). MarsAtlas: A cortical parcellation atlas for functional mapping. Hum Brain Mapp 37:1573–1592.
Battaglia D, Thomas B, Hansen ECA, Chettouf S, Daffertshofer A, McIntosh AR, Zimmermann J, Ritter P, Jirsa V (2017). Functional Connectivity Dynamics of the Resting State across the Human Adult Lifespan. Available at: http://dx.doi.org/10.1101/107243.
Brovelli A, Badier J-M, Bonini F, Bartolomei F, Coulon O, Auzias G (2017). Dynamic Reconfiguration of Visuomotor-Related Functional Connectivity Networks. J Neurosci 37:839–853.
Brovelli A, Chicharro D, Badier J-M, Wang H, Jirsa V (2015). Characterization of Cortical Networks and Corticocortical Functional Connectivity Mediating Arbitrary Visuomotor Mapping. J Neurosci 35:12643–12658.
Brovelli A, Ding M, Ledberg A, Chen Y, Nakamura R, Bressler SL (2004). Beta oscillations in a large-scale sensorimotor cortical network: directional influences revealed by Granger causality. Proc Natl Acad Sci U S A 101:9849–9854.
Chai LR, Mattar MG, Blank IA, Fedorenko E, Bassett DS (2016). Functional Network Dynamics of the Language System. Cereb Cortex 26:4148–4159.
Fedorenko E, Thompson-Schill SL (2014). Reworking the language network. Trends Cogn Sci 18:120–126.
Friederici AD, Gierhan SME (2013). The language network. Curr Opin Neurobiol 23:250–254.
Hansen ECA, Battaglia D, Spiegler A, Deco G, Jirsa VK (2015). Functional connectivity dynamics: modeling the switching behavior of the resting state. Neuroimage 105:525–535.
Holme P, Saramäki J (2012). Temporal networks. Phys Rep 519:97–125.
Hutchison RM, Womelsdorf T, Allen EA, Bandettini PA, Calhoun VD, Corbetta M, Della Penna S, Duyn JH, Glover GH, Gonzalez-Castillo J, Handwerker DA, Keilholz S, Kiviniemi V, Leopold DA, de Pasquale F, Sporns O, Walter M, Chang C (2013). Dynamic functional connectivity: promise, issues, and interpretations. Neuroimage 80:360–378.
Kirst C, Timme M, Battaglia D (2016). Dynamic information routing in complex networks. Nat Commun 7:11061.
Supervisors: Elin Runnqvist (LPL) and Magalie Ochs (LSIS)
Collaborators: Noël Nguyen (LPL), Kristof Strijkers, (LPL) & Martin Pickering (University of Edinburgh)
QT4: “Cerebral and cognitive underpinnings of conversational interactions”
Traditionally, researchers have focused on either production or comprehension to investigate the underlying mechanisms of language processing.
In recent years, however, the focus has shifted toward examining both production and comprehension together, by studying language processing in conversational settings.
While this trend has started in many key fields of language processing, not all research domains have taken up this exciting new challenge.
With the current project, we will examine how interaction with another interlocutor might impact the processes involved in error monitoring (i.e., the detection and repair of errors) during language production.
While little to no research has examined monitoring in a conversational setting, there are monitoring models that take dialogue into account (e.g., Pickering & Garrod, 2014).
In the current proposal, we will test the predictions put forward by these models by employing several different tasks (e.g., the SLIP task, Runnqvist et al., 2016; the network description task, Declerck et al., 2016) and by manipulating several variables related to the speaker, the task demands and the different levels of linguistic representation, using both behavioral and electrophysiological methods.
Furthermore, the use of an artificial agent as a conversational partner for parts of the project will allow the manipulation of conversational variables (e.g., the location or type of feedback), and it will further allow us to examine whether the patterns observed for humans hold, speaking to the issue of whether monitoring is an automatic or a controlled process.
Both a virtual agent and a humanoid robot (Furhat) would be used to measure the effect of physical presence. Finally, multimodal aspects such as head nodding and smiling would be manipulated (e.g., Ochs et al., 2017).
The end goal of this project is twofold: concerning language processing, the objective is to better understand monitoring in conversation and its relation to monitoring in isolation; concerning artificial intelligence, the objective is to further improve our understanding of the linguistic, social and emotional factors that are essential for successful human-robot interactions.
Supervisors: Sylvain Takerkart (INT), Hachem Kadri (LIS), François-Xavier Dupé (LIS)
In neuroimaging, traditional group analyses rely on warping the functional data recorded in different individuals onto a template brain.
This template brain is constructed from brain anatomy, using either standard templates (such as those provided in software libraries such as SPM or FSL) or a population-specific template (which can, e.g., be computed using tools included in the ANTs and FreeSurfer packages).
Once the data are projected onto such a common space, the General Linear Model (GLM) is applied to identify commonalities across subjects.
In other words, this can be viewed as two successive averaging steps: first, the anatomical averaging that produces the template brain; second, the functional averaging performed through the GLM. Because the computation of the template brain is not a linear operation, these two steps do not commute.
The final result is therefore biased by the choice of this order, a bias which can be substantial in regions where inter-individual anatomical variability is strong.
In particular, brain regions involved in language processing, such as the inferior frontal gyrus, are strongly affected by this bias.
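To fix ideas, here is a minimal sketch of the conventional two-step pipeline (subject-level GLM on template-space data, then group averaging) that the framework proposed below would replace; all data, dimensions and the one-sample t-test are simulated illustrative assumptions.

```python
# Conventional pipeline: first-level GLM per subject in a common template
# space, then a group-level test on the resulting beta maps.
import numpy as np

rng = np.random.default_rng(0)
n_subj, n_scans, n_vox = 12, 100, 500
design = rng.standard_normal((n_scans, 3))           # 3 regressors, shared design

subject_betas = []
for _ in range(n_subj):
    true_b = rng.standard_normal((3, n_vox))
    bold = design @ true_b + rng.standard_normal((n_scans, n_vox))  # fake "warped" data
    b_hat, *_ = np.linalg.lstsq(design, bold, rcond=None)           # first-level GLM
    subject_betas.append(b_hat[0])                   # keep the regressor-0 effect map

B = np.asarray(subject_betas)                        # (n_subj, n_vox)
group_t = B.mean(0) / (B.std(0, ddof=1) / np.sqrt(n_subj))  # group t-map per voxel
```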
We propose here a new framework that removes this methodological bias by performing both averaging operations simultaneously. Intuitively, this means that the anatomical averaging will exploit the functional information, and that the functional group analysis will draw directly on individual brain anatomy.
We frame this problem as a multi-view machine learning question.
The tasks of the post-doctoral fellow will consist of (1) designing and implementing an algorithm that can efficiently address this question, and (2) testing it on a variety of real MRI datasets available throughout the ILCB teams.
The first task will be conducted under the supervision of Sylvain Takerkart (INT, Banco team, Neuro-Computing Center), as well as François-Xavier Dupé and Hachem Kadri (LIS, Qarma team), who have been collaborating for several years on the design of new machine learning methods for neuroimaging.
The second task will involve applying this new method to various existing fMRI datasets recorded by the ILCB teams, such as experiments dedicated to: (1) studying plasticity in the auditory cortex, with a comparison of pianists and controls using a tonotopy paradigm (D. Schön, INS; S. Takerkart, INT); (2) understanding speaker recognition processes in the vocal brain (V. Aglieri, S. Takerkart, P. Belin, INT); (3) examining hierarchical processing in the inferior frontal gyrus (T. Chaminade, INT).
The expected benefits are an improved sensitivity of group studies, both in univariate and multivariate settings. Finally, a software tool will be released publicly so that all ILCB members, as well as the members of the scientific community at large, can benefit from this new method.
Supervisors: Benjamin Morillon (Institut de Neurosciences des Systèmes, INS) & Kristof Strijkers (Laboratoire Parole et Langage, LPL)
(Potential) ILCB collaborators: Daniele Schon (INS), Andrea Brovelli (INT), Elin Runnqvist (LPL), Marie Montant (LPC)
(Potential) external collaborators: Anne-Lise Giraud (Geneva, Switzerland), Sonja Kotz (Maastricht, Netherlands), Friedemann Pulvermuller (Berlin, Germany)
ILCB PhD & Postdoctoral Topic Proposal
Primary QT: QT3
Secondary QT: QT5
While traditional models proposed a strict separation between the activation of motor and sensory systems for the production versus perception of speech, respectively, most researchers now agree that there is much more functional interaction between sensory and motor activation during language behavior. Despite this growing consensus that the integration of sensorimotor knowledge plays an important role in the processing of speech and language, there is much less consensus on what that exact role may be and on the functional mechanics that could underpin it.
Indeed, many questions from various perspectives remain open in the current state-of-the-art: Is the role of sensorimotor activation modality-specific, in that it serves a different functionality in perception than in production?
Is it only relevant for the processing of speech sounds or does it also play a role in language processing and meaning understanding in general?
Can sensory codes be used to predict motor behavior (production) and can motor codes be used to predict sensory outcomes (perception)?
And if so, how are such predictions implemented at the mechanistic level (e.g., does differential oscillatory entrainment between sensory and motor systems reflect different dynamical and/or representational properties of speech and language processing)?
And in which manner can such sensorimotor integration go from arbitrary speech sounds to well-structured meaningful words and language behavior?
The goal of this project is to advance our understanding of these open questions (in different ‘sub-topics’) by taking advantage of the complementary knowledge of the supervisors, with B. Morillon being an expert on the cortical dynamics of sensorimotor activation in the perception of speech, and K. Strijkers being an expert on the cortical dynamics of sensorimotor activation in the production and perception of language.
At the center of the project, as a connecting red thread, is the supervisors’ shared interest in the role of ‘time’ (temporal coding) as a potential key factor that causes sensorimotor activation to bind during the processing of speech and language. On this view, ‘time’ transcends its classical notion as a processing vehicle (i.e., the simple propagation of activation from sensory to motor systems and vice versa) and may reflect representational knowledge of speech and language.
One of the main goals of the current project is thus to test the hypothesis that temporal information between sensory and motor codes serves a key role in the production and perception of speech and language.
More specifically, we will explore whether sensorimotor integration during speech and language processing reflects: (a) the prediction of temporal information; (b) the temporal structuring of speech sounds and articulatory movement; (c) the temporal binding of phonemic and even lexical elements in language.
We will consider PhD candidates and post-doctoral researchers to conduct research on any of the three topics specified above (a-c). Interested candidates can contact us via email (Benjamin Morillon: bnmorillon@gmail.com; Kristof Strijkers: Kristof.strijkers@gmail.com), including a CV and a motivation letter (1-2 pages).
A strong background in speech and language processing and/or knowledge of spatiotemporal neurophysiological techniques and analyses will be considered a strong plus.
Supervisors: Florence Gaunet (LPC), Thierry Legou (LPL) & Prof. Anne-Lise Giraud (Geneva Univ / IMERA position from Feb to June 2019)
Implications: QT1 (primary: involvement of motricity/motor representations in speech perception),
QT3 (secondary: the animal as a model for the study of language)
Request: Post-doc or doctoral grant
Summary: We intend to explore dogs’ neural and perceptual responses to syllabic speech, in order to understand auditory speech processing in a species with reduced articulatory production capabilities and therefore reduced motor control. It might be the case that dogs perceive speech using only the acoustic cues they can themselves produce, i.e., short, intonated “syllable-like” sounds. Alternatively, they might be sensitive to cues that they cannot produce at all.
Given dogs’ expertise in using human speech, the findings will provide insights into the brain mechanisms of speech processing, i.e., the extent to which motor representations are involved in speech perception.
Supervisor: Magalie Ochs
Human-Machine Interaction, Artificial agent, Affective computing, Social signal processing
Supervisor: Eric Castet
Efficacy of a virtual reality headset to improve reading in people with low vision
People with low vision, in contrast to blind people, have not lost all of their visual functions.
The leading cause of low vision in Western countries is AMD (Age-related Macular Degeneration), a degenerative, non-curable retinal disease occurring mostly after the age of 60. Recent projections estimate that the total number of people with AMD in Europe will be between 19 and 26 million in 2040.
The most important wish of people with AMD is to improve their ability to read by using their remaining functional vision.
Capitalizing on recent technological developments in virtual reality headsets, we have developed a VR reading platform (implemented on the Samsung Gear VR headset).
This platform allows us to create a dynamic system in which readers use augmented-vision tools specifically designed for reading (Aguilar & Castet, 2017), as well as text-simplification techniques currently being tested in our lab.
Our project is to assess whether this reading platform is able to improve reading performance both quantitatively (reading speed, accuracy, ...) and qualitatively (comfort, stamina, ...).
Experiments will be performed in the ophthalmology department of the University Hospital of La Timone (Marseille).