PhD grants
Supervisors: Richard Kronland-Martinet (PRISM) & Valentin Emiya (LIS) / Stéphane Ayache (LIS)
Collaborations: Bruno Torresani (I2M)
Summary
Extracting meaning from sounds is a crucial human ability for communication. Although the transformations of physical vibrations into neural activity carried out by the peripheral auditory system are reasonably well known, the non-linear transformations performed at higher cortical levels remain remarkably poorly understood. Understanding these transformations is nevertheless crucial to significantly improving our knowledge of auditory cognition, by linking the properties of sounds to their perceptual and behavioral outcomes. At the same time, progress in machine learning now allows the training of deep neural networks that can reproduce complex cognitive tasks, such as musical genre classification or word recognition (Kell et al., 2018), and even generate realistic sounds such as those produced by a human voice (Van Den Oord et al., 2016). These frameworks hence provide artificial auditory systems that compete with, and sometimes outmatch, human abilities. However, interpreting the transformations carried out by these "black boxes" remains a crucial challenge, in particular to understand which acoustic information they use to achieve these tasks.
This post-doc project aims to capitalize on the unique expertise of three ILCB laboratories to address this challenge: Richard Kronland-Martinet at the PRISM lab brings expertise in auditory cognition and sound synthesis, while Valentin Emiya and Stéphane Ayache at the LIS are experts in signal processing and machine learning, respectively. In addition to this supervision, the post-doc will benefit from collaborations with other ILCB members, in particular Bruno Torrésani at I2M, an expert on mathematical representations of sounds. The project will be organized in three tasks:
(1) training deep networks and metrics to match human performance (LIS);
(2) interpreting these computational frameworks in the light of neuromimetic mathematical representations of sounds (LIS/PRISM);
(3) evaluating the perceptual plausibility of these representations through sound production (PRISM).
This project, at the intersection of computational auditory cognition, machine learning, and signal processing, will set the foundation for a systematic investigation of auditory representations by developing a methodology to train networks, probe their internal representations, and evaluate their perceptual relevance. In this sense, the project will leverage new transdisciplinary synergies within the ILCB through the interlocking of complementary scientific methodologies.
Kell, A. J., Yamins, D. L., Shook, E. N., Norman-Haignere, S. V., & McDermott, J. H. (2018). A task-optimized neural network replicates human auditory behavior, predicts brain responses, and reveals a cortical processing hierarchy. Neuron, 98(3), 630-644.
Van Den Oord, A., Dieleman, S., Zen, H., Simonyan, K., Vinyals, O., Graves, A., ... & Kavukcuoglu, K. (2016). WaveNet: A generative model for raw audio. SSW, 125.
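As an illustration of task (2), a standard way to relate a trained network's internal representation to a neuromimetic representation of the same sounds is representational similarity analysis (RSA). The Python sketch below uses random placeholder matrices; the feature dimensions, and the choice of RSA itself, are illustrative assumptions rather than the project's prescribed method.

    import numpy as np
    from scipy.stats import spearmanr

    rng = np.random.default_rng(0)
    n_sounds = 50

    # Placeholder features (rows = sounds): one network layer's activations
    # and a neuromimetic representation (e.g. modulation-domain features).
    layer_acts = rng.standard_normal((n_sounds, 256))
    auditory_rep = rng.standard_normal((n_sounds, 128))

    def rdm(features):
        # Representational dissimilarity matrix: 1 - Pearson r between sounds.
        return 1.0 - np.corrcoef(features)

    def rsa_score(a, b):
        # Spearman correlation between the upper triangles of two RDMs.
        iu = np.triu_indices(a.shape[0], k=1)
        return spearmanr(a[iu], b[iu]).correlation

    print(rsa_score(rdm(layer_acts), rdm(auditory_rep)))

A high score would indicate that the layer organizes sounds similarly to the neuromimetic representation, which is one way of "interpreting" the black box.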
ILCB supervisors: Kristof Strijkers (Laboratoire de Parole et Langage ; LPL) & Jonathan Grainger (Laboratoire de Psychologie Cognitive ; LPC)
External collaborators : Robert Hartsuiker (Ghent University; Gent, Belgium), Cristina Baus (Universidad Pompei Fabra; Barcelona, Spain) & Guillaume Thierry (Bangor University, School of Psychology; Bangor, UK)
Project: In recent years, we have been experiencing a paradigm shift in the cognitive and neurobiological investigation of language processing: while production and perception have traditionally been studied in isolation, many researchers now underscore the importance of investigating the two jointly and of developing multi-modal brain language models. Doing so has already led to interesting novel insights, such as the observation that there is much more (neuronal) overlap between the modalities than originally thought. However, two crucial elements have received less attention in multi-modal language research: (1) the focus thus far has been on high-level linguistic processes (e.g., communicative alignment in conversation, semantic and syntactic integration of message-level information); cross-modal knowledge of more basic language operations, such as word processing, is lacking. (2) Most investigations comparing language production and perception are restricted to spatial information, and little is known about the temporal dynamics between the two language behaviors. Addressing these open issues is important in order to develop a comprehensive brain language model across the modalities and to mechanistically link the high-level linguistic interactions between speakers and listeners with the basic-level processes on which they necessarily rely. The current project aims to fill this gap by contrasting the temporal processing dynamics of word retrieval between language production and perception.
More concretely, in this project we will perform large-scale EEG analyses, systematically comparing the production vs. perception of words for the same stimuli and within the same participants, and do so for different languages (French, Dutch, Spanish and English). For each language we will collect EEG data from around 100 participants doing both language production and language perception tasks. This serves the following main objectives: (1) to explore and contrast with great statistical power a wide range of variables, ranging from sensory (input) processes over psycholinguistic operations (semantics, syntax and phonology) to the output behavior, across the language modalities; (2) to perform cross-linguistic replication and validation, and assess potential differences as a function of language typology; (3) finally, as an applied objective, to use these data to develop a tool (database) for EEG research on the production and perception of language (both within and across the modalities). In this manner the current project adopts an exhaustive approach to assessing the differences and similarities of the temporal dynamics underpinning the two language behaviors; such information is essential to understand the basic processing structure of word component activation in the speaker's and listener's mind.
The prospective PhD student will conduct and analyze these large-scale, multi-lingual EEG experiments on word production and perception, interpret the data and their significance for cross-modal brain language models, develop the database for EEG research on language production and perception, and write a PhD thesis based on these project outcomes. Applicants should hold an MA degree in a relevant discipline (e.g., psychology, linguistics, cognitive science, biology); prior knowledge of psycho-/neuro-linguistic theories and of the EEG technique will be considered a serious plus.
Contact: Kristof.strijkers@gmail.com
PhD topic proposed in co-supervision between:
Benjamin Morillon (Institut de Neurosciences des Systèmes, INS)
Robert Zatorre (McGill University, Montreal, Canada)
(potential) ILCB collaborators: Daniele Schon (INS), Andrea Brovelli (INT), Pascal Belin (INT)
(potential) external collaborators: Anne-Lise Giraud (Geneva, Switzerland), Philippe Albouy (Quebec, Canada), Luc Arnal, (Paris, France)
A major debate in cognitive neuroscience concerns whether brain asymmetry for speech and music emerges from differential sensitivity to acoustic cues or from domain-specific neural networks. This debate is closely related to the question of the origins of hemispheric specialization. Despite years of debate and empirical work, these issues remain unresolved and have generated intense disagreement in the literature. We believe this situation is due to the insufficient computational specificity of prior models, and to a lack of clear grounding in neurophysiology.
This PhD project will tackle these questions by taking advantage of the spectrotemporal modulation framework, a rigorous approach that has received much support from single-neuron recordings and human imaging. According to this framework, auditory cortical neurons are best characterized functionally in terms of their responses to spectral and temporal power fluctuations (Singh and Theunissen 2003, Chi et al. 2005, Flinker et al. 2019).
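One common way to operationalize this framework is the modulation power spectrum, obtained by taking a two-dimensional Fourier transform of a (log-)spectrogram, which indexes a sound by its temporal modulation rates and spectral modulation scales. Below is a minimal Python sketch on a synthetic amplitude-modulated tone; the window sizes and the toy signal are illustrative assumptions.

    import numpy as np
    from scipy.signal import spectrogram

    # Toy signal: a 440 Hz tone with a 4 Hz amplitude modulation.
    fs = 16000
    t = np.arange(0, 2.0, 1 / fs)
    x = (1 + np.cos(2 * np.pi * 4 * t)) * np.sin(2 * np.pi * 440 * t)

    # Log-magnitude spectrogram, then a 2D FFT of it: the modulation
    # power spectrum, indexed by temporal modulation rate (Hz) and
    # spectral modulation scale (cycles/Hz on this linear frequency axis).
    f, tt, S = spectrogram(x, fs=fs, nperseg=512, noverlap=384)
    logS = np.log(S + 1e-10)
    mps = np.abs(np.fft.fftshift(np.fft.fft2(logS - logS.mean())))

    rates = np.fft.fftshift(np.fft.fftfreq(logS.shape[1], d=tt[1] - tt[0]))
    scales = np.fft.fftshift(np.fft.fftfreq(logS.shape[0], d=f[1] - f[0]))
    print(mps.shape, rates.max(), scales.max())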
In a set of inter-related studies involving human participants, the PhD candidate will investigate the respective sensitivity of the left and right hemispheres to low-level acoustic cues. The neural dynamics underlying auditory processing in the left and right hemispheres will be characterized, and their selective roles in the processing of speech and music highlighted. This will be done by (1) taking advantage of the spectrotemporal modulation framework, (2) capitalizing on a recently created corpus of sung speech stimuli in which melodic and verbal content is crossed and balanced, and (3) recording neural responses to these stimuli with intracranial electroencephalography (iEEG) and magnetoencephalography (MEG).
Supervisors: Maud Champagne-Lavau & Amandine Michelas (LPL)
Dominant theories in pragmatics hold that taking into account the intentions, beliefs and knowledge of one's interlocutor is essential for successful communication (Grice, 1975; Clark, 2016). However, recent work in psycholinguistics views communication as resulting primarily from an egocentric process in which information about the interlocutor and their communicative needs is taken into account only when needed, i.e. in situations of misunderstanding (see Barr & Keysar, 2006, for a review). Bard and colleagues (Bard et al., 2000; Bard & Aylett, 2004) propose, for their part, a model in which two types of cognitive processes are engaged during communication: one egocentric, automatic and fast, without cognitive cost (involving priming), the other requiring inferences, slow and more costly (involving the construction of a mental model of the interlocutor).
The objective of this PhD project will be to study how taking the interlocutor into account may affect the prosodic variations produced by the speaker. It will also seek to determine the constraints that lead speakers to take, or not to take, their interlocutor into account in conversational settings.
The candidate should have a background in psychology or in language sciences.
PhD topic proposed in co-supervision between:
Philippe Blache <philippe.blache@univ-amu.fr> or Magalie Ochs <magalie.ochs@lis-lab.fr>
LIS, LPL, ILCB, Aix-Marseille Université & CNRS
Deadline for application: June 3rd, 2019
The goal of this project is to develop a computational model of social skills for multimodal interaction systems. The model will focus more precisely on a specific context, task-oriented dialogues, in which all semantic and pragmatic aspects are controlled. It will be built by means of different machine learning methodologies applied to the analysis of natural corpora. This work will be part of an ongoing project (http://www.lpl-aix.fr/~acorformed/) aiming at developing a virtual reality system for training doctors to break bad news, with an embodied conversational agent playing the role of a virtual patient.
The PhD is organized around 4 main tasks:
- Describing the social skills relevant to the use case, with a specific focus on empathy and persuasion, through a corpus analysis. This task is done in collaboration with doctors involved in such training activities.
- Modeling the social skills, by means of machine learning techniques
- Implementing the social skills in the communication environment
- Developing a tool for the automatic evaluation of doctors' skills, adapted to the training goals
The PhD candidate should have a master's degree completed in one of the fields below:
- Computer science
- Artificial Intelligence
- Natural language processing
- Applied mathematics
The candidate should have a strong background in machine learning and modeling methods. Some complementary previous experience would be appreciated in the following topics:
- Human-computer interaction
- Multimodal data processing
- Data acquisition
- Corpus-based studies
- Affective computing
- Conversational agents
This fellowship is part of the ILCB action (https://www.ilcb.fr/call-for-applications/phd-grants/). It is a three-year work contract, with a net salary of approximately €1685/month, financial support for international research training and conference participation, and a contribution to the research costs.
French language is not required.
Aix Marseille University (http://www.univ-amu.fr/en), the largest French University, is ideally located on the Mediterranean coast, and only 1h30 away from the Alps.
The application file consists of the following documents:
- A detailed curriculum vitae,
- A description of the academic background, with a copy of academic records and of the most recent diploma,
- A cover letter describing why the applicant wishes to participate in this project and how their research interests fit the proposed topics,
- If possible, recommendation letters (including one from the Master's thesis supervisor or equivalent).
For any question, contact Philippe Blache <philippe.blache@univ-amu.fr> or Magalie Ochs <magalie.ochs@lis-lab.fr>
ILCB partners involved in the project:
Arnaud Rey (1)
Thierry Legou (2)
Corinne Fredouille (3)
Jean-François Bonastre (3)
Gilles Pouchoulin (2)
Pierre Pudlo (4)
Jean-Marc Freyermuth (4)
(1) Laboratoire de Psychologie Cognitive
(2) Laboratoire Parole et Langage
(3) Laboratoire Informatique d’Avignon
(4) Institut de Mathématiques de Marseille
Project summary:
Recent technological developments in data recording and storage have led to an unprecedented increase in research on early language development in young children. Combined with new generations of automatic data processing models, such as deep learning, new capabilities for analyzing children's speech production open the way to an unparalleled understanding of the early phases of human language development.
The present project is in line with these approaches and aims more particularly to study the spontaneous vocal productions of children between 0 and 12 months of age when the child is alone in her or his place of sleep. The general principle is to place an audiovisual recording system at the child's bedside for three consecutive days, once a month for twelve months. We will thus record the child's verbal (through sound recordings) and motor (through visual recordings) activity before and after each sleep period. The objective is to characterize pre-verbal and verbal productions for an initial sample of about twenty children using unsupervised classification methods, and to trace the evolution of these productions over time. These productions will also be matched with the child's motor activity, with different parameters of the family environment, and with her or his verbal skills at 2 years of age.
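By way of illustration, such an unsupervised analysis could start from MFCC summaries of the pre-segmented vocalizations and cluster them; the file layout, features and number of clusters below are hypothetical choices, not the project's fixed design.

    import glob
    import numpy as np
    import librosa
    from sklearn.cluster import KMeans

    # Hypothetical directory of pre-segmented vocalization clips.
    files = sorted(glob.glob("segments/*.wav"))

    feats = []
    for path in files:
        y, sr = librosa.load(path, sr=16000)
        mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13)
        # Summarize each clip by the mean and std of its MFCCs over time.
        feats.append(np.concatenate([mfcc.mean(axis=1), mfcc.std(axis=1)]))

    # Cluster the clips into candidate vocalization categories.
    labels = KMeans(n_clusters=5, n_init=10, random_state=0).fit_predict(np.array(feats))
    for path, lab in zip(files, labels):
        print(path, lab)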
For more details, please contact arnaud.rey@univ-amu.fr.
PhD co-supervisors: Nicolas Claidière (Laboratoire de Psychologie Cognitive) and Noël Nguyen (Laboratoire Parole et Langage), with the participation of Leonardo Lancia (Laboratoire de Phonétique et Phonologie, Univ Paris 3 & CNRS)
Abstract
In spoken language interactions, and for people to understand each other, speech sounds must be categorized consistently across listeners. Within a linguistic community, a common set of criteria must therefore be agreed upon, as regards how phonemic categories are delineated in the speech sound space. In spite of their central importance for social cognition and speech sciences, little attention has been devoted so far to the mechanisms that allow this shared perceptual landscape to emerge. The goal of this project will be to explore these mechanisms as they deploy within a group of participants, in an experimental framework.
The PhD student will contribute to devising an ensemble of innovative, joint-perception experiments, in which each listener's perceptual behavior can be affected by that of the other listeners. This will consist, for example, in having groups of listeners construct a mapping between unfamiliar speech sounds and sets of entities (e.g., visual shapes) in a coordinated way, within an experimental set-up that will make it possible for information to flow between listeners. Issues of interest will include the impact of local, pairwise interactions on the dynamics of the entire group, the geometry of the shared speech sound space and how it evolves over the course of the interactions between listeners, the potential benefit of performing a speech perception task collectively rather than individually.
To a very large extent, these issues remain unexplored in the speech perception domain. However, fruitful connections can be made with related, albeit different lines of work, which are concerned with cultural transmission in both humans and non-human species (Claidière, Smith, Kirby & Fagot, 2014) and with experimental approaches to the emergence and evolution of language (e.g. Kirby et al., 2008, Xu et al, 2013). In this project, a bridge will be established between these different domains, which will allow the PhD candidate to exploit the experimental methods and mathematical tools developed in studies on cultural transmission, to further our understanding of how speech perception works, and how it contributes to the evolution of phonological systems.
For more information:
Nicolas Claidière’s website: http://www.nicolas.claidiere.fr/
Noël Nguyen’s webpage: https://cv.archives-ouvertes.fr/noel-nguyen
Leonardo Lancia’s list of publications: http://lpp.in2p3.fr/Publications-776
Advisors:
B. Torrésani, Institut de Mathématiques de Marseille
C. Bénar, Institut de Neurosciences des Systèmes
Collaborations.
Agnès Trébuchon, AP-HM, Marseille
Jean-Marc Lina, Centre de Recherches Mathématiques, Montréal, Canada
Abstract
Current techniques for extracting spatio-temporal networks in MEG and EEG suffer from the inherent difficulties of solving the inverse problem (i.e. projecting the data from surface sensors to brain sources). We propose here to use a novel wavelet analysis approach in order to improve the extraction of language networks from MEG signals. The methods will be validated using simultaneous MEG-intracerebral EEG recordings.
Rationale
Brain function involves complex interactions between cortical areas at different spatial and temporal scales. Thus, the spatio-temporal definition of brain networks is one of the main current challenges in neuroscience. With this objective in view, electrophysiological techniques such as electroencephalography (EEG) and magnetoencephalography (MEG) offer a fine temporal resolution that allows capturing fast changes (at the level of the millisecond) across a wide range of frequencies (up to 100 Hz).
However, the spatial aspects require solving a difficult (extremely ill-posed) inverse problem that projects the signals recorded at the level of surface sensors to the cortex. Most existing methods suffer from several drawbacks, two of which will be addressed in this project:
• Data are processed at each time sample independently, disregarding temporal correlations. This is not optimal in terms of robustness to noise, a key issue in such ill-posed inverse problems, where noise sensitivity is extremely high. In addition, not accounting for temporal correlations at the sensor level is extremely penalizing if one aims at estimating spatio-temporal networks at the source level.
• Current methods suffer severely from 'leakage', i.e. the activity in a given region 'spills' onto neighbouring regions. This is mainly a consequence of the ill-posedness of the inverse problem, which imposes regularizations that tend to oversmooth the solution. Leakage can be reduced by using sparsity-enforcing spatial regularizations; however, defining such regularizations for signals supported on the cortical surface requires adequate representations for such signals.
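For concreteness, the classical per-sample approach criticized above can be written as a regularized minimum-norm estimate, X_hat = G^T (G G^T + lambda*I)^(-1) M, applied to each column of the sensor data M independently. The toy numpy sketch below (synthetic lead field, arbitrary dimensions) illustrates both the per-sample treatment and the leakage produced by the l2 regularization:

    import numpy as np

    rng = np.random.default_rng(0)
    n_sensors, n_sources, n_times = 64, 500, 200

    # Toy forward model: M = G X + noise, with G the lead-field matrix.
    G = rng.standard_normal((n_sensors, n_sources))
    X = np.zeros((n_sources, n_times))
    X[42] = np.sin(np.linspace(0, 8 * np.pi, n_times))  # one active source
    M = G @ X + 0.1 * rng.standard_normal((n_sensors, n_times))

    # Minimum-norm inverse operator, applied per time sample (each column).
    lam = 1.0
    K = G.T @ np.linalg.inv(G @ G.T + lam * np.eye(n_sensors))
    X_hat = K @ M

    # The l2 regularization spreads the recovered activity over many
    # sources ("leakage"): far more than one source appears active.
    col = np.abs(X_hat[:, 12])
    print(np.sum(col > 0.01 * col.max()))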
Recent advances in computing power, algorithm design and computational statistics now allow handling the spatial, temporal and frequency aspects of brain signals in a combined, simultaneous approach, instead of treating them in consecutive steps. The use of wavelet representations in the time domain has been shown to yield very significant dimension reduction; wavelet representations of functions supported on surfaces are expected to allow similar reductions in the spatial domain. The use of (large) spatio-temporal covariance matrices enables taking advantage of temporal correlations, provided the "curse of dimensionality" can be correctly controlled. Multivariate tensorial techniques can now extend classical principal component analysis and handle data with many dimensions (time, space, frequency, trials, conditions, subjects). Taken together, these advances have the potential to considerably improve the signal-to-noise ratio, and also to mitigate source leakage by capturing all leaked (zero-lag) activity originating from one region into a single component, thus providing a much finer spatio-temporal resolution.
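As a minimal illustration of what a multiway extension of PCA can look like, the sketch below computes HOSVD-style factors (leading singular vectors of each unfolding) of a synthetic space x time x frequency x trial array; the dimensions and the choice of plain HOSVD are assumptions made for illustration only.

    import numpy as np

    rng = np.random.default_rng(1)
    # Toy (space x time x frequency x trial) array of source power.
    T = rng.standard_normal((20, 100, 8, 30))

    def unfold(tensor, mode):
        # Matricize: move `mode` to the front, flatten the other modes.
        return np.moveaxis(tensor, mode, 0).reshape(tensor.shape[mode], -1)

    # One orthonormal factor per mode, a multiway analogue of PCA loadings.
    factors = []
    for mode in range(T.ndim):
        U, _, _ = np.linalg.svd(unfold(T, mode), full_matrices=False)
        factors.append(U[:, :3])  # keep 3 components per mode

    for mode, U in enumerate(factors):
        print(f"mode {mode}: factor shape {U.shape}")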
Objectives
The objective of this PhD project is to develop algorithms and data analysis procedures for the spatio-temporal characterization of brain networks across multiple frequencies in EEG and MEG signals, to validate them on simulated and real signals, and to apply the resulting methodology to language protocols in the framework of the ILCB.
In terms of algorithms and data analysis procedures, two ways will be investigated.
• On the one hand, the Bayesian combined space-time inverse problem approaches (the KwMEM algorithm) currently developed at I2M (Roubaud et al., 2018) will be extended. These exploit sophisticated dimension reduction (in time and space), matrix factorization and optimization techniques to control the curse of dimensionality and to directly process space-time measurements. A main extension will involve the use of cortical wavelets, i.e. spatial-domain wavelets (Özkaya 2013, Özkaya & Van De Ville 2011), to describe spatial variations of activity on the cortical surface. The use of time-domain wavelet frames (which are translation invariant) instead of bases, and several (space-time) prior distributions for cortical sources (in addition to the currently used Gaussian mixture priors), will also be investigated. Finally, sparse multivariate techniques will be applied to the estimated sources to infer space-time graphs for modeling brain networks. Again, given the size of these data, the curse of dimensionality will have to be handled appropriately, which the cortical wavelet expansions should make possible.
• On the other hand, multivariate analysis techniques will be considered as alternatives to solving the inverse problem. These have been shown to provide simple tools for source localization and separation at the sensor level, which can be exploited further for localization. Modern multivariate approaches developed at I2M in another context (NMR and fluorescence spectroscopy), namely sparse tensor factorizations, are expected to provide simple approaches that can handle higher-dimensional data (such as time-frequency-space, or time-frequency-space-trial).
• The developed approaches will be compared with classical methods (beamformers, minimum norm estimates), first on simulated data and then on real data obtained in language tasks.
In particular, we will use simultaneous EEG-MEG-intracerebral data obtained at the INS in patients during presurgical evaluation of epilepsy (Figure 1). These data will provide an intracerebral "ground truth" to which non-invasive results can be compared.
Context
This project will be a collaboration between the Institut de Mathématiques de Marseille (I2M; B Torrésani) and the Institut de Neuroscience des Systèmes (INS, C Bénar, JM Badier, A Trébuchon). The I2M Signal-Image team is specialized in the design of state-of-the art signal processing algorithms, involving sparsity constraints and wavelet/time-frequency analysis, together with computational statistics. The INS has extensive experience in the recording and analysis of brain signals, including trimodal EEG-MEG-intracerebral acquisitions.
Figure 1: Multivariate and multimodal graph characterization in an auditory language task. A nonlinear correlation (h2) graph was computed between the SEEG signal (H' electrode, in the auditory cortex) and MEG signals (sources obtained from independent component analysis), in response to an auditory language protocol ("Ba" and "Pa" sounds; Trebuchon-Da Fonseca et al., 2005) (figure credit: S. Medina, Dynamap team, INS). ICA renders the problem sparse by reducing data dimension; it relies, however, on an independence constraint that is not fully justified. The methods proposed in this project will help introduce physiologically relevant sparsity constraints based on a multiscale (wavelet) approach.
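For reference, the nonlinear correlation index h2 used in Figure 1 is classically estimated by regressing one signal on the other with a piecewise-linear fit; the minimal sketch below runs on synthetic data, and the bin count and implementation details are illustrative.

    import numpy as np

    def h2(x, y, n_bins=10):
        # h2: fraction of the variance of y explained by a
        # piecewise-linear regression of y on x.
        order = np.argsort(x)
        x_s, y_s = x[order], y[order]
        edges = np.linspace(x_s[0], x_s[-1], n_bins + 1)
        centers = 0.5 * (edges[:-1] + edges[1:])
        idx = np.clip(np.digitize(x_s, edges) - 1, 0, n_bins - 1)
        means = np.array([y_s[idx == b].mean() if np.any(idx == b) else np.nan
                          for b in range(n_bins)])
        valid = ~np.isnan(means)
        y_fit = np.interp(x_s, centers[valid], means[valid])
        return 1.0 - np.var(y_s - y_fit) / np.var(y_s)

    rng = np.random.default_rng(2)
    x = rng.standard_normal(1000)
    y = x ** 2 + 0.1 * rng.standard_normal(1000)
    print(h2(x, y))  # close to 1, although the linear r is near 0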
References
Badier J M, Dubarry A S, Gavaret M, Chen S, Trebuchon A S, Marquis P, Regis J, Bartolomei F, Benar C G and Carron R 2017 Technical solutions for simultaneous MEG and SEEG recordings: towards routine clinical use Physiol Meas 38 N118-N27
Cong F, Lin Q H, Kuang L D, Gong X F, Astikainen P and Ristaniemi T 2015 Tensor decomposition of EEG signals: a brief review J Neurosci Methods 248 59-69
Dubarry A S, Badier J M, Trebuchon-Da Fonseca A, Gavaret M, Carron R, Bartolomei F, Liegeois-Chauvel C, Regis J, Chauvel P, Alario F X and Benar C G 2014 Simultaneous recording of MEG, EEG and intracerebral EEG during visual stimulation: from feasibility to single-trial analysis Neuroimage 99 548-58
Lina JM, Chowdhury R, Lemay E, Kobayashi E and Grova C 2014 Wavelet-based localization of oscillatory sources from magnetoencephalography, IEEE Trans Biomed Eng 61 2350-64
Özkaya SG 2013 Randomized Wavelets on Arbitrary Domains and Applications to Functional MRI Analysis, PhD Thesis, Princeton University, Program in Applied and Computational Mathematics
Özkaya SG and Van De Ville D 2011 Anatomically adapted wavelets for integrated statistical analysis of fMRI data, 2011 IEEE International Symposium on Biomedical Imaging: From Nano to Macro
Palva J M, Wang S H, Palva S, Zhigalov A, Monto S, Brookes M J, Schoffelen J M and Jerbi K 2018 Ghost interactions in MEG/EEG source space: A note of caution on inter-areal coupling measures Neuroimage 173 632-643
Roubaud MC, Carrier J, Lina JM and Torrésani B 2018 Space-time extension of the MEM approach for electromagnetic neuroimaging, IEEE conference on Machine Learning and Signal Processing (MLSP 2018)
Toumi I, Torresani B and Caldarelli S 2013 Effective Processing of Pulse Field Gradient NMR of Mixtures by Blind Source Separation Anal Chem 85 11344-51
Vu XT, Chaux C, Thirion-Moreau N, Maire S and Carstea EM 2017 Journal of Chemometrics 31(4)
Supervisors: Sophie Dufour & Amandine Michelas
In contrast to languages such as Spanish, in French the position of accent within a word does not change its meaning (e.g. /'bebe/ "s/he drinks" vs. /be'be/ "baby" in Spanish, whereas in French both forms mean the same word, 'baby'). In French, the main accent, called primary accent, falls on the last syllable of a unit larger than the word, namely the accentual phrase. For instance, the monosyllabic word chat "cat" receives primary accent in the phrase un petit 'chat "a little cat" because it is the last full syllable of the accentual phrase. In contrast, it is unaccented in un chat 'triste "a sad cat" because it is not in final position within the accentual phrase. To date, there are numerous demonstrations that accent in French is used in syntactic parsing and in the segmentation of continuous speech into words (Christophe et al., 2004; Spinelli et al., 2010). However, the role of accent in spoken word recognition remains poorly documented. Since French speakers are inevitably exposed to both the accented and unaccented versions of words, models assuming the storage of multiple variants (Connine, 2004; Goldinger, 1998) predict that accent in French could be represented in the mental lexicon. In this PhD project, using both EEG and behavioral experiments, we will examine how accent is represented in French and how it affects spoken word recognition.
PhD candidates will be expected to have a background in psycholinguistics and/or phonetics and to demonstrate an interest in word recognition and prosody.
Christophe, A., Peperkamp, S., Pallier, C., Block, E., & Mehler, J. (2004). Phonological phrase boundaries constrain lexical access I. Adult data. Journal of Memory and Language, 51, 523-547.
Connine, C. M. (2004). It's not what you hear, but how often you hear it: On the neglected role of phonological variant frequency in auditory word recognition. Psychonomic Bulletin & Review, 11, 1084–1089.
Goldinger, S. D. (1998). Echoes of echoes? An episodic theory of lexical access. Psychological Review, 105, 251–279.
Spinelli, E., Grimault, N., Meunier, F., & Welby, P. (2010). An intonational cue to word segmentation in phonemically identical sequences. Attention, Perception, & Psychophysics, 72, 775-787.
Supervisors: Joël Fagot, Pascal Belin
Behavioural Studies of Voice Perception in Baboons
PhD topic proposed in co-supervision between
Joël Fagot (Laboratoire de Psychologie Cognitive, https://lpc.univ-amu.fr/fr/profile/fagot-joel ) and
Pascal Belin (Institut de Neurosciences de La Timone, https://neuralbasesofcommunication.eu/people/pascal-belin/ )
Speech perception—the ability to extract and process linguistic information in the voice—may be unique to humans, but other voice perception abilities are widely shared across the animal kingdom.
This PhD project will adopt a comparative approach to investigate the perceptual mechanisms of voice perception in baboons and compare them with ours.
Behavioural testing will be performed on the CCDP platform of the Rousset primatology station, where a group of semi-free-ranging baboons interacts ad libitum with automated testing systems.
Two basic building blocks of voice perception abilities, ecologically relevant for both humans and baboons, will be focused on:
(1) the detection of conspecific vocalizations (CV) amongst other sounds; and (2) the discrimination of different speakers.
It is anticipated that the results of this investigation will bring crucial new knowledge relevant to the evolution of voice perception abilities in primates.
Supervisors : M. Bonnard (INS), C. Pattamadilok (LPL)
What does the dorsal premotor cortex do during reading and writing?
Reading and writing are closely related activities.
Several brain imaging studies have provided data suggesting a close relationship between the two.
For instance, activation of the left dorsal premotor cortex (dPM), known as Exner's area (Exner, 1881), which is a key area in writing (Planton et al., 2013), has also been reported during the visual processing of words and letters (Longcamp et al., 2003, 2011; Nakamura et al., 2012).
Our recent study (Pattamadilok et al., 2016), in which Transcranial Magnetic Stimulation (TMS) was used to interrupt the function of the left dPM during a visual lexical decision task, showed that this area contributes to fluent reading and therefore plays a functional role in this activity. However, the nature of its contribution remains unclear.
According to the “motor hypothesis”, the left dPM might have a motor function, i.e., learning to read and write strengthens the connectivity between visual and motor systems such that the presence of visual words/letters would automatically activate the associated gestures.
This implicit evocation of writing motor processes would, in turn, reinforce the recognition of written stimuli.
This view is nevertheless challenged by the observation that the left dPM is also activated during keyboard typing, that is, when handwriting gestures are not explicitly or implicitly required (Purcell et al., 2011). These observations have led to an alternative hypothesis according to which the area may play a more central role in language processing.
According to this “cognitive hypothesis”, the contribution of this area to reading would be due to shared cognitive components between writing and reading, more specifically, the sublexical and serial processes. The main goal of the thesis is to investigate the properties of the left dPM during reading and writing.
More specifically, three issues will be addressed using three applications of stereotaxic TMS: (1) the functional role of this area, with particular attention to the involvement of the left dPM in the motor vs. cognitive aspects of these activities (using an interruptive TMS protocol); (2) the properties of neuronal populations in the left dPM, testing the hypotheses that the area contains either a homogeneous population of neurons with both cognitive and motor functions, or two functionally segregated subpopulations (using a TMS adaptation paradigm; Pattamadilok et al., submitted); (3) the functional connectivity of this area with other brain regions (primary and supplementary motor and visual cortices, and other regions within the language network), using combined TMS-EEG.
References
Exner S. 1881. Untersuchungen über die Localisation der Functionen in der Grosshirnrinde des Menschen. Wilhelm Braumüller.
Longcamp M, Anton JL, Roth M, Velay JL (2003). Visual presentation of single letters activates a premotor area involved in writing. Neuroimage 19:1492–1500.
Nakamura K, Kuo WJ, Pegado F, Cohen L, Tzeng OJL, Dehaene S (2012). Universal brain systems for recognizing word shapes and handwriting gestures during reading. Proc Natl Acad Sci USA 109:20762–20767.
Pattamadilok C, Planton S, & Bonnard M (submitted). Phonology-coding neurons in the 'Visual Word Form Area': Evidence from a TMS adaptation paradigm.
Pattamadilok C, Ponz A, Planton S, & Bonnard M (2016). Contribution of writing to reading: Dissociation between cognitive and motor process in the left dorsal premotor cortex. Human Brain Mapping, 37, 1531–1543.
Planton S, Jucla M, Roux F-E, Démonet J-F (2013). The "handwriting brain": a meta-analysis of neuroimaging studies of motor versus orthographic processes. Cortex 49:2772–2787.
Purcell JJ, Napoliello EM, Eden GF (2011). A combined fMRI study of typed spelling and reading. Neuroimage 55:750–762.
Supervisors: Pascale Colé (Laboratoire de Psychologie Cognitive, UMR 7290) & Christine Assaiante (Laboratoire de Neurosciences Cognitives, UMR 7291)
Cognitive functioning in adults with dyslexia
Reading in adults with dyslexia poses a real scientific challenge: despite significant deficits in the low-level components of reading (decoding), some of these adults manage to pursue higher education.
Although some of the deficits shown during childhood persist in dyslexic adults, the rare studies conducted with these participants suggest the emergence of a particular cognitive profile that is, in part, the result of cognitive compensations developed naturally or through remediation. The proposed research focuses in particular on the role of cognitive control and semantic memory in this compensation system, using a wide variety of techniques: EEG, eye movements and classical behavioral indices (reaction times).
It also investigates the links between language representations and sensorimotricity in explaining the phonological deficits of adults with dyslexia.
Supervisors : Noël Nguyen, Elin Runnqvist
Adaptive prediction in the joint production of speech
Supervisor: Noël Nguyen (LPL)
ILCB collaborators: Elin Runnqvist (LPL), Kristof Strijkers (LPL), Mireille Besson (LNC)
External collaborators: Alessandro D'Ausilio (IIT / University of Ferrara, Italy), Cristina Baus (UPF, Barcelona)
ILCB Transversal Question #4 (Cerebral and cognitive underpinnings of conversational interactions)
In conversational interactions, the mechanisms by which speakers predict what their interlocutors will say next are an essential feature.
Predictive mechanisms can account for the fact that turn-taking between conversational partners is performed both smoothly and rapidly.
They are also assumed to contribute to making it easier for each partner to process and understand the other partner’s utterances.
In current influential theoretical frameworks (e.g. Pickering & Garrod, 2013), they rely on a close perception-production link, as it is assumed that prediction of what the other is about to say is based on the speaker’s own spoken language production system.
The goal of this project will be to further explore the brain and cognitive underpinnings of prediction in conversational interactions.
We will use a joint-action experimental paradigm, in which participants will perform a speech production task in conjunction with another human partner or a robot. Recent EEG studies (e.g. Baus et al, 2014) on joint speech production in dyads of human participants have provided evidence that participants predict their partner’s upcoming word using processes that they also use in producing words themselves.
The question at the heart of the present project will be: to what extent is prediction adaptive, i.e. fine-tuned to the partner's individual speech production characteristics? If tuning of prediction to the partner's idiosyncratic speech behavior does take place, over what time scale does it arise, and which speech properties does it focus on?
EEG will be used to explore EEG components (such as the N100, P200, and N400) associated with prediction processes in speaking with a human partner. Building upon previous work on the somatotopic activation of the motor cortex in both speech production and perception (e.g., D'Ausilio et al, 2009, 2014), we will also employ MEG combined with MRI for source localization (Strijkers et al, 2017) to determine to what extent participants predict the articulatory make-up of their partner’s upcoming word.
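As an illustration of the intended EEG analysis, extracting such components could follow the standard epoching-and-averaging route, sketched here with MNE-Python; the file name, event code and channel selection are hypothetical placeholders.

    import mne

    # Hypothetical recording from a joint-naming session; event code 1
    # marks the onset of the partner's trials, where prediction is expected.
    raw = mne.io.read_raw_fif("sub01_joint_naming_raw.fif", preload=True)
    events = mne.find_events(raw)
    epochs = mne.Epochs(raw, events, event_id={"partner": 1},
                        tmin=-0.2, tmax=0.8, baseline=(None, 0))
    evoked = epochs["partner"].average()

    # Mean amplitude in the N400 window over two centro-parietal channels.
    n400 = evoked.copy().crop(0.3, 0.5).pick(["Cz", "Pz"])
    print(n400.data.mean())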
A robot (Furhat) will be used as the participant's partner in some experiments, with a view to accurately manipulating both the timing and sound shape of the robot's utterances.
The project will contribute to a better understanding of the brain mechanisms that allow us to anticipate our partner's upcoming utterance in conversational interactions.
PhD candidates will be expected to have a solid background in neurolinguistics. Experience with EEG and/or MEG, and with speech processing techniques, will be appreciated. Candidates should be prepared to spend research stays at both IIT / Ferrara and UPF / Barcelona in the course of the PhD.
Supervisors : Elin Runnqvist, Sonja Kotz
Why the basal ganglia, cerebellum and medial frontal cortex are critical for the learning of speech sequences
• PhD-project proposal supervised by Elin Runnqvist (LPL) and Sonja Kotz (University of Maastricht)
• Collaborators: Andrea Brovelli (INT)
• QT5: “Temporal Networks” (but also related to QT3 “Language & motor control”)
The ability to interpret and produce structured sound/motor sequences is at the core of human language.
This aspect of language learning most frequently involves auditory input leading to articulatory output (i.e., speech perception used to learn speech production).
The basal ganglia (BG), cerebellum (CB), and medial frontal cortex (MFC) are important neural pillars of reward-based, error-based, and unsupervised learning respectively.
The main aim of the current research is to shed light on the involvement of each type of learning as well as their potential interactions in the acquisition of novel speech sequences. An interesting and open question is to what extent the successful acquisition of novel speech motor sequences engages all learning mechanisms and systems.
Furthermore, a growing body of evidence concerning the reciprocal structural and functional connectivity between BG and CB as well as between these subcortical structures and the MFC raises the question as to what extent the three learning mechanisms work independently or in concert (e.g., Hoshi et al., 2005; Akkal et al., 2007; Bostan et al, 2010; 2013).
For example, it has been proposed that specific behaviors or functions can be realized by a combination of multiple learning modules (Doya, 2000). Some authors have also argued that such cooperation between BG, CB, and cortex could be beneficial for solving the so-called "credit assignment" problem in learning (Minsky, 1963), that is, getting the right information to the right place at the right time for it to be effective in guiding the learning process (e.g., Houk and Wise, 1995).
Others have argued that BG and CB may be involved to different extents during different stages of learning (e.g., Doyon et al., 2003). Concretely, in the case of motor sequence learning, the contribution of CB would precede that of BG, such that with extended practice CB would no longer be essential, and long-lasting retention of a skill should involve representational changes in BG and its associated cortical structures.
In this project, we aim to shed light on the implication of the three learning mechanisms in speech motor sequence learning, as indexed by the involvement of BG, CB, and MFC, with special emphasis on the functional and dynamical interactions of these brain areas. A multi-method approach using both fMRI and MEG while participants engage in a shadowing task (i.e., auditory+visual perception followed by overt production) will allow gathering information about the involvement of these regions as well as their structural (diffusion tensor imaging) and functional connectivity. Within the shadowing task we will (a) manipulate reward by providing feedback on accuracy (e.g., "well done!"), maximizing the possibility of relying on reward-based learning, and (b) manipulate sensorimotor predictability through the level of noise in the auditory feedback of participants' own speech, modulating the extent to which it is possible to rely on error-based learning. Participants will be tested behaviorally over several training sessions, and we will manipulate the quantity of training across two conditions so as to obtain an index of early and late learning stages during a final testing session with fMRI or MEG. Time-series analyses of both fMRI and MEG data will also be conducted in order to examine learning as a continuum.
The results will advance our knowledge of the human ability to acquire and produce speech sequences, and will clarify how two of the most important learning and monitoring systems in the human brain (the basal ganglia and cerebellum) might be functionally interconnected and work in concert with the cerebral cortex to sustain learning in cognition.
Supervisors : Bruno Torrésani, Christian Bénar
Multidimensional characterization of brain networks in language tasks
B. Torrésani, HDR, Institut de Mathématiques de Marseille C. Bénar, HDR, Institut de Neurosciences des Systèmes
Related transversal question: "Temporal networks". Rationale: Brain function involves complex interactions between cortical areas at different spatial and temporal scales.
Thus, the spatio-temporal definition of brain networks is one of the main current challenges in neuroscience.
With this objective in view, electrophysiological techniques such as electroencephalography (EEG) and magnetoencephalography (MEG) offer a fine temporal resolution that allows capturing fast changes (at the millisecond level) across a wide range of frequencies (up to 100 Hz). However, the spatial aspects require solving a difficult (extremely ill-posed) inverse problem that projects the signals recorded at the surface sensors onto the cortex. So far, most existing methods process data at each time sample separately.
This is not optimal in terms of robustness to noise, a key issue in this ill-posed, highly noise-sensitive inverse problem. Moreover, current methods suffer severely from 'leakage', i.e. the activity in a given region 'spills' onto neighbouring regions because of the blurriness of typical inverse problem algorithms (Palva et al., 2018).
Recent advances in computer capacities, algorithm design and computational statistics allow now handling together space, time and frequency aspects of the brain signals into a combined simultaneous approach, instead of applying them in consecutive steps.
The use of (large) spatio-temporal covariance matrices enables taking advantage of temporal correlations, provided the curse of dimensionality can be correctly controlled. Multivariate tensorial techniques can now extend classical principal component analysis and handle data with many dimensions (time, space, frequency, trials, conditions, subjects) (reviewed in Cong et al., 2015). Taken together, these advances have the potential to considerably improve the signal-to-noise ratio, and also to mitigate source leakage by capturing all leaked (zero-lag) activity originating from one region into a single component, thus providing a much finer spatio-temporal resolution.
Objectives The objective of this PhD project is to develop algorithms and data analysis procedures for spatio-temporal characterization of brain networks across multiple frequencies, for EEG and MEG signals. In terms of algorithms and data analysis procedures, two ways will be investigated.
On the one hand, the Bayesian combined space-time inverse problems approaches currently developed at I2M will be extended. The latter exploit sophisticated dimension reduction (in time and space), matrix factorization and optimization techniques to control the curse of dimensionality and process directly space-time measurements.
Extensions of these techniques will involve the use of wavelet frames (which are translation invariant) instead of bases and the investigation of several (space-time) prior distributions for cortical sources (in addition to the currently used Gaussian mixture priors).
Also, sparse multivariate techniques will be applied to the estimated sources to infer space-time graphs for modeling brain networks. Again, given the size of these data, the curse of dimensionality will have to be handled appropriately.
Besides, multivariate analysis techniques will be considered as alternatives to the inverse problem resolution. It has been shown that these provide simple tools for source localization and separation at the sensor level, that can be exploited further for localization.
Modern multivariate approaches developed at I2M in another context (i.e. NMR and/or fluorescence spectroscopy (Toumi et al., 2013)), namely sparse tensor factorizations (Vu et al 2017), are expected to provide simple approaches that could handle higher dimensional data (such as time-frequency-space, or time-frequency-space-trial).
The developed approaches will be compared with classical methods (beamformer, minimum norm estimates), first on simulated data and then in real data obtained in language tasks.
In particular, we will use simultaneous EEG-MEG-intracerebral data obtained at the INS in patients during presurgical evaluation of epilepsy (Figure 1).
These data will provide an intracerebral "ground truth" to which non-invasive results can be compared (Dubarry et al., 2014; Badier et al., 2017).
We will use data from language protocols that either have been already acquired (Ba/Pa, collaboration with JM Badier), or will be acquired in the next months in the framework of the 'Scales' Project (FLAG-ERA, PI Bénar).
Context This project will be a collaboration between the Institut de Mathématiques de Marseille (I2M; B Torrésani) and the Institut de Neuroscience des Systèmes (INS, C Bénar, JM Badier, A Trébuchon).
The I2M Signal-Image team is specialized in the design of state-of-the art signal processing algorithms, involving sparsity constraints and time-frequency analysis, together with computational statistics.
The INS has extensive experience in the recording and analysis of brain signals, including trimodal EEG-MEG-intracerebral acquisitions. Figure 1: Multivariate and multimodal graph characterization in an auditory language task.
A nonlinear correlation (h2) graph was computed between the SEEG signal (H' electrode, in the auditory cortex) and simultaneously recorded MEG signals (sources obtained from independent component analysis), in response to an auditory language protocol ("Ba" and "Pa" sounds; Trebuchon-Da Fonseca et al., 2005).
References
Badier J M, Dubarry A S, Gavaret M, Chen S, Trebuchon A S, Marquis P, Regis J, Bartolomei F, Benar C G and Carron R 2017 Technical solutions for simultaneous MEG and SEEG recordings: towards routine clinical use Physiol Meas 38 N118-N27
Cong F, Lin Q H, Kuang L D, Gong X F, Astikainen P and Ristaniemi T 2015 Tensor decomposition of EEG signals: a brief review J Neurosci Methods 248 59-69
Dubarry A S, Badier J M, Trebuchon-Da Fonseca A, Gavaret M, Carron R, Bartolomei F, Liegeois-Chauvel C, Regis J, Chauvel P, Alario F X and Benar C G 2014 Simultaneous recording of MEG, EEG and intracerebral EEG during visual stimulation: from feasibility to single-trial analysis Neuroimage 99 548-58
Palva J M, Wang S H, Palva S, Zhigalov A, Monto S, Brookes M J, Schoffelen J M and Jerbi K 2018 Ghost interactions in MEG/EEG source space: A note of caution on inter-areal coupling measures Neuroimage 173 632-643
Toumi I, Torresani B and Caldarelli S 2013 Effective Processing of Pulse Field Gradient NMR of Mixtures by Blind Source Separation Anal Chem 85 11344-51
Vu XT, Chaux C, Thirion-Moreau N, Maire S and Carstea EM 2017 Journal of Chemometrics 31(4)
Supervisors : Alexis Nasr, Françoise Vitu
This PhD topic lies at the intersection of natural language processing (NLP) and eye-tracking research.
It will be co-supervised by Françoise Vitu of the Laboratoire de Psychologie Cognitive and Alexis Nasr of the Laboratoire d'Informatique et Systèmes.
NLP models perform linguistic analyses of utterances, such as morphological, syntactic, semantic or discourse analysis.
These models predict abstract representations (syntactic, semantic, ...) from observables such as text or the speech signal. Analyzing the behavior of these tools makes it possible to identify zones of uncertainty: points where processing could continue in several different directions.
These zones generally correspond to ambiguities, and it is often at these points that the computer makes errors.
D'autre part, les données issues de l'oculométrie, constituent une trace du mouvement des yeux lors de la lecture.
Ces données révèlent que l’énoncé n'est pas traité strictement séquentiellement, c'est-à-dire d'un mot au mot suivant.
Une partie de ces mouvements est sensible aux influences linguistiques.
Ils se sont révélés comme des indices d’ambiguité syntaxique ou sémantique.
Il s'agit du temps total de fixation sur un mot, de la probabilité que ce mot soit à plus long terme (c’est-à-dire une fois que les yeux auront avancé un peu plus loin dans la phrase)
l’objet d’un second passage ou relecture et le temps total de visualisation du mot (la somme de toutes les durées des fixations sur le mot au travers de tous les passages).
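To make these measures concrete, the following minimal Python sketch computes, from one trial's (hypothetical) fixation sequence, the total viewing time per word and the words that receive a second pass:

    from collections import defaultdict

    # Fixation sequence from one trial: (word_index, duration_ms),
    # in chronological order (hypothetical data).
    fixations = [(0, 210), (1, 180), (3, 250), (2, 300), (3, 150), (4, 200)]

    total_time = defaultdict(int)  # total viewing time per word
    passes = defaultdict(int)      # number of reading passes per word
    prev = None

    for word, dur in fixations:
        total_time[word] += dur
        # A new pass starts when the eyes land on a word they were not
        # already fixating; more than one pass means the word was reread.
        if word != prev:
            passes[word] += 1
        prev = word

    reread = sorted(w for w, n in passes.items() if n > 1)
    print(dict(total_time), reread)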
The PhD topic lies at the intersection of these two types of observations.
It aims to study how eye-tracking data could be integrated into NLP models and, conversely, how NLP models could help to understand, or even to model and predict, eye-movement behavior during reading. The work will build on three existing resources:
- The MASC model (Model of Attention in the Superior Colliculus), developed by F. Vitu (LPC) in collaboration with H. Adeli & G. Zelinsky (NY, USA), which makes it possible to determine the part of oculomotor behavior that purely reflects (non-linguistic) visuo-motor mechanisms.
- The eye-tracking data recorded in the framework of the "BLRI book reading Corpus" project.
- The MACAON software, developed at the Laboratoire d'Informatique et Systèmes, which performs various linguistic processing tasks.
This PhD topic falls under transversal question 5 (Deep Learning), insofar as the predictions made by NLP models rely on deep neural networks.
Supervisors : Marie Montant, Christine Deruelle
The role of emotions in the perception of abstract words: An embodied perspective
Co-direction: Marie Montant (1) & Christine Deruelle (2); (1) Laboratoire de Psychologie Cognitive, LPC, UMR 7290, Marie.Montant@univ-amu.fr; (2) Institut de Neurosciences de la Timone, INT, UMR 7289, Christine.Deruelle@univ-amu.fr
Since the emergence of cognitivism in the 1950s, thought has been considered the result of computations close to those a computer would perform, detached from the organic body and from the environment in which that body senses and acts.
At the opposite pole from cognitivism, an embodied conception of thought ("embodiment") emerged in the 1990s.
On this view, cognition is approached from an empiricist perspective according to which objects of thought (for example, the concept of a dog or that of freedom) are the fruit of a constant dialogue between the perceiving/acting body and its environment.
This PhD project aims to rethink the representations carried by the term "abstraction" as it is used today in neurolinguistics, and more precisely to ask how abstract words are encoded in the human brain.
Indeed, semantic abstraction is often defined by default, as the negative of the concrete or the imageable: an abstract word, such as freedom or truth, is one that is not directly attached to sensory experience.
A dog can be petted, whereas freedom is impalpable.
The embodiment thesis assumes that there are no "abstract" terms per se: on the one hand, freedom has a rather concrete meaning for a person leaving prison; on the other hand, "a dog", like "freedom", can be considered a singular term generalized for purposes of classification, economy and communication: the term "dog" designates all animals sharing a "family resemblance".
Our hypothesis is that the "family resemblance" that allows diverse situations (which vary much more from one individual to another than those associated with the word dog) to be categorized under the same abstract word (freedom) rests, among other things, on the emotions generated by these situations: the accumulation of situations (scenes, events) in which the word freedom is used would lead to a neural coding of this word that involves, among others, the neural network underlying emotional experience.
Emotions, with their train of physiological manifestations, would thus serve as the bodily anchor of abstract words.
Our objective is to show that the very recognition of abstract words (lexical access) can be affected when their emotional component is acted upon by modifying the bodily state of participants.
The aim is thus to demonstrate the existence of a (bottom-up) causal chain between bodily modifications (physiological, mechanical), emotions, and the processing of abstract words.
We will manipulate bodily modifications likely to produce emotional perturbations, which should in turn affect the perception and comprehension of abstract words.
These perturbations, which will be physiological (e.g., heart rate) or mechanical (e.g., constraints exerted on the expressive muscles of the face), should affect, in a facilitatory or inhibitory way, the visual recognition of abstract words associated with emotions, depending on whether or not the valence of these emotions matches that induced by the perturbations.
The empirical studies will be conducted using two brain imaging techniques: fMRI, for the spatial precision of its activation maps, and TMS, for its temporal precision and its potential disruptive effects on the early stages of abstract word recognition. The candidate should have a solid background in neurosciences, a demonstrated interest in multidisciplinary approaches, and openness to international collaborations (a good level of at least one foreign language is required).
Supervisors: Benjamin Morillon (INS) & Kristof Strijkers (LPL)
(Potential) ILCB collaborators: Daniele Schon (INS), Andrea Brovelli (INT), Elin Runnqvist (LPL), Marie Montant (LPC)
(Potential) external collaborators: Anne-Lise Giraud (UNIGE), Sonja Kotz (UM), Friedemann Pulvermuller (FUB)
ILCB PhD & Postdoctoral Topic Proposal
Primary QT: QT3
Secondary QT: QT5
While traditional models proposed a strict separation between the activation of motor and sensory systems for the production versus the perception of speech, respectively, most researchers now agree that there is substantial functional interaction between sensory and motor activation during language behavior. Despite this growing consensus that the integration of sensorimotor knowledge plays an important role in the processing of speech and language, there is much less agreement on what that role may be, or on the functional mechanisms that could underpin it.
Indeed, many questions from various perspectives remain open in the current state of the art: Is the role of sensorimotor activation modality-specific, in that it serves a different function in perception than in production?
Is it only relevant for the processing of speech sounds, or does it also play a role in language processing and the understanding of meaning more generally?
Can sensory codes be used to predict motor behavior (production) and can motor codes be used to predict sensory outcomes (perception)?
And if so, how are such predictions implemented at the mechanistic level (e.g., do different patterns of oscillatory entrainment between sensory and motor systems reflect different dynamical and/or representational properties of speech and language processing)?
And how can such sensorimotor integration scale up from arbitrary speech sounds to well-structured, meaningful words and language behavior?
The goal of this project is to advance our understanding of these open questions (across different 'sub-topics') by taking advantage of the complementary expertise of the supervisors, with B. Morillon being an expert on the cortical dynamics of sensorimotor activation in the perception of speech, and K. Strijkers being an expert on the cortical dynamics of sensorimotor activation in the production and perception of language.
At the center of the project, serving as its connecting red thread, is the supervisors' shared interest in the role of 'time' (temporal coding) as a potential key factor that binds sensorimotor activation during the processing of speech and language. On this view, 'time' transcends its classical notion as a mere processing vehicle (i.e., the simple propagation of activation from sensory to motor systems and vice versa) and may itself reflect representational knowledge of speech and language.
One of the main goals of the current project is thus to test the hypothesis that temporal information between sensory and motor codes serves a key role in the production and perception of speech and language.
More specifically, we will explore whether sensorimotor integration during speech and language processing reflects: (a) the prediction of temporal information; (b) the temporal structuring of speech sounds and articulatory movements; (c) the temporal binding of phonemic and even lexical elements in language.
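As one hypothetical illustration of how such temporal coupling between sensory and motor signals could be quantified, the sketch below computes a phase-locking value (PLV) between two simulated signals that share a 4 Hz "syllabic" rhythm. The signals, the frequency, and the choice of PLV are assumptions made for illustration, not the project's actual analysis pipeline.

```python
# Hypothetical sketch: phase-locking value (PLV) between a simulated
# "sensory" and "motor" signal, one standard way to quantify oscillatory
# sensorimotor coupling. Real analyses would first band-pass filter the
# recordings around the frequency band of interest.
import numpy as np
from scipy.signal import hilbert

fs = 1000                                  # sampling rate (Hz)
t = np.arange(0, 2.0, 1 / fs)
rng = np.random.default_rng(42)

# Shared 4 Hz rhythm with a fixed phase lag between the two signals
sensory = np.sin(2 * np.pi * 4 * t) + 0.5 * rng.standard_normal(t.size)
motor = np.sin(2 * np.pi * 4 * t - np.pi / 4) + 0.5 * rng.standard_normal(t.size)

# Instantaneous phase from the analytic (Hilbert) signal
phase_s = np.angle(hilbert(sensory))
phase_m = np.angle(hilbert(motor))

# PLV = magnitude of the mean phase-difference vector (1 = perfect locking)
plv = np.abs(np.mean(np.exp(1j * (phase_s - phase_m))))
print(f"PLV = {plv:.2f}")
```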
We will consider PhD candidates and postdoctoral researchers to conduct research on any of the three topics specified above (a-c). Interested candidates can contact us by email (Benjamin Morillon: bnmorillon@gmail.com; Kristof Strijkers: Kristof.strijkers@gmail.com), including a CV and a motivation letter (1-2 pages).
A strong background in speech and language processing and/or knowledge of spatiotemporal neurophysiological techniques and analyses will be considered a strong plus.
Supervisors: Florence Gaunet (LPC), Thierry Legou (LPL) & Prof. Anne-Lise Giraud (Geneva Univ. / IMERA position from Feb to June 2019)
Implications: QT1 (primary: involvement of motricity/motor representations in speech perception),
QT3 (secondary: the animal as a model for the study of language)
Request: Postdoctoral or doctoral grant
Summary: We intend to explore dogs' neural and perceptual responses to syllabic speech, in order to understand auditory speech processing in a species with reduced articulated production capabilities and, therefore, reduced motor control. It may be that dogs perceive speech using only the acoustic cues they can themselves produce, i.e., short "syllable-like" intonated sounds. Alternatively, they might be sensitive to cues that they cannot produce at all.
Given dogs' expertise in using human speech, the findings will provide insights into the mechanisms of speech processing in the brain, i.e., the extent to which motor representations are involved in speech perception.
Supervisors: Magalie Ochs
Keywords: Human-machine interaction, artificial agents, affective computing, social signal processing.
Supervisors: Eric Castet
Efficiency of a virtual reality headset to improve reading in people with low vision.
People with low vision, in contrast to blind people, have not lost the entirety of their visual functions.
The leading cause of low vision in Western countries is AMD (Age-related Macular Degeneration), a degenerative, non-curable retinal disease occurring mostly after the age of 60. Recent projections estimate that the total number of people with AMD in Europe will be between 19 and 26 million by 2040.
The most important wish of people with AMD is to improve their ability to read by using their remaining functional vision.
Capitalizing on recent technological developments in virtual reality headsets, we have developed a VR reading platform (implemented on the Samsung Gear VR headset).
This platform provides a dynamic system that lets readers use augmented-vision tools specifically designed for reading (Aguilar & Castet, 2017), as well as text-simplification techniques currently being tested in our lab.
Our project will assess whether this reading platform can improve reading performance both quantitatively (reading speed, accuracy, ...) and qualitatively (comfort, stamina, ...).
Experiments will be performed in the ophthalmology department of the University Hospital of La Timone (Marseille).
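As a minimal illustration of the quantitative outcome measures mentioned above, the sketch below computes reading speed (words per minute) and accuracy for a single hypothetical trial. The trial structure and all values are invented for illustration; the platform's actual logging format is not described here.

```python
# Hypothetical sketch: quantitative reading-performance measures for one
# trial, as could be logged by a VR reading platform. All names and
# values are invented for illustration.
from dataclasses import dataclass

@dataclass
class ReadingTrial:
    n_words_presented: int    # words displayed on the virtual page
    n_words_correct: int      # words read aloud correctly
    duration_s: float         # time from text onset to completion (s)

    def reading_speed_wpm(self) -> float:
        # Standard words-per-minute measure based on correct words
        return self.n_words_correct / self.duration_s * 60.0

    def accuracy(self) -> float:
        return self.n_words_correct / self.n_words_presented

trial = ReadingTrial(n_words_presented=120, n_words_correct=113,
                     duration_s=95.0)
print(f"speed: {trial.reading_speed_wpm():.1f} wpm, "
      f"accuracy: {trial.accuracy():.0%}")
```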