The ILCB Docs & Postdocs Group

When speaking or writing, or when interpreting oral or written utterances, we rely on a multifaceted competence and activate a multitude of heterogeneous processes that unfold over different time scales and are observable at different levels of analysis.

A Labex is a confederation of institutions with its own budget provided by the French National Research Agency (ANR). The overarching aim of the BLRI is to propose a general model of language processing and of its neural bases. Support for empirical research is provided by the engineers of the CREX, the BLRI centre for data analysis.

As a Labex, the BLRI devotes a substantial portion of its budget to supporting doctoral and post-doctoral research, through research grants to individual young researchers working in collaboration with researchers affiliated with the BLRI's partner institutions. This blog is organized by the doctoral students and post-doctoral researchers of the BLRI and aims to report on their research and their activities as a group.


Noemie Te Rietmolen
Post-doctoral fellow : Laboratoire Parole et Langage / Institut de Neurosciences des Systèmes

Research Project: Functional role of oscillatory dynamics in motor cortex during speech perception

Project Summary :
While our knowledge of the brain structures underpinning speech perception has greatly advanced in recent decades, the neurophysiological mechanisms that explain how humans process speech are still largely unknown.
In particular, influential theories about speech perception do not agree on the role of the motor system (Skipper et al., 2017): Dual-stream theories suggest that the motor system is not crucial (Hickok & Poeppel, 2007; Hickok, 2014), whereas opposing theories ascribe a fundamental role to the motor system in speech perception (Barnaud et al., 2018; Pulvermüller & Fadiga, 2010). A fruitful approach to understand the neural mechanisms underlying speech perception is to investigate cortical oscillations. Cortical oscillations refer to synchronized rhythmic brain activity, which is hypothesized to be important for structuring, binding and consolidating complex information in the cerebral cortex (e.g. Buzsáki & Draguhn, 2004). Given the intriguing possibility that cortical oscillations may offer a link between brain and behavior, in particular for higher-order cognitive processes such as language perception (e.g. Buzsáki, 2010), the current project sets out to investigate how brain oscillations in the motor cortex impact speech comprehension.
The objectives and hypotheses of this project are guided by current observations and proposals regarding the nature of cortical oscillations for (1) the extraction of speech sounds and (2) the extraction of meaning when perceiving language. With regard to speech sound processing (1), it has been suggested that cortical oscillations “provide the [temporal] infrastructure to parse and decode connected speech” (Giraud & Poeppel, 2012). At the level of the auditory cortex, low-frequency neural oscillations entrain to the (quasi-)rhythmic structure of the speech signal and causally contribute to speech comprehension (Peelle, 2018; Riecke et al., 2018; Zoefel et al., 2018). Moreover, neural entrainment to speech is also observed in regions beyond the auditory cortex, and in particular in the motor cortex, at the phrasal (0.6-1.3 Hz), lexical (1.3-3 Hz), and syllabic rates (3.5-4.5 Hz) (e.g. Keitel et al., 2018; Assaneo & Poeppel, 2018). One hypothesis is that such entrainment in these frequencies reflects temporal prediction derived from the temporal regularities presented in speech (e.g. Morillon & Baillet, 2017). In a similar vein, cortical oscillations related to the lexical meaning of spoken words (2) have been observed over auditory and motor cortices in the high-frequency range (beta and gamma; e.g. Pulvermüller et al., 1996; Canolty et al., 2007).
These large-scale synchronizations between fronto-central and superior temporal brain regions are hypothesized to reflect the binding of sensorimotor experiences into lexical categories (e.g. Strijkers, 2016; Garagnani et al., 2017). However, at present the functional role (if any) of such motor oscillatory activity in speech perception remains debated. In the current project, we set out to investigate the exact nature of oscillatory dynamics in the motor cortex for key components of speech perception (i.e. sound- and meaning-extraction) with two complementary studies, each containing a behavioral and a neurophysiological (magnetoencephalography; MEG) part. The behavioral experiments will assess whether activation of the motor cortex improves speech perception (and if so, under which conditions), and the MEG experiments will assess whether these potential behavioral improvements are indeed driven by frequency-specific cortical oscillations and enhanced functional coupling between motor and auditory cortical regions. In this manner, the results of this project may provide valuable insights for the theoretical development of sensorimotor integration during language processing and may even show that specific oscillatory patterns (different frequency ranges) drive different processes involved in the perception of speech.
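
To make the notion of frequency-specific coupling concrete, the sketch below estimates speech-brain coherence in one of the frequency bands mentioned above (the syllabic range). It is a minimal illustration in Python, not the project's analysis pipeline: the sampling rate, the simulated envelope and MEG time course, and the band limits are all assumptions chosen for the example.

```python
# Minimal sketch: frequency-specific coupling between a speech envelope and a
# motor-cortex MEG time course, estimated with magnitude-squared coherence.
# All signals, the sampling rate and the band limits are illustrative assumptions.
import numpy as np
from scipy.signal import coherence

fs = 1000.0                       # sampling rate in Hz (assumed)
t = np.arange(0, 60, 1 / fs)      # 60 s of simulated data

# Stand-ins for a real speech envelope and an MEG source time course
speech_env = np.abs(np.sin(2 * np.pi * 4.0 * t)) + 0.1 * np.random.randn(t.size)
meg_motor = np.roll(speech_env, 50) + 0.5 * np.random.randn(t.size)

# Magnitude-squared coherence as a function of frequency
f, coh = coherence(speech_env, meg_motor, fs=fs, nperseg=int(4 * fs))

# Average coherence inside the syllabic band (3.5-4.5 Hz) cited above
syllabic = (f >= 3.5) & (f <= 4.5)
print(f"Mean speech-brain coherence, syllabic band: {coh[syllabic].mean():.3f}")
```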

Curriculum Vitae : terietmolen_cv
Ladislas Nalborczyk

 

Post-doctoral fellow : Laboratoire de Psychologie Cognitive & Laboratoire de Neurosciences Cognitives
Research Project :
An investigation into the inhibitory mechanisms underlying covert verbal actions
Project Summary :
The main goal of this project is to tackle the problem of motor inhibition during covert speech and imagined typing, where covert speech is understood as the mental imagery of overt speech. Put simply, how can we imagine raising our arm without actually raising our arm? How can we imagine a conversation without actually producing it overtly? What are the cognitive and neural mechanisms that operate to prevent motor execution? How (where and when) are these mechanisms neurally implemented? Can we enhance or degrade these inhibitory mechanisms online? These questions, and the problem of motor inhibition more generally, emerge from the use of concepts such as simulation or emulation to explain the phenomenon of motor imagery. These views suggest that motor imagery, defined as the mental representation of an action without overt execution, results from the simulation or emulation of actual execution. However, this raises the question of how imagining an action can fail to lead to its actual execution. We will tackle these questions using novel behavioural paradigms and transcranial magnetic stimulation in a series of five experiments.
Curriculum Vitae : Ladislas Nalborczyk CV
Mitja Nikolaus
PhD student : Laboratoire Parole et Langage / Laboratoire d'Informatique et Systèmes

Research Project :

Development of Children's Communicative Coordination

Project Summary :
Research Lab: CoCoDev
The study of how the ability for coordinated communication emerges in development is both an exciting scientific frontier, at the heart of debates about the uniqueness of human cognition (Tomasello, 2014), and an important applied issue for AI (Antle, 2013).
Early signs of coordination (e.g., through gaze and smile) can be found in preverbal infants (Yale et al., 2003), but the ability to engage in coordinated verbal communication (Clark, H. & Brennan, 1991) takes years to mature.
Learning such coordination, especially with caregivers, is crucial for the child's healthy cognitive development (Hoff, 2006; Gelman, 2009).
Very few studies have examined the nature of children's communicative coordination and its development in the natural environment (that is, outside controlled laboratory studies).
Further, existing naturalistic studies (e.g., Clark, E. 2015), though insightful, have been based on anecdotal observations, leading to rather qualitative conclusions.
Thus, previous work did not provide a theoretical model that could explain, quantitatively, the naturally occurring data, let alone provide a basis for theory-informed applications. This project will contribute to filling this gap.
We will combine AI tools from NLP and computer vision to study the multimodal dynamics of children's communicative coordination with caregivers, laying the foundation for a data-driven model that would 1) provide us with a scientific understanding of the natural phenomena and 2) guide the design of child-computer interaction systems that can be used to test and evaluate the model.

References
Antle (2013). Research opportunities: Embodied child-computer interaction. International Journal of Child-Computer Interaction.
Clark, E. (2015) Common ground. The Handbook of Language Emergence.
Clark, H. & Brennan (1991). Grounding in communication. Perspectives on socially shared cognition.
Gelman (2009). Learning from others: Children's construction of concepts. Annual review of psychology.
Hoff (2006). How social contexts support and shape language development. Developmental Review.
Tomasello (2014). A natural history of human thinking. Cambridge, MA: Harvard University Press.
Yale, Messinger, Cobo-Lewis, & Delgado (2003). The temporal coordination of early infant communication. Developmental Psychology.

E-mail : mitja.nikolaus@univ-amu.fr
Curriculum Vitae : CV-MitjaNikolaus

Isaih Mohamed
PhD student : Institut de Neurosciences des Systèmes / Laboratoire de Phonétique et Phonologie

Research Project: Bridging communication in behavioural and neural dynamics

Project Summary :
The aim of this project is to bridge interpersonal verbal coordination and neural dynamics. In practice, we will collect neurophysiological data from individuals (mostly patients with intracranial recordings) performing different interactive language tasks. We will use natural language processing methods to estimate objective features of verbal coordination from speech/language signals. We will then use machine learning and information-theoretic approaches to relate the dynamics of coordinative verbal behaviour to spatio-temporal neural dynamics.
More precisely, we plan to use several tasks that have proven effective in the study of verbal interactions. Some tasks are rather constrained and controlled (allowing us to manipulate the coordinative dynamics), while others assess conversation in more natural conditions. Speech recordings allow coordination to be quantified at different linguistic levels in a time-resolved manner. These metrics can then be used to interpret changes in neural dynamics as a function of verbal coordination. We plan to use two complementary approaches: a machine learning approach (decoding the speech signal of the speaker based on the neural signal of the listener) and an information-theoretic approach (modelling to what extent the relation between neural signals and upcoming speech is influenced by the current level of coordination, estimated by convergence, for instance).
Overall, this project will provide a better understanding of the link between behavioural coordinative dynamics and neural dynamics. For instance, compared to simple coordinative dynamics, more demanding coordinative behaviour will probably require a change in the ratio between top-down and bottom-up connections between frontal and temporal regions in specific frequency bands (an increase in top-down beta and a decrease in bottom-up gamma).
The strength of this project is to merge sophisticated coordination designs, advanced analysis of verbal coordination dynamics and cutting-edge neuroscience tools with unique neural data in humans.
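
As a rough illustration of the decoding approach described above, the sketch below predicts a (simulated) speaker's speech envelope from a (simulated) listener's neural signals with a cross-validated linear model. The data shapes, the ridge regularisation and the scoring metric are assumptions chosen for the example, not the project's actual pipeline.

```python
# Minimal sketch of the decoding idea: predict the speaker's speech envelope
# from the listener's neural signals with a regularised linear model.
# Shapes, regularisation strength and the toy data are illustrative assumptions.
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import cross_val_score

n_samples, n_channels = 5000, 64
rng = np.random.default_rng(0)

neural = rng.standard_normal((n_samples, n_channels))            # listener's neural signals
weights = rng.standard_normal(n_channels)
speech_env = neural @ weights + rng.standard_normal(n_samples)   # speaker's envelope (simulated)

# Cross-validated decoding score (R^2): how well neural activity predicts speech
scores = cross_val_score(Ridge(alpha=1.0), neural, speech_env, cv=5, scoring="r2")
print(f"Decoding R^2: {scores.mean():.2f} +/- {scores.std():.2f}")
```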

Curriculum Vitae : CV - Isaïh
Régis Mancini
PhD student : Laboratoire de Psychologie Cognitive / Laboratoire de Neurosciences Cognitives

Research Project: Opening a window onto readers' minds: using TMS and EEG to determine the cortical network involved in oculomotor behaviour during reading

Project Summary :

Eye movements during reading have been studied for more than a century, revealing a very stereotyped behaviour despite substantial variability in saccade amplitudes and fixation positions on the lines of text. Most of the models proposed to account for this behaviour are based on cognitive guidance of the gaze, and therefore presuppose an essentially top-down control. These top-down models are nevertheless contradicted by the recent finding that an "illiterate" model of saccade programming in the superior colliculus, a multi-integrative subcortical structure, predicts the oculomotor behaviour of readers fairly accurately from early visual processing performed as early as the retina (luminance contrast). This result suggests, on the contrary, a secondary role of the neocortex in oculomotor control during reading.
The thesis aims, on the one hand, to characterize the cortical network involved in oculomotor control during reading and, on the other hand, to determine the temporal dynamics of activation of these different cortical areas. This research will first rely on transcranial magnetic stimulation (TMS), which makes it possible to transiently inactivate a given cortical area in healthy participants, combined with the recording of eye movements during a sentence-reading task. The effect of inactivating a given cortical area on the classically observed oculomotor behaviours would thus indicate its involvement in reading. In a second step, the TMS studies will be complemented by an approach based on electroencephalographic (EEG) recordings.

Curriculum Vitae : CV_R_gis__Copy_
Chiara Mazzocconi
Post-doctoral fellow : Laboratoire Parole et Langage

Research Project: Growing and learning with laughter

Project Summary :
Laughter, emerging around 3 months of age, is one of the earliest means an infant has to convey meaning, practise turn-taking, share attention, direct others' attention, and contribute to the interaction on a par with an adult. Through development its use becomes more and more sophisticated, from both a semantic and a pragmatic perspective, becoming closely entangled with language production to convey meaning multimodally. Laughter, whether or not it occurs in relation to humour, can give us important insights into the child's cognitive, linguistic and pragmatic development at different levels of observation. Nevertheless, there is a dearth of research on the development of laughter, especially in interaction.
The goal of my current project is to characterize the longitudinal development of laughter, both in itself and in relation to speech, language, humour and pragmatic abilities.
I will conduct longitudinal cross-linguistic corpus studies investigating laughter behaviour in child-caregiver interactions. I will compare the development of laughter behaviour in children with atypical language or pragmatic development with that of typically developing children during the first years of life. The main aim is to test the hypothesis that laughter can be an early biomarker of language and pragmatic development.
In addition to corpus studies, I will run experiments to deepen our understanding of laughter perception and interpretation in typical development and in children on the autism spectrum, and to shed light on the role of laughter in pragmatic reasoning and non-literal meaning processing, with a special interest in irony.

Curriculum Vitae : cv_ChiaraMazzocconi_2020.11
Birgit Rauchebauer
Post-Doctoral LNC / LPL
Elliot Huggett
PhD student LPL / LPC
Clément Verrier
PhD student, Institut de Mathématiques de Marseille / Institut de Neurosciences des Systèmes

Research Project :

Wavelet-based multidimensional characterization of brain networks in language tasks

Project Summary :

Brain function involves complex interactions between cortical areas at different spatial and temporal scales. The spatio-temporal definition of brain networks is therefore one of the main current challenges in neuroscience. With this objective in view, electrophysiological techniques such as electroencephalography (EEG) and magnetoencephalography (MEG) offer a fine temporal resolution that allows fast changes (at the millisecond level) to be captured across a wide range of frequencies (up to 100 Hz).
However, the spatial aspects require solving a difficult (extremely ill-posed) inverse problem that projects the signals recorded at the surface sensors onto the cortex. Current techniques for extracting spatio-temporal networks in MEG and EEG suffer from the inherent difficulties of solving this inverse problem. We propose to use a novel wavelet analysis approach to improve the extraction of language networks from MEG signals. The methods will be validated using simultaneous MEG-intracerebral EEG recordings. More precisely, the objective is to develop algorithms and data-analysis procedures for the spatio-temporal characterization of brain networks across multiple frequencies in EEG and MEG signals, to validate them on simulated and real signals, and to apply the resulting methodology to language protocols in the framework of the ILCB.
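
As a minimal illustration of the wavelet approach, the sketch below computes a time-frequency decomposition of a single simulated MEG/EEG channel with hand-built complex Morlet wavelets. The sampling rate, the frequencies of interest and the number of cycles are assumptions for the example and do not reflect the algorithms to be developed in the project.

```python
# Minimal sketch of a wavelet-based time-frequency decomposition of one channel,
# using complex Morlet wavelets built by hand with NumPy.
# Sampling rate, frequencies and the toy signal are illustrative assumptions.
import numpy as np

fs = 250.0                          # sampling rate in Hz (assumed)
t = np.arange(0, 4, 1 / fs)
signal = np.sin(2 * np.pi * 10 * t) + 0.5 * np.random.randn(t.size)  # toy channel

freqs = np.arange(2, 40, 2)         # frequencies of interest in Hz
n_cycles = 5                        # wavelet width (time/frequency resolution trade-off)
power = np.empty((freqs.size, t.size))

for i, f in enumerate(freqs):
    sigma_t = n_cycles / (2 * np.pi * f)               # temporal std of the Gaussian envelope
    wt = np.arange(-3 * sigma_t, 3 * sigma_t, 1 / fs)  # wavelet support
    wavelet = np.exp(2j * np.pi * f * wt) * np.exp(-wt**2 / (2 * sigma_t**2))
    wavelet /= np.sqrt(np.sum(np.abs(wavelet) ** 2))   # unit-energy normalisation
    conv = np.convolve(signal, wavelet, mode="same")   # complex wavelet coefficients
    power[i] = np.abs(conv) ** 2                       # time-resolved power at frequency f

print(power.shape)  # (n_frequencies, n_times)
```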
Curriculum Vitae : CV_Clement Verrier
Shuai Wang
Post-Doctoral fellow, Laboratoire Parole et Langage

Research Project :

Multimodal study of functional organization of the Visual Word Form Area and its communication with the spoken language system

Project Summary :

At ILCB, the goal of my research is to investigate 1) the fine-scale spatial organization of functionally segregated neuronal populations within the visual word form area (VWFA) and 2) the activation time course of the VWFA in response to speech as well as the temporal dynamics of the communication between this area and the spoken language network. Broadly, I’m interested in the functional integration and segregation of the language system.

Curriculum Vitae : CV_ShuaiWang
Nicole Voges
Post-Doctoral fellow, Institut de Neurosciences de la Timone / Laboratoire de Neurosciences Cognitives

Research Project :

Information dynamics metrics track the emergence of cognitive information processing from neural circuit dynamics

Project Summary :

Cognitive function arises from the coordinated activity of neural populations distributed over large-scale brain networks. However, it is challenging to understand how specific aspects of neural dynamics translate into operations of information processing and, ultimately, cognitive functions. To address this question, we combine novel approaches from information theory with computational simulations of canonical neural circuits emulating well-defined cognitive functions. Specifically, we simulate circuits composed of one or multiple brain areas, each modeled as a 1D ring network of simple rate units. Despite its simplicity, such a model can give rise to rich neuronal dynamics [1]. These models can be used to reproduce functions such as bottom-up transfer of stimuli, working memory and even top-down attentional modulation [2].
We then apply recent tools from the Information Dynamics framework to the simulated data. Information Dynamics is a novel theoretical approach that formalizes the decomposition of generic information processing into “primitive” operations of active storage, transfer and modification of information [3]. In particular, we analyze simulated recordings from our models, quantifying how their nonlinear dynamics implement specific mixes of these primitive processing operations, which vary depending on the emulated cognitive function. For instance, we show that the neuronal subsets maintaining sensory representations in working memory (via reverberant self-sustained activity) can be revealed by high values of the active information storage metric. Likewise, the integration of top-down signals (mediated by nonlinear interactions between active sub-populations) is detected by increased values of information modification.
Our models thus highlight transparently the capacity of information dynamics metrics to characterize which network units participate in cognition-related information processing, and how they do it. Such capability can be exploited for the analysis of actual human MEG datasets.
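
As a toy illustration of one of these metrics, the sketch below estimates a one-step active information storage for a strongly autocorrelated ("reverberant") unit, using a crude histogram-based mutual-information estimator. The estimator, the history length and the simulated signal are all assumptions for the example, far simpler than the tools of [3].

```python
# Minimal sketch of active information storage: the mutual information between a
# unit's present activity and its own recent past, here with a one-step history
# and a binned estimator on a toy autoregressive signal (illustrative assumptions).
import numpy as np

def mutual_info(x, y, bins=8):
    """Histogram-based mutual information I(X;Y) in bits."""
    pxy, _, _ = np.histogram2d(x, y, bins=bins)
    pxy = pxy / pxy.sum()
    px = pxy.sum(axis=1, keepdims=True)
    py = pxy.sum(axis=0, keepdims=True)
    nz = pxy > 0
    return float(np.sum(pxy[nz] * np.log2(pxy[nz] / (px @ py)[nz])))

rng = np.random.default_rng(1)
# Toy "reverberant" unit: strongly autocorrelated, so its past predicts its present
x = np.zeros(10000)
for n in range(1, x.size):
    x[n] = 0.9 * x[n - 1] + rng.standard_normal()

ais = mutual_info(x[1:], x[:-1])   # I(present; past), one-step history
print(f"Active information storage (1-step, binned): {ais:.2f} bits")
```
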
References
1. Roxin, A., Brunel, N., Hansel, D. (2005). Physical Review Letters 94(23), 238103
2. Ardid, S., Wang, X., Compte, A. (2007). J Neurosci 27(32), 222
3. Wibral, M., Priesemann, V., Kay, J., Lizier, J., Phillips, W. (2017). Brain and cognition 112, 25

Curriculum Vitae : cvNVoges2020description
Etienne Thoret
Post-Doctoral fellow, Perception Représentations Image Son Musique / Laboratoire d'Informatique et Systèmes

Research Project :

Breaking the acoustical code of the brain by interpreting machine hearing

Project Summary :

I'm a sound and hearing researcher interested in deciphering the neurocomputational bases of audition. My research combines advanced mathematical modeling of sound signals with statistical learning techniques, behavioral testing and neuro-inspired techniques in order to understand how these processes guide human communication and behaviour.

I'm currently a post-doc shared between the Perception, Representation, Image, Sound, Music lab (PRISM) and the Laboratoire d'Informatique & Systèmes (LIS) in Marseille, through the Institute of Language, Communication and the Brain (ILCB) of Aix-Marseille University. I'm advised by Richard Kronland-Martinet (PRISM) and Valentin Emiya & Stéphane Ayache (LIS).

I'm grateful to have been advised by Daniel Pressnitzer & Christian Lorenzi at the Ecole Normale Supérieure de Paris, Stephen McAdams & Philippe Depalle at McGill University in Montreal, and Sølvi Ystad & Mitsuko Aramaki at the CNRS Mechanics and Acoustics Lab in Marseille.

Selected publications :

  • Thoret, E., Andrillon, T., Leger, D., Pressnitzer, D. (2020) Probing machine-learning classifiers using noise, bubbles, and reverse correlation, bioRxiv 2020.06.22.165688, 10.1101/2020.06.22.165688
  • Thoret, E., Caramiaux, B., Depalle, P., McAdams, S. (In press) Learning metrics on spectrotemporal modulations reveals the perception of musical instrument timbre, Nature Human Behaviour. 10.1038/s41562-020-00987-5
  • Thoret, E., Depalle, P., McAdams, S. (2016) Perceptually salient spectro-temporal modulations for recognition of sustained musical instruments. The Journal of the Acoustical Society of America, 140(6), EL478-EL483. 10.1121/1.4971204
  • Thoret, E., Aramaki, M., Kronland-Martinet, R., Velay, J. L., Ystad, S. (2014) From Sound to Shape: Auditory Perception of Drawing Movements, Journal of Experimental Psychology: Human Perception and Performance, 40(3), 983-994.
  • Thoret, E., Aramaki, M., Bringoux L., Ystad S., Kronland-Martinet R. (2016) Seeing circles and drawing ellipses: when sound biases reproduction of visual motion. PLoS one, 11(4):e0154475. 10.1371/journal.pone.0154475
Curriculum Vitae : CV.THORET
Snežana Todorović
PhD student LPL
Shinji Saget
PhD student INT / LIS
Kep Kee Loh
Post-Doctoral Fellow, Institut de Neurosciences de la Timone & Laboratoire de Psychologie Cognitive

Research Project :

Nested cortical sulci organisation models for human and non-human primate inter-species comparisons

Project Summary :

Inter-species comparisons of brain organization between human and non-human primates can provide insights into how uniquely human abilities, such as speech and language, emerged through primate brain evolution. While brain organization can be described in many ways, we focus primarily on cortical folding patterns, or sulci, which are critical landmarks that are strongly tied to the functional and histological features of the brain.

The first goal of my project is to construct the first cortical sulci models that describe the organisation of brain folding patterns (sulci) in four primate species: macaques, baboons, chimpanzees and humans. On the basis of common/homologous sulci, these models allow the registration of individual brains, both within the same species and across species, for brain comparisons. The second goal of my project is then to apply these models to study how the primate vocal-control brain network has changed across the four primate species, in order to understand how speech and language areas emerged in the human brain.

Curriculum Vitae : KKLOH_CV_
Raphaël Fargier
Post-Doctoral LNC / LPL
Akrem Sellami
Post-Doctoral INT / LIS
Axel Barrault
PhD student LPL
Tom Dagens
PhD student INT / LIF
Piera Filippi
Post-Doctoral LPC / LPL
Anders Royce
Post-Doctoral LPC
Alexia Fasola
PhD student LPC / INS
Mathieu Riou
PhD student LIA / INT
Leonardo Lancia
Post-Doctoral Fellow, Institut de Neurosciences de la Timone & Laboratoire Parole et Langage

Research Project :

Nature and function of the physiological coordination between speakers involved in linguistic interactions

Project Summary :

In this project we study how participants coordinate their sensory-motor activities during linguistic interactions and conversations. We will test the hypothesis that participants in linguistic interactions react to changes in the behavior of their interlocutors at many levels of physiological activity, so that their behavior becomes automatically coordinated, as happens for the subsystems of a complex dynamical system. It has been proposed that this kind of coordinated behavior brings the participants in a conversation into aligned cognitive states, favoring the buildup of a shared common ground and contributing to mutual understanding. In order to understand how coordinative relations between sensory-motor processes across interlocutors help structure conversational interactions, we will i) quantify the amount of information exchanged at the physiological level between speakers during coordinated activity; and ii) study whether and how physiological coordination affects the coordination of their speech acts and makes communication more efficient.

General background
Research on speech communication increasingly focuses on how speakers interact with each other in conversational exchanges. Investigations in this domain have been made possible by recent advances in a large variety of disciplines such as neurophysiology (e.g.: Decety and Chaminade, 2005), social cognition (e.g.: Frith & Frith, 2012) and social neuroscience (e.g.: Sänger et al., 2011). The overarching goal of this project is to explore the relationships between low-level sensory-motor coordination across speakers, on the one hand, and high-level conversational patterns, on the other, in verbal interactions. To this aim, we will first test whether inter-speaker coordination in laboratory tasks is achieved through coordinative relations linking physiological processes across speakers. We will then relate the behavior of speakers to the behavior of theoretical models of coordination by studying the coordination between real speakers and virtual agents. Finally, we will generalize our findings to more natural conversation tasks. This work will be conducted in the framework of a collaboration between the LPL (Noël Nguyen, Laurent Prévot) and the INT (Thierry Chaminade).

Material and methods. This project is based on the analysis of the interactions between various kinds of physiological (respiratory activity; fMRI neuro-imaging) and physical (vocal tract movements, head and body movements; amplitude modulations of the acoustic signals at different time scales) signals and linguistic observables (speech errors, conversational turn boundaries, turn construction units, prosodic boundaries and task-dependent indexes of communicative efficiency). Data collected during this project on the experimental platforms of the BLRI will be integrated with data collected and annotated by our collaborators (simultaneously recorded EMA articulatory data from two speakers, acoustic and breathing-cycle data from semi-spontaneous conversations, and acoustic and video conversational speech data from Map Task sessions). A major feature of this project is that between-speaker coordinative patterns will be characterized using methods originally developed for modeling the behavior of coupled dynamical systems. Some of these methods, such as Joint Recurrence Analysis (Zou et al. 2011) and Convergent Cross Mapping (Sugihara et al. 2012), make it possible to detect dependencies between natural systems as heterogeneous as those modeling the behavior of different speakers. In recent work we have proposed a variant of Recurrence Analysis that detects repeated patterns even in time series produced by processes with strong temporal non-stationarity, such as those observed in speech and goal-oriented behavior (e.g.: Lancia & Tiede, 2012; Lancia, Fuchs and Tiede, 2014). This work paves the way for a general application to speech of both Joint Recurrence Analysis and Convergent Cross Mapping, as both of these methods rely on the identification of repeated patterns.
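
As a rough illustration of the recurrence-based logic, the sketch below builds recurrence matrices for two toy behavioural signals and combines them into a joint recurrence matrix, whose density serves as a simple index of coordination. The toy signals, the distance threshold and the absence of state-space embedding are simplifying assumptions for the example, not the methods cited above.

```python
# Minimal sketch of joint recurrence: R[i, j] marks times at which a system
# revisits a past state; the element-wise product of two recurrence matrices
# marks times at which both systems do so simultaneously.
# Toy signals and the threshold are illustrative assumptions.
import numpy as np

def recurrence_matrix(x, radius):
    """Binary recurrence matrix: R[i, j] = 1 if |x[i] - x[j]| < radius."""
    d = np.abs(x[:, None] - x[None, :])
    return (d < radius).astype(int)

t = np.linspace(0, 20, 500)
speaker_a = np.sin(2 * np.pi * 0.5 * t)                           # toy behavioural signal
speaker_b = np.sin(2 * np.pi * 0.5 * t + 0.3) + 0.1 * np.random.randn(t.size)

ra = recurrence_matrix(speaker_a, radius=0.1)
rb = recurrence_matrix(speaker_b, radius=0.1)
jr = ra * rb                                                      # joint recurrence matrix

# Joint recurrence rate: a simple index of coordination between the two systems
print(f"Joint recurrence rate: {jr.mean():.3f}")
```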

Anna Marczyk
Post-Doctoral LPL
Joshua Snell
PhD student LPC
Clementine Bodin
PhD student INT / LSIS
Michele Scaltritti
Post-Doctoral LPC / LNC
Mathieu Declerck
Post-Doctoral LPC
Nuria Esteve-Gibert
Post-Doctoral LPL
Andrea Valente
Post-Doctoral LPC
Jean-Baptiste Bernard
Post-Doctoral LPC / LIS / LPL
Anna Elisabeth Beyersmann
Post-Doctoral LPC / LPL
Ambre Denis-Noël
PhD student LPL / LPC
Eva Dittinger
PhD student LPL / LNC
Amandine Michelas-Poggioli
Post-Doctoral LPL / INS
Veronica Montani
Post-Doctoral LPC
Aurélie Lagarrigue
Post-Doctoral LPL / LNC
Jorane Saubesty
PhD student LPL / CRVM
Imed Laaridh
PhD student LIA / LPL
Caralyn Kemp
Post-Doctoral LPL / LPC
Jérémy Danna
Post-Doctoral LPL / LNC