
Interpreting machine learning in hearing, communication and language sciences: why, how, and the current challenges

February 11 @ 12:00 - 14:30

Here is the link to the event: https://univ-amu-fr.zoom.us/j/99484117315?pwd=d3gyYmlDa2QycUpET3pxZ2x0LzNVQT09

“Interpreting machine learning? Why and how?”

In the context of a cycle of talks organized by the ILCB post-docs, we are organising a talk/round table on interpretability on February 11th (online).

This event is intended to be informal and aims to provide a place for discussing our points of view and/or needs regarding interpretability, in the context of language and hearing sciences but not only; all contributions and points of view are very welcome.
Please find below the original call with an updated program.
Machine learning and deep neural networks have emerged as compelling models for simulating complex tasks in language, communication and brain sciences. But what do we really understand about these models and how they process information? As users, we often treat them as tools without precisely understanding their mechanistic and representational underpinnings. Interpreting machine learning has now become crucial, but interpretation can mean different things: while a perception researcher might aim to understand how a convolutional network can be interpreted in terms of nonlinear filtering or brain-activation-like patterns, a language researcher might try to decipher the role and meaning of the recursive computations performed by transformer networks.
The ILCB bridges research in language, communication and brain sciences, all of which stand to benefit from the use of machine learning, with research in computer science and mathematics, which is directly concerned with machine learning as a topic in its own right.
Program:
12:00-12:30 – Etienne Thoret (post-doc ILCB, PRISM, LIS) – Deciphering the acoustical bases of hearing by interpreting biomimetic deep neural networks (20 min + 10 min)
12:30-13:00 – Philippe Blache (LPL) – Is language processing incremental? A comparison between Transformer- and RNN-based language models and their ability to model human language processing (20 min + 10 min)
13:00-13:30 – Ronan Sicre (LIS) – Visual interpretability of deep neural networks: a brief overview (20 min + 10 min)
13:30-13:45 – Adrià Torrens (University of Ostrava) – Building a grammar for gradient linguistic evaluative expressions: do machine learning, neural networks, and deep learning help? (10 min + 5 min)
13:45-14:30 – Discussion (45 minutes)

Etienne Thoret



