"Interpreting machine learning? Why and how?"
Machine learning and deep neural networks have emerged as compelling models for simulating complex tasks in language, communication and brain sciences. But what do we really understand about these models and how they process information? As users, we often treat them as tools without precisely understanding their mechanistic and representational underpinnings. It is now crucial to engage with the interpretation of machine learning, but interpretation can mean different things: while a perception researcher might aim to understand how a convolutional network can be interpreted in terms of nonlinear filtering or brain-activation-like patterns, a language researcher might try to decipher the role and meaning of the recursive computations performed by transformer networks.
The ILCB bridges research in language, communication and brain sciences, all of which stand to benefit from the use of machine learning, with research in computer science and mathematics, which is directly concerned with machine learning as a topic in its own right.
Short talks:
1) Etienne Thoret – Deciphering the neuroacoustical bases of hearing by interpreting biomimetic deep neural networks
2) Philippe Blache – Is language processing incremental? A comparison between Transformer and RNN-based language models and their ability to model human language processing.
If you are interested in participating in this round table and presenting your research and your view on interpretability in machine learning (5–10 minutes max, followed by a discussion), please send us a title and a short abstract (5 lines) before January 29, 2021 at email@example.com