Recent news

Magalie Ochs

Magalie Ochs is an Associate Professor of Computer Science at Aix-Marseille University, in the Laboratoire d’Informatique et des Systèmes (LIS). Since her master’s degree in Artificial Intelligence at the University of Montréal, her research has aimed at integrating social and emotional intelligence into social robots and virtual agents. She has worked in several national and international laboratories: University Paris 8, Orange Labs, University Paris 6 (LIP6), the National Institute of Informatics in Tokyo, and Télécom ParisTech. She has explored various computational methods and models to endow social robots and virtual characters with socio-emotional capabilities: perception, reasoning, and expression.

Taking an interdisciplinary approach, Magalie investigates human-human and human-machine corpora with machine learning methods to model and understand the socio-emotional dynamics of interaction. The application framework of her research is social skills training through interaction with social virtual agents, for instance training doctors to deliver bad news (ANR Acorformed), public speaking training (ANR REVITALISE), and training for gender equality (an AMPIRIC-funded project to reduce the threat of stereotypes, and a CNRS Innovation project to train against ordinary sexism).

Magalie Ochs leads the French community on Affect, Artificial Companion, and Interaction (ACAI), as well as several interdisciplinary research projects that aim to improve human-machine interaction with interactive humanoid systems.

Predicting Feedback Position and Type During Conversation

During natural interactions, listeners produce reactions in response to speakers, a phenomenon known as conversational feedback. Feedback depends on the main speaker’s production. Our model uses multimodal features to predict when feedback will occur during conversations, and of what type. Throughout a conversation, the model computes the feedback probability (the black curve on the top panel) and predicts feedback whenever this probability exceeds a given threshold (vertical red lines). Predicted feedback and observed feedback (blue areas) are then compared to assess model performance. By leveraging features such as morpho-syntax, prosody, and gestures in the speaker’s production, the model identifies the key moments that trigger listener responses. This computational model provides a precise description of the contexts in which feedback occurs.
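The thresholding step described above can be sketched in a few lines. This is a minimal illustration, not the authors’ actual pipeline: the function names, the toy probability curve, and the frame-based evaluation are all assumptions made for the example.

```python
import numpy as np

def predict_feedback(probs, threshold=0.5):
    """Return frame indices where the predicted feedback probability
    rises above the threshold (candidate feedback onsets)."""
    probs = np.asarray(probs)
    above = probs >= threshold
    # an onset is a frame where the curve crosses the threshold from below
    onsets = np.flatnonzero(above[1:] & ~above[:-1]) + 1
    return onsets

def match_rate(predicted, observed_spans, tolerance=0):
    """Fraction of predicted onsets that fall inside an observed
    feedback span (with an optional tolerance, in frames)."""
    hits = sum(
        any(start - tolerance <= p <= end + tolerance
            for start, end in observed_spans)
        for p in predicted
    )
    return hits / len(predicted) if len(predicted) else 0.0

# toy probability curve, with one observed feedback span (frames 3-5)
probs = [0.1, 0.2, 0.4, 0.8, 0.9, 0.6, 0.3, 0.2]
onsets = predict_feedback(probs, threshold=0.5)
print(onsets, match_rate(onsets, [(3, 5)]))  # → [3] 1.0
```

Comparing predicted onsets against observed spans in this way gives a simple precision-style score; the published model additionally predicts the feedback *type*, which this sketch omits.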

Auriane Boudin, Roxane Bertrand, Stéphane Rauzy, Magalie Ochs, and Philippe Blache
A Multimodal Model for Predicting Feedback Position and Type During Conversation
Speech Communication 159 (April 2024): 103066  —  @HAL

Arielle, the Guinea baboon

Arielle is a Guinea baboon who recently gave birth to a daughter named Uyu, “the one who answers the call” in Wolof. She, her daughter, and the rest of the colony are residents of the primatology centre in Rousset, where the CRPN’s Primate Behaviour and Cognition platform is located. Arielle voluntarily takes part in comparative cognition research using a fully automated, computerised system named the Automatic Learning Device for Monkeys, or ALDM.

Like all the members of her group, Arielle can access ALDM at any time from her enclosure. There, she is automatically recognised by the computer using an RFID system, and a controlling server assigns her a cognitive task to perform on a touchscreen. On each successful trial, she receives a small food reward; if she makes a mistake, a green screen appears. Every day, Arielle performs about 1500 trials, which takes about 2 hours of her time.
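The trial protocol just described (identify the subject, present a task, reward a correct answer or show the error screen) can be sketched as follows. This is purely illustrative: the function names, the toy matching task, and the outcome labels are assumptions, not the actual ALDM software.

```python
import random

def run_trial(subject_id, task, respond):
    """Present one trial of `task` to the identified subject and
    deliver the outcome: a food reward on success, a green
    time-out screen on error."""
    stimulus, correct = task()
    answer = respond(stimulus)
    if answer == correct:
        return "reward"        # small food pellet dispensed
    return "green_screen"      # error display, brief time-out

def simple_matching_task():
    """Toy match-to-sample task: the correct answer is the sample."""
    sample = random.choice(["circle", "square", "cross"])
    return sample, sample

# a perfectly accurate 'subject' that always touches the sample again
outcome = run_trial("RFID-0042", simple_matching_task,
                    respond=lambda stimulus: stimulus)
print(outcome)  # → reward
```

Because participation is self-initiated and subjects are identified automatically, a loop like this can run thousands of trials per day without any human intervention.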

Using this system, Arielle has taken part in numerous published studies on executive control, perception, attention, working memory, abstract reasoning, social cognition, and communication and language (see papers here or here). Through her daily routine, Arielle and the rest of her group help us understand how baboons think and what they can or cannot learn, progressively enriching our understanding of cognitive evolution.

(Photo Siham Bouziane)

Baby Baboon Brain Anatomy Predicts Which Hand They Will Use to Communicate

The planum temporale (PT) is a brain area essential for language in humans. In the majority of baboons, this area is larger in the left hemisphere than in the right (shown in red and green, respectively). Baby baboons with this early larger left-than-right PT asymmetry, and only these baboons, will develop a preference for gestural communication with the right hand once they have reached the appropriate age, as shown in red on the left of the graph.

Yannick Becker, Romane Phelipon, Damien Marie, Siham Bouziane, Rebecca Marchetti, Julien Sein, Lionel Velly, et al.
Planum Temporale Asymmetry in Newborn Monkeys Predicts the Future Development of Gestural Communication’s Handedness
Nature Communications 15 (1) (2024): 4791  —  @HAL