A major debate in cognitive neuroscience concerns whether brain asymmetry for speech and music emerges from differential sensitivity to acoustic cues or from domain-specific neural networks. This debate is closely tied to the question of the origins of hemispheric specialization. In the current project, we will test the hypothesis that acoustic cues drive hemispheric lateralization, and that this lateralization occurs independently of speech- or music-specific processes. To this end, we will combine a new approach for filtering specific portions of the acoustic signal (spectral or temporal modulations) with functional MRI (fMRI) recordings, while healthy participants listen to sounds whose acoustic parameters are tightly controlled to carry complementary information in the spectral and temporal modulation dimensions. Overall, we hypothesize that spectral modulations drive right-lateralized auditory responses, whereas temporal modulations drive left-lateralized activity, providing a fully domain-general account of auditory hemispheric asymmetry.
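The general idea of filtering spectral or temporal modulations can be sketched as follows: the magnitude spectrogram of a sound is taken into the modulation domain with a 2D Fourier transform, modulations above a cutoff along one axis are zeroed, and the result is inverted back to a waveform. This is a minimal illustrative sketch using SciPy, not the project's actual pipeline; the function name, the cutoff convention, and the phase-reuse step are assumptions made here for illustration.

```python
import numpy as np
from scipy.signal import stft, istft

def filter_modulations(signal, fs, axis="temporal", cutoff=0.5):
    """Low-pass the spectral or temporal modulations of a sound.

    Hypothetical sketch: zero modulation-domain components whose
    frequency (as a fraction of the maximum along the chosen axis)
    exceeds `cutoff`, then invert back to a waveform.
    """
    # Magnitude/phase spectrogram via the short-time Fourier transform
    f, t, Z = stft(signal, fs=fs, nperseg=256)
    mag, phase = np.abs(Z), np.angle(Z)

    # 2D FFT of the magnitude spectrogram: rows index spectral
    # modulations, columns index temporal modulations
    M = np.fft.fft2(mag)
    rows = np.fft.fftfreq(mag.shape[0])
    cols = np.fft.fftfreq(mag.shape[1])

    mask = np.ones_like(mag)
    if axis == "temporal":
        mask[:, np.abs(cols) > cutoff * np.abs(cols).max()] = 0.0
    else:  # "spectral"
        mask[np.abs(rows) > cutoff * np.abs(rows).max(), :] = 0.0

    # Invert the masked modulation spectrum; clip small negative
    # artifacts, since magnitudes must be nonnegative
    mag_filt = np.clip(np.real(np.fft.ifft2(M * mask)), 0.0, None)

    # Recombine with the original phase and resynthesize the waveform
    _, filtered = istft(mag_filt * np.exp(1j * phase), fs=fs, nperseg=256)
    return filtered

# Usage: degrade the temporal modulations of a synthetic 8 Hz
# amplitude-modulated tone while leaving its carrier intact
fs = 16000
tt = np.arange(fs) / fs
x = np.sin(2 * np.pi * 440 * tt) * (1 + 0.5 * np.sin(2 * np.pi * 8 * tt))
y = filter_modulations(x, fs, axis="temporal", cutoff=0.2)
```

In practice, published modulation-filtering methods operate on auditory (log-frequency) spectrograms and use iterative phase reconstruction rather than reusing the original phase; the sketch above only conveys the core idea of selectively removing one modulation dimension.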