ACLC Researcher Katerina Chládková has been awarded an NWO Rubicon grant
Piecing Auditory Cues Together: How We Learn to Integrate Cues in Speech Perception
When perceiving the surrounding world, humans attend to multiple pieces of information (individual properties of objects, or cues) and perceptually integrate them to identify familiar objects as a whole. For speech, this means that listeners integrate multiple auditory cues (e.g., duration, frequency) to comprehend the sounds of their language. Surprisingly, however, when humans encounter unfamiliar objects, they readily use a single cue to identify them but have difficulty using multiple cues.
The difficulty of relying on multiple cues for novel speech sound categories runs contrary to the widely attested integration of multiple cues in the perception of familiar ones. This project aims to resolve that apparent contradiction and find out how humans acquire the ability to integrate cues in speech perception. The goal is to identify the developmental trajectory of cue integration, determine which factors affect it, and reveal whether cue integration in speech is driven by general mechanisms of auditory learning. Adults will learn novel sounds while their neural activity is measured to uncover how the learning of cue integration proceeds, and the data will be modelled with artificial neural networks. The findings will contribute to a better understanding of the learning mechanisms that form a crucial part of human cognition.
