Ken Knoblauch has invited Léo Varnet from ISC to come and give a talk at SBRI on Thursday:
Auditory classification images: how noise can reveal the acoustic cues used in phoneme categorisation.
An essential step in understanding the processes underlying the general mechanism of perceptual categorization is to identify which portions of a physical stimulus modulate the responses of our perceptual system. In the context of speech comprehension, it is still unclear which acoustic information is used to categorize a speech stimulus as one phoneme or another. To date, there is no turnkey method for isolating speech cues from natural stimuli.
In this talk, I will present a psychoacoustic imaging method, inspired by recent theoretical developments in visual psychophysics, that allows experimenters to see directly where humans listen inside natural speech utterances. This “Auditory Classification Image” technique relies on a Generalized Linear Model with smoothness priors to link categorization errors in a speech-in-noise comprehension task to the trial-specific distribution of noise, resulting in a spectrotemporal map of the acoustic cues used to categorize phonemes.
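To make the idea concrete, here is a minimal sketch of the general approach, not Varnet's actual implementation: a simulated listener whose decisions are driven by a hidden acoustic "template" embedded in trial-by-trial noise, and a logistic GLM with a squared second-difference penalty (a simple stand-in for the smoothness prior) that recovers that template from the responses. All names, dimensions, and the simulated observer are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: each trial adds a Gaussian noise field (flattened here
# to a 1-D "spectrotemporal" vector of n_features bins) to the stimulus.
n_trials, n_features = 4000, 64

# Hidden template: the acoustic cue the simulated listener actually uses
# (one excitatory and one inhibitory region, purely for illustration).
t = np.linspace(0, 1, n_features)
template = np.exp(-((t - 0.3) ** 2) / 0.005) - np.exp(-((t - 0.7) ** 2) / 0.005)

noise = rng.normal(size=(n_trials, n_features))
# Simulated responses: a logistic observer driven by the noise-template match.
p = 1.0 / (1.0 + np.exp(-(noise @ template)))
y = (rng.random(n_trials) < p).astype(float)

# Second-difference operator D: penalizing ||D w||^2 favors smooth maps,
# a crude analogue of the smoothness prior in the real method.
D = np.diff(np.eye(n_features), n=2, axis=0)
P = D.T @ D

def fit_smooth_glm(X, y, lam=5.0, n_iter=500, lr=0.05):
    """Logistic GLM with a smoothness penalty, fitted by gradient descent
    on the penalized negative log-likelihood."""
    w = np.zeros(X.shape[1])
    for _ in range(n_iter):
        mu = 1.0 / (1.0 + np.exp(-(X @ w)))          # predicted P(response)
        grad = X.T @ (mu - y) / len(y) + lam * (P @ w) / len(y)
        w -= lr * grad
    return w

# The recovered weight map is the "classification image": it should
# correlate with the template the simulated listener actually used.
w_hat = fit_smooth_glm(noise, y)
r = np.corrcoef(w_hat, template)[0, 1]
print(f"correlation with true template: {r:.2f}")
```

In a real experiment the rows of `X` would be the actual noise fields presented on each trial and `y` the listener's categorization responses; the fitted weight map then shows which spectrotemporal regions pushed the decision one way or the other.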
Here I will demonstrate the effectiveness of this method through two examples: 1) a categorization task between the stimuli ‘aba’ and ‘ada’ in noise, and 2) a more complex experiment on phonetic context effects in speech. Finally, I will compare the Auditory Classification Images obtained from musicians and non-musicians during phoneme categorization.