Lexical information drives perceptual learning of distorted speech: evidence from the comprehension of noise-vocoded sentences |
| |
Authors: | Davis, Matthew H.; Johnsrude, Ingrid S.; Hervais-Adelman, Alexis; Taylor, Karen; McGettigan, Carolyn |
| |
Affiliation: | MRC Cognition and Brain Sciences Unit, Cambridge, UK. matt.davis@mrccbu.cbu.cam.ac.uk |
| |
Abstract: | Speech comprehension is resistant to acoustic distortion in the input, reflecting listeners' ability to adjust perceptual processes to match the speech input. For noise-vocoded sentences, a manipulation that removes spectral detail from speech, listeners' word report improved from near 0% to 70% correct over 30 sentences (Experiment 1). Learning was enhanced if listeners heard distorted sentences when they knew the identity of the undistorted target (Experiments 2 and 3). Learning was absent when listeners were trained with nonword sentences (Experiments 4 and 5), although the meaning of the training sentences did not affect learning (Experiment 5). Perceptual learning of noise-vocoded speech depends on higher-level information, consistent with top-down, lexically driven learning. Similar processes may facilitate comprehension of speech in an unfamiliar accent or following cochlear implantation. |
| |
Keywords: | |
|