Support for context effects on segmentation and segments depends on the context

Authors: Christopher C. Heffner, Rochelle S. Newman, William J. Idsardi

Affiliations: 1. Program in Neuroscience and Cognitive Science, University of Maryland, College Park, USA; 2. Department of Hearing and Speech Sciences, University of Maryland, College Park, USA; 3. Department of Linguistics, University of Maryland, College Park, USA
Abstract: Listeners must adapt to differences in speech rate across talkers and situations. Speech rate adaptation effects are strong for adjacent syllables (i.e., proximal syllables). In studies that have assessed the effects of speech rate information more than one syllable removed from a point of ambiguity in speech (i.e., distal syllables), the difference in strength between different types of ambiguity is stark. Studies of word segmentation have shown large shifts in perception as a result of distal rate manipulations, whereas studies of segmental perception have shown only weak, or even nonexistent, effects. However, no study has standardized methods and materials to examine context effects for both types of ambiguity simultaneously. Here, a set of sentences was created that differed as minimally as possible except for whether they were ambiguous with respect to the voicing of a consonant or to the location of a word boundary. The sentences were then rate-modified to slow the distal context speech rate to varying extents, according to three different definitions of distal context adapted from previous experiments, along with a manipulation of proximal context to assess whether proximal effects were comparable across ambiguity types. The results indicate that the definition of distal context strongly influenced the extent of distal rate effects for both segments and segmentation. They also establish the presence of distal rate effects on word-final segments for the first time. These results were replicated, with some caveats regarding the perception of individual segments, in an Internet-based sample recruited from Mechanical Turk.

Keywords: