Similar literature
 20 similar records found (search time: 15 ms)
1.
Trading relations show that diverse acoustic consequences of minimal contrasts in speech are equivalent in perception of phonetic categories. This perceptual equivalence received stronger support from a recent finding that discrimination was differentially affected by the phonetic cooperation or conflict between two cues for the /slIt/-/splIt/ contrast. Experiment 1 extended the trading relations and perceptual equivalence findings to the /sei/-/stei/ contrast. With a more sensitive discrimination test, Experiment 2 found that cue equivalence is a characteristic of perceptual sensitivity to phonetic information. Using “sine-wave analogues” of the /sei/-/stei/ stimuli, Experiment 3 showed that perceptual integration of the cues was phonetic, not psychoacoustic, in origin. Only subjects who perceived the sine-wave stimuli as “say” and “stay” showed a trading relation and perceptual equivalence; subjects who perceived them as nonspeech failed to integrate the two dimensions perceptually. Moreover, the pattern of differences between obtained and predicted discrimination was quite similar across the first two experiments and the “say”-“stay” group of Experiment 3, and suggested that phonetic perception was responsible even for better-than-predicted performance by these groups. Trading relations between speech cues, and the perceptual equivalence that underlies them, thus appear to derive specifically from perception of phonetic information.
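The comparison of obtained with predicted discrimination in studies like this one rests on deriving expected discrimination from identification functions. A minimal sketch, assuming the classic Haskins-model ABX prediction and purely illustrative identification probabilities (none of these numbers come from the paper):

```python
# Haskins-model prediction: for an ABX trial with stimuli A and B whose
# identification probabilities (e.g., P("stay")) are p_a and p_b, predicted
# proportion correct is (1 + (p_a - p_b)^2) / 2, with 0.5 = chance.

def predicted_abx(p_a: float, p_b: float) -> float:
    """Predicted ABX proportion correct from identification probabilities."""
    return 0.5 * (1.0 + (p_a - p_b) ** 2)

# Hypothetical identification function along a /sei/-/stei/ continuum.
ident = [0.05, 0.15, 0.50, 0.85, 0.95]

# Predicted discrimination for adjacent one-step pairs: peaks at the
# category boundary, stays near chance within categories.
predicted = [predicted_abx(a, b) for a, b in zip(ident, ident[1:])]
print(predicted)
```

Obtained discrimination exceeding these predictions, as reported above, is what the phrase "better-than-predicted performance" refers to.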

5.
A previous study (Ackermann, Gräber, Hertrich, & Daum, 1997) reported impaired phoneme identification in cerebellar disorders, provided that categorization depended on temporal cues. In order to further clarify the underlying mechanism of the observed deficit, the present study performed a discrimination and identification task in cerebellar patients using two-tone sequences of variable pause length. Cerebellar dysfunctions were found to compromise the discrimination of time intervals extending in duration from 10 to 150 ms, a range covering the length of acoustic speech segments. In contrast, categorization of the same stimuli as a "short" or "long pause" turned out to be unimpaired. These findings, along with the data of the previous investigation, indicate, first, that the cerebellum participates in the perceptual processing of speech and nonspeech stimuli and, second, that this organ might act as a back-up mechanism, extending the storage capacities of the "auditory analyzer" extracting temporal cues from acoustic signals.

6.
Developmental research reporting electrophysiological correlates of voice onset time (VOT) during speech perception is reviewed. By two months of age a right hemisphere mechanism appears which differentiates voiced from voiceless stop consonants. This mechanism was found at 4 years of age and again with adults. A new study is described which represents an attempt to determine a more specific basis for VOT perception. Auditory evoked responses (AER) were recorded over the left and right hemispheres while 16 adults attended to repetitive series of two-tone stimuli. Portions of the AERs were found to vary systematically over the two hemispheres in a manner similar to that previously reported for VOT stimuli. These findings are discussed in terms of a temporal detection mechanism which is involved in speech perception.

7.
Phonetic segments are coarticulated in speech. Accordingly, the articulatory and acoustic properties of the speech signal during the time frame traditionally identified with a given phoneme are highly context-sensitive. For example, due to carryover coarticulation, the front tongue-tip position for /l/ results in more fronted tongue-body contact for a /g/ preceded by /l/ than for a /g/ preceded by /r/. Perception by mature listeners shows a complementary sensitivity--when a synthetic /da/-/ga/ continuum is preceded by either /al/ or /ar/, adults hear more /g/s following /l/ than following /r/. That is, some of the fronting information in the temporal domain of the stop is perceptually attributed to /l/ (Mann, 1980). We replicated this finding and extended it to a signal-detection test of discrimination with adults, using triads of disyllables. Three equidistant items from a /da/-/ga/ continuum were used, preceded by /al/ and /ar/. In the identification test, adults had identified item ga5 as "ga" and da1 as "da" following both /al/ and /ar/, whereas they identified the crucial item d/ga3 predominantly as "ga" after /al/ but as "da" after /ar/. In the discrimination test, they discriminated d/ga3 from da1 preceded by /al/ but not /ar/; compatibly, they discriminated d/ga3 readily from ga5 preceded by /ar/ but poorly preceded by /al/. We obtained similar results with 4-month-old infants. Following habituation to either ald/ga3 or ard/ga3, infants heard either the corresponding ga5 or da1 disyllable. As predicted, the infants discriminated d/ga3 from da1 following /al/ but not /ar/; conversely, they discriminated d/ga3 from ga5 following /ar/ but not /al/. The results suggest that prelinguistic infants disentangle consonant-consonant coarticulatory influences in speech in an adult-like fashion.

8.
Dijker, A. J. (2008). Cognition, 106(3), 1109-1125.
In order to examine the relative influence of size-based expectancies and social cues on the perceived weight of objects, two studies were performed, using dolls of equal weight differing in sex-related and age-related vulnerability or physical strength cues. To increase variation in perceived size, stimulus objects were viewed through optical lenses of varying reducing power. Different groups of participants were required to provide magnitude estimates of perceived size, physical strength, or weight, or of expected weight. A size-weight illusion (SWI) was demonstrated, such that smaller objects felt heavier than larger ones, that was entirely accounted for by the mediating role of expected weight. Yet, perceived physical strength exerted an additional and more reactive influence on perceived weight independently of measured expectancies. Results are used to clarify the nature of "embodied", internal sensory-motor representations of physical and social properties.

9.
Differential vocal emphasis in the tape-recorded instruction reading for a standard person perception task was manipulated by mechanically raising or lowering the volume of the key words describing the success or failure response alternatives on the rating scale. In a series of three experiments, Ss exposed to success emphasis in the instructions rated the stimulus persons as more successful than did Ss exposed to failure emphasis. This trend was reversed for Ss who listened twice to the instructions. None of the Ss reported awareness of the influence attempt.

10.
Results are reported for an experiment which examined the influence of listener perception of speaker intention on sentence recognition. Given the same passage and recognition sentences, subjects displayed different false recognition patterns of test items depending on which of two speakers with opposing viewpoints the passage was attributed to. It is argued that the reconstructive process of memory is based on information from the context (e.g., the speaker's perceived intentions) as well as on the actual words used. Retention of different aspects of a message is seen to rely on information from different sources. Specifically, the results of the study indicate that retention of meaning involving the speaker's predictions, opinions, etc., is influenced by the listener's perception of the speaker.

11.
The effects of variation in a speaker's voice and temporal phoneme location were assessed through a series of speeded classification experiments. Listeners monitored speech syllables for target consonants or vowels. The results showed that speaker variability and phoneme-location variability had detrimental effects on classification latencies for target sounds. In addition, an interaction between variables showed that the speaker variability effect was obtained only when temporal phoneme location was fixed across trials. A subadditive decrement in latencies produced by the interaction of the two variables was also obtained, suggesting that perceptual loads may not affect perceptual adjustments to a speaker's voice in the same way that memory loads do.

13.
The present study simultaneously assessed the relative contributions of feedback indicative of comprehension and the apparent age of the listener, either an adult or a doll which resembled a toddler, in a 2 (listeners) × 2 (types of feedback, C = comprehension, NC = noncomprehension) design. Two groups of children, a 3-year-old (N = 13, 7 boys, 6 girls) and a 5-year-old group (N = 12, 6 boys, 6 girls) were asked to tell stories to both the adult and doll in both C and NC conditions. The doll was constructed with an internal speaker such that it could actually carry on a conversation with the children. The conversations were taped, transcribed, and scored for mean length of utterance (MLU), transitional utterance length to each C and NC signal, and the proportion of child questions, exact self-repetitions, repetition and reductions, and rephrases/elaborations. The data analysis revealed that all children appropriately modified the length of their utterances (MLU) in the doll condition but not in the adult condition, indicating that they were sensitive to both the feedback and the nature of their listeners. Older children were more likely than younger children, and girls more likely than boys to adjust the length of their utterances appropriately to each type of feedback, slightly increasing the length of the subsequent utterance to a C signal and decreasing the length to an NC signal. The younger children were also more likely to respond with a simple repetition to NC cues from the adult.
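MLU, the main dependent measure here, is simply the average utterance length over a transcript. A minimal sketch, assuming a word-token count as a stand-in for the morpheme counts used in actual MLU coding (the sample utterances are invented, not from the study):

```python
# Word-based approximation of mean length of utterance (MLU). Standard MLU
# is counted in morphemes, which requires linguistic coding; splitting on
# whitespace is a rough but common proxy.

def mlu(utterances: list[str]) -> float:
    """Mean number of word tokens per utterance."""
    if not utterances:
        return 0.0
    return sum(len(u.split()) for u in utterances) / len(utterances)

# Illustrative child utterances in the two listener conditions.
to_adult = ["the dog ran away fast", "then he came back home"]
to_doll = ["dog ran", "he came back"]

print(mlu(to_adult))  # 5.0
print(mlu(to_doll))   # 2.5
```

A lower MLU to the doll than to the adult, as in this toy example, is the kind of listener-sensitive adjustment the study measured.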

14.
Understanding low-intelligibility speech is effortful. In three experiments, we examined the effects of intelligibility on working memory (WM) demands imposed by perception of synthetic speech. In all three experiments, a primary speeded word recognition task was paired with a secondary WM-load task designed to vary the availability of WM capacity during speech perception. Speech intelligibility was varied either by training listeners to use available acoustic cues in a more diagnostic manner (as in Experiment 1) or by providing listeners with more informative acoustic cues (i.e., better speech quality, as in Experiments 2 and 3). In the first experiment, training significantly improved intelligibility and recognition speed; increasing WM load significantly slowed recognition. A significant interaction between training and load indicated that the benefit of training on recognition speed was observed only under low memory load. In subsequent experiments, listeners received no training; intelligibility was manipulated by changing synthesizers. Improving intelligibility without training improved recognition accuracy, and increasing memory load still decreased it, but more intelligible speech did not produce more efficient use of available WM capacity. This suggests that perceptual learning modifies the way available capacity is used, perhaps by increasing the use of more phonetically informative features and/or by decreasing use of less informative ones.

16.
Young infants are capable of integrating auditory and visual information, and their speech perception can be influenced by visual cues; by 5 months, infants detect mismatch between mouth articulations and speech sounds. From 6 months of age, infants gradually shift their attention away from the eyes and towards the mouth in articulating faces, potentially to benefit from intersensory redundancy of audiovisual (AV) cues. Using eye tracking, we investigated whether 6- to 9-month-olds showed a similar age-related increase of looking to the mouth while observing congruent and/or redundant versus mismatched and non-redundant speech cues. Participants distinguished between congruent and incongruent AV cues as reflected by the amount of looking to the mouth. They showed an age-related increase in attention to the mouth, but only for non-redundant, mismatched AV speech cues. Our results highlight the role of intersensory redundancy and audiovisual mismatch mechanisms in facilitating the development of speech processing in infants under 12 months of age.

17.
In an investigation of the effects of simulated stuttering on listener recall, a presentation was varied on two factors: degree of stuttering (mild or severe) and information value of stuttered words (low or high). A control presentation featuring non-stuttered speech also was prepared. Five groups of 16 subjects were randomly assigned to, and participated in, one of the five listening conditions. Then they completed a 20-item recall test. A one-way analysis of variance revealed significant differences among the five conditions. Two-way analysis of variance disclosed no main effects. However, a significant interaction showed that recall was lowest in the severe stuttering-high information condition. The results are discussed in terms of attention to critical information.

18.
The TRACE model of speech perception (McClelland & Elman, 1986) is contrasted with a fuzzy logical model of perception (FLMP) (Oden & Massaro, 1978). The central question is how the models account for the influence of multiple sources of information on perceptual judgment. Although the two models can make somewhat similar predictions, the assumptions underlying the models are fundamentally different. The TRACE model is built around the concept of interactive activation, whereas the FLMP is structured in terms of the integration of independent sources of information. The models are tested against the results of an experiment involving the independent manipulation of bottom-up and top-down sources of information. Using a signal detection framework, sensitivity and bias measures of performance can be computed. The TRACE model predicts that top-down influences from the word level influence sensitivity at the phoneme level, whereas the FLMP does not. The empirical results of a study involving the influence of phonological context and segmental information on the perceptual recognition of a speech segment are best described without any assumed changes in sensitivity. To date, not only is a mechanism of interactive activation not necessary to describe speech perception, it is shown to be wrong when instantiated in the TRACE model.
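The sensitivity and bias measures available in a signal-detection framework are conventionally d′ and the criterion c. A minimal sketch, assuming equal-variance Gaussian signal detection and illustrative hit and false-alarm rates (not values from the study):

```python
# Sensitivity (d') and response bias (criterion c) from hit and false-alarm
# rates: d' = z(H) - z(F), c = -(z(H) + z(F)) / 2, where z is the inverse
# standard-normal CDF. Rates are clipped away from 0 and 1 so z stays finite.
from statistics import NormalDist

def dprime_and_c(hit_rate: float, fa_rate: float, eps: float = 1e-3):
    z = NormalDist().inv_cdf
    h = min(max(hit_rate, eps), 1 - eps)
    f = min(max(fa_rate, eps), 1 - eps)
    d_prime = z(h) - z(f)
    criterion = -0.5 * (z(h) + z(f))
    return d_prime, criterion

d, c = dprime_and_c(0.84, 0.16)
print(round(d, 2), round(c, 2))  # about 1.99 and 0.0
```

On these measures, TRACE's prediction amounts to top-down context shifting d′ at the phoneme level, whereas the FLMP predicts shifts only in bias.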

19.
The nature of acoustic memory and its relationship to the categorizing process in speech perception is investigated in three experiments on the serial recall of lists of syllables. The first study confirms previous reports that sequences comprising the syllables bah, dah, and gah show neither enhanced retention when presented auditorily rather than visually, nor a recency effect—both occurred with sequences in which vowel sounds differed (bee, bih, boo). This was found not to be a simple vowel-consonant difference, since acoustic memory effects did occur with consonant sequences that were acoustically more discriminable (sha, ma, ga and ash, am, ag). Further experiments used the stimulus suffix effect to provide evidence of acoustic memory, and showed (1) that increasing the acoustic similarity of the set grossly impairs acoustic memory effects for vowels as well as consonants, and (2) that such memory effects are no greater for steady-state vowels than for continuously changing diphthongs. It is concluded that the usefulness of the information that can be retrieved from acoustic memory depends on the acoustic similarity of the items in the list rather than on their phonetic class or whether or not they have “encoded” acoustic cues. These results question whether there is any psychological evidence for “encoded” speech sounds being categorized in ways different from other speech sounds.

20.
Older adults have greater difficulty than younger adults perceiving vocal emotions. To better characterise this effect, we explored its relation to age differences in sensory, cognitive and emotional functioning. Additionally, we examined the role of speaker age and listener sex. Participants (N = 163) aged 19–34 years and 60–85 years categorised neutral sentences spoken by ten younger and ten older speakers with a happy, neutral, sad, or angry voice. Acoustic analyses indicated that expressions from younger and older speakers denoted the intended emotion with similar accuracy. As expected, younger participants outperformed older participants, and this effect was statistically mediated by an age-related decline in both optimism and working memory. Additionally, age differences in emotion perception were larger for younger than for older speakers, and the perceptual advantage for younger over older speakers was more pronounced in younger than in older participants. Last, a female perception benefit was less pervasive in the older than the younger group. Together, these findings suggest that the role of age for emotion perception is multi-faceted. It is linked to emotional and cognitive change, to processing biases that benefit young and own-age expressions, and to the different aptitudes of women and men.


Copyright © 北京勤云科技发展有限公司 (Beijing Qinyun Technology Development Co., Ltd.)  京ICP备09084417号