1.
Smiling is a universally recognized visible expression of happiness. A side effect of smiling is an alteration of the vocal tract, suggesting that during vocalization smiling may be heard as well as seen. The present studies investigated this hypothesis. Smiled and straight-faced tokens of 29 utterances were collected from six speakers and presented to three groups of 12 subjects for forced-choice identification. Groups 1 and 2, instructed to select the smiled or happier sound, respectively, performed significantly better than chance for all speakers. Group 3, asked to select the sadder sound, chose the straight-faced token reliably for four speakers, but for one also picked the smiled token. Acoustic analyses showed that smiling raised the fundamental and formant frequencies for all speakers, and amplitude and/or duration for some. Particular cue combinations appear to be heard as smiling specifically, whereas others are associated with emotionality in general.
2.
Identifiability of vowels and speakers from whispered syllables
In the present experiments, the effect of whisper register on speech perception was measured. We assessed listeners' abilities to identify 10 vowels in [hVd] context pronounced by 3 male and 3 female speakers in normal and whisper registers. Results showed 82% average identification accuracy in whisper mode, approximately a 10% falloff in identification accuracy from normally phonated speech. In both modes, significant confusions of [o] for [a] occurred, with some additional significant confusions occurring in whisper mode among vowels adjacent in F1/F2 space. We also assessed listeners' abilities to match whispered syllables with normally phonated ones by the same speaker. Each trial contained the matching syllable and two foils whispered by speakers of the same sex as the speaker of the target. Identification performance was significantly better than chance across subjects, speakers, and vowels, with no listener achieving better than 96% performance. Acoustic analyses measured potential cues to speaker identity independent of register.
3.
This study addresses a central question in perception of novel figurative language: whether it is interpreted intelligently and figuratively immediately, or only after a literal interpretation fails. Eighty sentence frames that could plausibly end with a literal, truly anomalous, or figurative word were created. After validation for meaningfulness and figurativeness, the 240 sentences were presented to 11 subjects for event-related potential (ERP) recording. The ERP's first 200 ms is believed to reflect the structuring of the input; the prominence of a dip at around 400 ms (N400) is said to relate inversely to how expected a word is. Results showed no difference between anomalous and metaphoric ERPs in the early window, metaphoric and literal ERPs converging 300-500 ms after the ending, and significant N400s only for anomalous endings. A follow-up study showed that the metaphoric endings were less frequent (in standardized word norms) than were the anomalous and literal endings, and that there were significant differences in cloze probabilities (determined from 24 new subjects) among the three ending types: literal > metaphoric > anomalous. It is possible that the low frequency of the metaphoric element and the lower cloze probability of the anomalous one contributed to the processes reflected in the early window, while the incongruity and near-zero cloze probability of the anomalous endings produced an N400 effect in them alone. The structure or parse derived for metaphor during the early window appears to yield a preliminary interpretation suggesting anomaly, while semantic analysis reflected in the later window renders a plausible figurative interpretation.
4.
5.
A dichotic listening experiment was conducted to determine if vowel perception is based on phonetic feature extraction as is consonant perception. Twenty normal right-handed subjects were given dichotic CV syllables contrasting in final vowels. It was found that, unlike consonants, the perception of dichotic vowels was not significantly lateralized, that the dichotic perception of vowels was not significantly enhanced by the number of phonetic features shared, and that the occurrence of double-blend errors was not greater than chance. However, there was strong evidence for the use of phonetic features at the level of response organization. It is suggested that the differences between vowel and consonant perception reflect the differential availability of the underlying acoustic information from auditory store, rather than differences in processing mechanisms.
6.
Acoustic cues for the perception of place of articulation in aphasia
Two experiments assessed the abilities of aphasic patients and nonaphasic controls to perceive place of articulation in stop consonants. Experiment I explored labeling and discrimination of [ba, da, ga] continua varying in formant transitions, with or without an appropriate burst onset appended to the transitions. Results showed general difficulty in perceiving place of articulation for the aphasic patients. Regardless of diagnostic category or auditory language comprehension score, discrimination ability was independent of labeling ability, and discrimination functions were similar to normals' even in the context of failure to reliably label the stimuli. Further, there was less variability in performance for stimuli with bursts than without bursts. Experiment II measured the effects of lengthening the formant transitions on perception of place of articulation in stop consonants and on the perception of auditory analogs to the speech stimuli. Lengthening the transitions failed to improve performance for either the speech or nonspeech stimuli and, in some cases, reduced performance level. No correlation was observed between the patients' ability to perceive the speech and nonspeech stimuli.
7.
Two dichotic listening experiments assessed the lateralization of speaker identification in right-handed native English speakers. Stimuli were tokens of /ba/, /da/, /pa/, and /ta/ pronounced by two male and two female speakers. In Experiment 1, subjects identified either the two consonants in dichotic stimuli spoken by the same person, or the two speakers in dichotic tokens of the same syllable. In Experiment 2, new subjects identified the two consonants or the two speakers in pairs in which both consonant and speaker distinguished the pair members. Both experiments yielded significant right-ear advantages for consonant identification and nonsignificant ear differences for speaker identification. Fewer errors were made for speaker judgments than for consonant judgments, and for speaker judgments for pairs in which the speakers were of the same sex than for pairs in which speaker sex differed. It is concluded that, as in vowel identification, neither hemisphere clearly dominates in dichotic speaker identification, perhaps because of minor information loss in the ipsilateral pathways.
8.
9.
Attention, Perception, & Psychophysics - Two adaptation experiments were conducted to determine some of the sufficient acoustic properties for excitation of the feature detectors underlying...
10.
Perceptual confusions within 36 formationally minimal pairs of signs were assessed for native signers under two conditions of video presentation: (1) normally lighted black-and-white displays and (2) point-light displays constructed by affixing 26 points of retroreflective tape on the fingertips, back knuckles, and wrists, and 1 point on the nose of the signer. Nine pairs were selected for each of the formational parameters of handshape, location, movement, and orientation. For each minimal pair, a native signer constructed and signed an ASL sentence that was syntactically and semantically appropriate for both members of the pair. Fourteen highly fluent ASL users responded by selecting a picture appropriate to the viewed sentence, thus avoiding contamination by English. For both viewing conditions, subjects discriminated the minimal pairs significantly better than chance. Performance was better (1) when location or orientation differentiated the signs than when movement or handshape did, and (2) in normal lighting than in the point-light displays. With the point-light displays, discrimination of location, movement, and orientation was poorer, and handshape discrimination was at chance. The discussion considers (1) the confusion of signs in the absence of linguistic redundancy, (2) the effectiveness of the point-light configurations as “minimal cues” for the contrasts, and (3) the efficacy of using these point-light configurations to reduce the information in ASL to a narrow bandwidth.