Similar articles
20 similar articles retrieved
1.
刘文理  祁志强 《心理科学》2016,39(2):291-298
Using a priming paradigm, two experiments examined priming effects in the perception of consonant and vowel categories, respectively. The primes were pure tones and the target categories themselves; the targets were consonant-category and vowel-category continua. The results showed that the percentage of categorical responses for the consonant continuum was affected by both the pure-tone and the speech primes, whereas reaction times for consonant categorization were affected only by the speech primes. For the vowel continuum, the percentage of categorical responses was unaffected by either type of prime, but reaction times for vowel categorization were affected by the speech primes. These findings indicate that priming effects differ between consonant and vowel category perception, providing new evidence that the underlying processing mechanisms for consonants and vowels differ.

2.
Three selective adaptation experiments were run, using nonspeech stimuli (music and noise) to adapt speech continua ([ba]-[wa] and [cha]-[sha]). The adaptors caused significant phoneme boundary shifts on the speech continua only when they matched in periodicity: Music stimuli adapted [ba]-[wa], whereas noise stimuli adapted [cha]-[sha]. However, such effects occurred even when the adaptors and test continua did not match in other simple acoustic cues (rise time or consonant duration). Spectral overlap of adaptors and test items was also found to be unnecessary for adaptation. The data support the existence of auditory processors sensitive to complex acoustic cues, as well as units that respond to more abstract properties. The latter are probably at a level previously thought to be phonetic. Asymmetrical adaptation was observed, arguing against an opponent-process arrangement of these units. A two-level acoustic model of the speech perception process is offered to account for the data.

3.
刘文理  乐国安 《心理学报》2012,44(5):585-594
Using a priming paradigm with native Mandarin listeners, this study examined whether nonspeech sounds influence the perception of speech sounds. Experiment 1 examined the effect of pure tones on the perception of a consonant-category continuum and found that the pure tones influenced perception of the continuum, showing a spectral contrast effect. Experiment 2 examined the effects of pure tones and complex tones on vowel perception and found that pure or complex tones matching the vowels' formant frequencies speeded vowel identification, showing a priming effect. Both experiments consistently found that nonspeech sounds can influence the perception of speech sounds, indicating that speech perception also requires a prelinguistic stage of spectral feature analysis, consistent with auditory theories of speech perception.

4.
A contingent adaptation effect is reported for speech perception. Experiments were conducted to test the effects of an alternating sequence of two adapting syllables, [da] and [tʰi], on the perception of two series of synthetic speech syllables, [ba]-[pʰa] and [bi]-[pʰi]. Each of the test series consisted of 11 stimuli varying in voice onset time, a cue which distinguishes voiced from voiceless stop consonants in word-initial position. The [da]-[tʰi] adapting sequence produced opposite shifts in the loci of the phonetic boundaries for the two test series. For the [ba]-[pʰa] series, listeners made fewer identification responses to the [b] category after adaptation, while for the [bi]-[pʰi] series, listeners made more responses to the [b] category. The opposing shifts indicate that the perceptual analysis of voicing in stop consonants is carried out with respect to vowel environment.

5.
Speech perception is an ecologically important example of the highly context-dependent nature of perception; adjacent speech, and even nonspeech, sounds influence how listeners categorize speech. Some theories emphasize linguistic or articulation-based processes in speech-elicited context effects and peripheral (cochlear) auditory perceptual interactions in non-speech-elicited context effects. The present studies challenge this division. Results of three experiments indicate that acoustic histories composed of sine-wave tones drawn from spectral distributions with different mean frequencies robustly affect speech categorization. These context effects were observed even when the acoustic context temporally adjacent to the speech stimulus was held constant and when more than a second of silence or multiple intervening sounds separated the nonlinguistic acoustic context and speech targets. These experiments indicate that speech categorization is sensitive to statistical distributions of spectral information, even if the distributions are composed of nonlinguistic elements. Acoustic context need be neither linguistic nor local to influence speech perception.
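The stimulus logic described above (nonlinguistic "acoustic histories" whose tone frequencies are sampled from distributions with different mean frequencies) can be sketched in a few lines of code. The snippet below is only an illustration under assumed parameters (sampling rate, tone and gap durations, distribution means and spread, output file names); it is not the stimulus recipe used in these experiments.

import numpy as np
from scipy.io import wavfile

FS = 22050  # sampling rate in Hz (assumed for illustration)

def make_acoustic_history(mean_hz, sd_hz=300.0, n_tones=21,
                          tone_dur=0.07, gap_dur=0.03, seed=0):
    """Concatenate short sine tones whose frequencies are drawn from a
    normal distribution centred on mean_hz (illustrative parameters only)."""
    rng = np.random.default_rng(seed)
    freqs = rng.normal(mean_hz, sd_hz, n_tones)
    t = np.arange(int(tone_dur * FS)) / FS
    gap = np.zeros(int(gap_dur * FS))
    ramp = np.linspace(0.0, 1.0, int(0.005 * FS))  # 5-ms on/off ramps
    pieces = []
    for f in freqs:
        tone = np.sin(2 * np.pi * f * t)
        tone[:ramp.size] *= ramp          # fade in to avoid clicks
        tone[-ramp.size:] *= ramp[::-1]   # fade out
        pieces.extend([tone, gap])
    return np.concatenate(pieces)

# Two acoustic histories drawn from distributions with different mean frequencies
low_context = make_acoustic_history(mean_hz=1800.0)
high_context = make_acoustic_history(mean_hz=2800.0)
wavfile.write("low_context.wav", FS, (low_context * 32767).astype(np.int16))
wavfile.write("high_context.wav", FS, (high_context * 32767).astype(np.int16))

According to the abstract, preceding a fixed speech target with contexts like these, differing only in the mean of the frequency distribution, is enough to shift how listeners categorize that target.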

6.
Acoustic cues for the perception of place of articulation in aphasia
Two experiments assessed the abilities of aphasic patients and nonaphasic controls to perceive place of articulation in stop consonants. Experiment I explored labeling and discrimination of [ba, da, ga] continua varying in formant transitions with or without an appropriate burst onset appended to the transitions. Results showed general difficulty in perceiving place of articulation for the aphasic patients. Regardless of diagnostic category or auditory language comprehension score, discrimination ability was independent of labeling ability, and discrimination functions were similar to normals even in the context of failure to reliably label the stimuli. Further, there was less variability in performance for stimuli with bursts than without bursts. Experiment II measured the effects of lengthening the formant transitions on perception of place of articulation in stop consonants and on the perception of auditory analogs to the speech stimuli. Lengthening the transitions failed to improve performance for either the speech or nonspeech stimuli and, in some cases, reduced performance level. No correlation was observed between the patients' ability to perceive the speech and nonspeech stimuli.

7.
We investigated the conditions under which the [b]-[w] contrast is processed in a context-dependent manner, specifically in relation to syllable duration. In an earlier paper, Miller and Liberman (1979) demonstrated that when listeners use transition duration to differentiate [b] from [w], they treat it in relation to the duration of the syllable: As syllables from a [ba]-[wa] series varying in transition duration become longer, so, too, does the transition duration at the [b]-[w] perceptual boundary. In a subsequent paper, Shinn, Blumstein, and Jongman (1985) questioned the generality of this finding by showing that the effect of syllable duration is eliminated for [ba]-[wa] stimuli that are less schematic than those used by Miller and Liberman. In the present investigation, we demonstrated that when these “more natural” stimuli are presented in multitalker babble noise instead of in quiet (as was done by Shinn et al.), the syllable-duration effect emerges. Our findings suggest that the syllable-duration effect in particular, and context effects in general, may play a more important role in speech perception than Shinn et al. suggested.

8.
杨婉晴  肖容  梁丹丹 《心理学报》2020,52(6):730-741
Using the MMN and p-MMR as neural correlates of Mandarin lexical tone perception, this study examined mismatch responses to tone stimuli at the pre-attentive stage in Mandarin-speaking children aged 2-4 years, focusing on how category information and deviance size affect the children's perception. The results showed that the large, across-category deviant condition (T1/T3) elicited a clear MMN, whereas none of the within-category or small-deviance conditions (T3a/T3, T3b/T3, T2/T3) elicited a significant MMR. This indicates that Mandarin-speaking children aged 2-4 are still developing their tone perception ability, and that phonological and acoustic information jointly shape their tone perception.

9.
Gauthier B  Shi R  Xu Y 《Cognition》2007,103(1):80-106
We explore in this study how infants may derive phonetic categories from adult input that is highly variable. Neural networks in the form of self-organizing maps (SOMs) were used to simulate unsupervised learning of Mandarin tones. In Simulation 1, we trained the SOMs with syllable-sized continuous F0 contours, produced by multiple speakers in connected speech, and with the corresponding velocity profiles (D1). No attempt was made to reduce the large amount of variability in the input or to add to the input any abstract features such as height and slope of the F0 contours. In the testing phase, a reasonably high categorization rate was achieved with F0 profiles, but D1 profiles yielded almost perfect categorization of the four tones. Close inspection of the learned prototypical D1 profile clusters revealed that they had effectively eliminated surface variability and directly reflected articulatory movements toward the underlying targets of the four tones as proposed by . Additional simulations indicated that a further learning step was possible through which D1 prototypes with one-to-one correspondence to the tones were derived from the prototype clusters learned in Simulation 1. Implications of these findings for theories of language acquisition, speech perception and speech production are discussed.
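For readers unfamiliar with the modeling approach, the sketch below is a minimal from-scratch self-organizing map trained on first-derivative (D1) profiles of synthetic F0 contours for the four Mandarin tones, with map nodes labeled by majority vote afterwards. The contour shapes, map size, learning schedule, and jitter level are all assumptions for illustration; this is not the architecture, data, or training regime reported in the paper.

import numpy as np

rng = np.random.default_rng(1)
N = 20  # samples per F0 contour

def f0_contour(tone, jitter=5.0):
    """Rough synthetic F0 (Hz) shapes for Mandarin tones 1-4 (illustrative only)."""
    x = np.linspace(0.0, 1.0, N)
    shapes = {
        1: 220.0 + 0.0 * x,                        # high level
        2: 180.0 + 60.0 * x,                       # rising
        3: 190.0 - 160.0 * x * (1 - x) + 30.0 * x, # dipping
        4: 240.0 - 90.0 * x,                       # falling
    }
    return shapes[tone] + rng.normal(0.0, jitter, N)

def d1(contour):
    """D1 profile: first derivative (velocity) of the F0 contour."""
    return np.gradient(contour)

tones = rng.integers(1, 5, 400)                    # 400 training tokens, tones 1-4
data = np.array([d1(f0_contour(t)) for t in tones])

# Minimal self-organizing map: a grid of weight vectors pulled toward each
# input's best-matching unit (BMU) and its neighbours.
grid = (6, 6)
W = rng.normal(0.0, 1.0, (grid[0], grid[1], N))
coords = np.stack(np.meshgrid(np.arange(grid[0]), np.arange(grid[1]),
                              indexing="ij"), axis=-1)

def bmu(x):
    return np.unravel_index(np.argmin(np.linalg.norm(W - x, axis=2)), grid)

for epoch in range(30):
    lr = 0.5 * np.exp(-epoch / 15.0)       # decaying learning rate
    sigma = 3.0 * np.exp(-epoch / 15.0)    # decaying neighbourhood radius
    for x in rng.permutation(data):
        b = np.array(bmu(x))
        h = np.exp(-np.sum((coords - b) ** 2, axis=2) / (2 * sigma ** 2))
        W += lr * h[..., None] * (x - W)

# Label each node with the majority tone among the tokens it wins, then score.
wins = {}
for x, t in zip(data, tones):
    wins.setdefault(bmu(x), []).append(int(t))
labels = {node: max(set(ts), key=ts.count) for node, ts in wins.items()}
accuracy = np.mean([labels[bmu(x)] == t for x, t in zip(data, tones)])
print(f"Majority-vote categorization accuracy on D1 profiles: {accuracy:.2f}")

The use of D1 (velocity) input here mirrors the abstract's point that derivative profiles strip away much of the surface variability in raw F0 contours, but the quantitative results of this toy setup should not be read as a replication.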

10.
This study examined the categorical perception of the Mandarin tone T2-T4 contrast in middle-aged and older Mandarin speakers aged 50-80 years, exploring the factors underlying age-related decline in categorical tone perception. The classic categorical perception paradigm was used. The results showed that (1) category boundary widths in all older age bands (50-60, 60-70, and 70-80 years) were significantly larger than in the young group, with no significant differences among the older bands; (2) in the older adults, boundary width was significantly negatively correlated with memory span scores but not with age; and (3) compared with the young group, the older group differed significantly in the slope of the identification function within categories but not between categories. The results indicate that categorical tone perception declines in older adults, reflecting deterioration of phonological-level processing, and that declining memory span is associated with this age-related decline. Moreover, between 50 and 80 years of age, age itself does not directly affect the degree of categorical tone perception.
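The boundary and boundary-width measures referred to above are conventionally obtained by fitting a logistic function to the identification curve, taking the 50% crossover as the category boundary and the distance between the 25% and 75% points as the boundary width. The sketch below shows that standard computation on made-up identification data; the continuum steps and response proportions are hypothetical and do not come from the study.

import numpy as np
from scipy.optimize import curve_fit

# Proportion of "T2" responses at each step of a hypothetical T2-T4 continuum
steps = np.arange(1, 8)
p_t2 = np.array([0.98, 0.96, 0.90, 0.55, 0.15, 0.05, 0.02])

def logistic(x, x0, k):
    """Two-parameter logistic identification function (decreasing for k > 0)."""
    return 1.0 / (1.0 + np.exp(k * (x - x0)))

(x0, k), _ = curve_fit(logistic, steps, p_t2, p0=[4.0, 2.0])

boundary = x0                        # 50% crossover point
width = abs(np.log(3) / k) * 2       # distance between the 25% and 75% points
print(f"Category boundary at step {boundary:.2f}, boundary width = {width:.2f} steps")

A wider boundary (smaller |k|, i.e., a shallower identification slope) corresponds to less categorical perception, which is the pattern the abstract reports for the older groups.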

11.
Lim SJ  Holt LL 《Cognitive Science》2011,35(7):1390-1405
Although speech categories are defined by multiple acoustic dimensions, some are perceptually weighted more than others and there are residual effects of native-language weightings in non-native speech perception. Recent research on nonlinguistic sound category learning suggests that the distribution characteristics of experienced sounds influence perceptual cue weights: Increasing variability across a dimension leads listeners to rely upon it less in subsequent category learning (Holt & Lotto, 2006). The present experiment investigated the implications of this among native Japanese speakers learning English /r/-/l/ categories. Training was accomplished using a videogame paradigm that emphasizes associations among sound categories, visual information, and players' responses to videogame characters rather than overt categorization or explicit feedback. Subjects who played the game for 2.5 h across 5 days exhibited improvements in /r/-/l/ perception on par with 2-4 weeks of explicit categorization training in previous research and exhibited a shift toward more native-like perceptual cue weights.

12.
Certain attributes of a syllable-final liquid can influence the perceived place of articulation of a following stop consonant. To demonstrate this perceptual context effect, the CV portions of natural tokens of [al-da], [al-ga], [ar-da], [ar-ga] were excised and replaced with closely matched synthetic stimuli drawn from a [da]-[ga] continuum. The resulting hybrid disyllables were then presented to listeners who labeled both liquids and stops. The natural CV portions had two different effects on perception of the synthetic CVs. First, there was an effect of liquid category: Listeners perceived “g” more often in the context of [al] than in that of [ar]. Second, there was an effect due to tokens of [al] and [ar] having been produced before [da] or [ga]: More “g” percepts occurred when stops followed liquids that had been produced before [g]. A hypothesis that each of these perceptual effects finds a parallel in speech production is supported by spectrograms of the original utterances. Here, it seems, is another instance in which findings in speech perception reflect compensation for coarticulation during speech production.

13.
We examined categorical speech perception in school‐age children with developmental dyslexia or Specific Language Impairment (SLI), compared to age‐matched and younger controls. Stimuli consisted of synthetic speech tokens in which place of articulation varied from ‘b’ to ‘d’. Children were tested on categorization, categorization in noise, and discrimination. Phonological awareness skills were also assessed to examine whether these correlated with speech perception measures. We observed similarly good baseline categorization rates across all groups; however, when noise was added, the SLI group showed impaired categorization relative to controls, whereas dyslexic children showed an intact profile. The SLI group showed poorer than expected between‐category discrimination rates, whereas this pattern was only marginal in the dyslexic group. Impaired phonological awareness profiles were observed in both the SLI and dyslexic groups; however, correlations between phonological awareness and speech perception scores were not significant. The results of the study suggest that in children with language and reading impairments, there is a significant relationship between receptive language and speech perception, there is at best a weak relationship between reading and speech perception, and indeed the relationship between phonological and speech perception deficits is highly complex.

14.
Working memory uses central sound representations as an informational basis. The central sound representation is the temporally and feature-integrated mental representation that corresponds to phenomenal perception. It is used in (higher-order) mental operations and stored in long-term memory. In the bottom-up processing path, the central sound representation can be probed at the level of auditory sensory memory with the mismatch negativity (MMN) of the event-related potential. The present paper reviews a newly developed MMN paradigm to tap into the processing of speech sound representations. Preattentive vowel categorization based on F1-F2 formant information occurs in speech sounds and complex tones even under conditions of high variability of the auditory input. However, an additional experiment demonstrated the limits of the preattentive categorization of language-relevant information. It tested whether the system categorizes complex tones containing the F1 and F2 formant components of the vowel /a/ differently than six sounds with nonlanguage-like F1-F2 combinations. From the absence of an MMN in this experiment, it is concluded that no adequate vowel representation was constructed. This shows limitations of the capability of preattentive vowel categorization.

15.
Cognitive systems face a tension between stability and plasticity. The maintenance of long-term representations that reflect the global regularities of the environment is often at odds with pressure to flexibly adjust to short-term input regularities that may deviate from the norm. This tension is abundantly clear in speech communication when talkers with accents or dialects produce input that deviates from a listener's language community norms. Prior research demonstrates that when bottom-up acoustic information or top-down word knowledge is available to disambiguate speech input, there is short-term adaptive plasticity such that subsequent speech perception is shifted even in the absence of the disambiguating information. Although such effects are well-documented, it is not yet known whether bottom-up and top-down resolution of ambiguity may operate through common processes, or how these information sources may interact in guiding the adaptive plasticity of speech perception. The present study investigates the joint contributions of bottom-up information from the acoustic signal and top-down information from lexical knowledge in the adaptive plasticity of speech categorization according to short-term input regularities. The results implicate speech category activation, whether from top-down or bottom-up sources, in driving rapid adjustment of listeners' reliance on acoustic dimensions in speech categorization. Broadly, this pattern of perception is consistent with dynamic mapping of input to category representations that is flexibly tuned according to interactive processing accommodating both lexical knowledge and idiosyncrasies of the acoustic input.

16.
This study demonstrates that listeners use lexical knowledge in perceptual learning of speech sounds. Dutch listeners first made lexical decisions on Dutch words and nonwords. The final fricative of 20 critical words had been replaced by an ambiguous sound, between [f] and [s]. One group of listeners heard ambiguous [f]-final words (e.g., [WItlo?], from witlof, chicory) and unambiguous [s]-final words (e.g., naaldbos, pine forest). Another group heard the reverse (e.g., ambiguous [na:ldbo?], unambiguous witlof). Listeners who had heard [?] in [f]-final words were subsequently more likely to categorize ambiguous sounds on an [f]-[s] continuum as [f] than those who heard [?] in [s]-final words. Control conditions ruled out alternative explanations based on selective adaptation and contrast. Lexical information can thus be used to train categorization of speech. This use of lexical information differs from the on-line lexical feedback embodied in interactive models of speech perception. In contrast to on-line feedback, lexical feedback for learning is of benefit to spoken word recognition (e.g., in adapting to a newly encountered dialect).

17.
席洁  姜薇  张林军  舒华 《心理学报》2009,41(7):572-579
Categoricalness is a prominent feature of speech perception and has long attracted broad attention from researchers, yet little research has addressed the categorical perception of different phonetic features in Chinese or its developmental course. In this study, speech synthesis was used to vary the aspirated/unaspirated feature of consonants and the fundamental frequency (F0) contours of tones, generating stimulus continua. Using the classic categorical perception paradigm, the study examined the characteristics of categorical perception of voice onset time (VOT) and lexical tone in normal Mandarin-speaking adults and its developmental pattern in children of different ages. The results showed that (1) adults perceived both VOT and tone categorically; and (2) for Mandarin tone, 6-year-olds already showed adult-like categorical perception, whereas on the VOT dimension categorical perception was refined continuously with age, with even 7-year-olds not yet reaching adult sensitivity. This indicates that the two phonetic features, Mandarin VOT and tone, follow different developmental trajectories.
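As a rough illustration of how a stimulus continuum such as the tone continuum described above can be constructed, the snippet below linearly interpolates between two schematic F0 contours to produce a multi-step series. The endpoint contours, step count, and frequency values are hypothetical; the study synthesized its continua with speech-synthesis methods, and this sketch only shows the interpolation idea.

import numpy as np

n_points = 10   # samples along each schematic F0 contour
n_steps = 7     # number of continuum steps, endpoints included

x = np.linspace(0.0, 1.0, n_points)
f0_rising = 180.0 + 60.0 * x     # schematic rising contour (Hz), Tone 2-like
f0_falling = 240.0 - 90.0 * x    # schematic falling contour (Hz), Tone 4-like

# Each continuum step is a weighted mixture of the two endpoint contours
weights = np.linspace(0.0, 1.0, n_steps)
continuum = np.array([(1 - w) * f0_rising + w * f0_falling for w in weights])

for i, contour in enumerate(continuum, start=1):
    print(f"step {i}: F0 runs from {contour[0]:.0f} Hz to {contour[-1]:.0f} Hz")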

18.
Effect of lexical status on phonetic categorization
To investigate the interaction in speech perception between lexical knowledge (in particular, whether a stimulus token makes a word or nonword) and phonetic categorization, sets of [bVC]-[dVC] place-of-articulation continua were constructed so that the endpoint tokens represented word-word, word-nonword, nonword-word, and nonword-nonword combinations. Experiment 1 demonstrated that ambiguous tokens were perceived in favor of the word token and supported the contention that lexical knowledge can affect the process of phonetic categorization. Experiment 2 utilized a reaction time procedure with the same stimuli and demonstrated that the effect of lexical status on phonetic categorization increased with response latency, suggesting that the lexical effect represents a perceptual process that is separate from and follows phonetic categorization. Experiment 3 utilized a different set of [b-d] continua to separate the effects of final consonant contrast and lexical status that were confounded in Experiments 1 and 2. Results demonstrated that both lexical status and contextual contrast separately affected the identification of the initial stop. Data from these three experiments support a perceptual model wherein phonetic categorization can operate separately from higher levels of analysis.

19.
Baum SR 《Brain and language》2001,76(3):266-281
Two experiments examined the influence of context on stop-consonant voicing identification in fluent and nonfluent aphasic patients and normal controls. Listeners were required to label the initial stop in a target word varying along a voice onset time (VOT) continuum as either voiced or voiceless ([b]/[p] or [d]/[t]). Target stimuli were presented in sentence contexts in which the rate of speech of the sentence context (Experiment 1) or the semantic bias of the context (Experiment 2) was manipulated. The results revealed that all subject groups were sensitive to the contextual influences, although the extent of the context effects varied somewhat across groups and across experiments. In addition, a number of patients in both the fluent and nonfluent aphasic groups could not consistently identify even endpoint stimuli, confirming phonetic categorization impairments previously shown in such individuals. Results are discussed with respect to the potential reliance by aphasic patients on higher level context to compensate for phonetic perception deficits.

20.
A series of eye-tracking and categorization experiments investigated the use of speaking-rate information in the segmentation of Dutch ambiguous-word sequences. Juncture phonemes with ambiguous durations (e.g., [s] in 'eens (s)peer,' "once (s)pear," [t] in 'nooit (t)rap,' "never staircase/quick") were perceived as longer and hence more often as word-initial when following a fast than a slow context sentence. Listeners used speaking-rate information as soon as it became available. Rate information from a context proximal to the juncture phoneme and from a more distal context was used during on-line word recognition, as reflected in listeners' eye movements. Stronger effects of distal context, however, were observed in the categorization task, which measures the off-line results of the word-recognition process. In categorization, the amount of rate context had the greatest influence on the use of rate information, but in eye tracking, the rate information's proximal location was the most important. These findings constrain accounts of how speaking rate modulates the interpretation of durational cues during word recognition by suggesting that rate estimates are used to evaluate upcoming phonetic information continuously during prelexical speech processing.
