Similar Documents
20 similar documents found (search time: 125 ms).
1.
刘文理  周详  张明亮 《心理学报》2016,(9):1057-1069
Using Mandarin stop-vowel syllables (/pa/, /pi/, and /pu/) and nonspeech analogues of their acoustic cues as context sounds, and a Mandarin /ta/-/ka/ continuum as target sounds, three experiments examined how context sounds affect Mandarin listeners' identification of the targets and the underlying mechanism. Experiment 1 found that the context effects of the three stop-vowel syllables were partly consistent with the prediction of spectral contrast but contradicted the prediction of articulatory-feature theory. Experiments 2 and 3 found that nonspeech analogues of the syllables' critical acoustic cue (the second-formant trajectory) and analogues of all their formant trajectories produced similar context effects; both were broadly consistent with the context effects of the three stop-vowel syllables, though they differed in some details. The results indicate that context effects in stop-vowel-stop sequences arise mainly from differences in the critical acoustic cues of the context syllables, supporting auditory theories, although the phonetic category of the context sound also partly shaped the pattern of context effects across the three syllables. In addition, context sounds whose frequencies lay far from the target's critical acoustic cue region also facilitated identification of a particular phonetic category, possibly because the context sound activated the acoustic cues of the corresponding category.

2.
Perception of stop consonants and lexical tones (cited 2 times: 0 self-citations, 2 by others)
This paper reports two experiments that used synthesized Mandarin CV syllables as stimuli to investigate the perceptual interaction between stop consonants and lexical tones. The main results were: (1) the manner of articulation of the stop consonant affects tone perception, with unaspirated stops biasing listeners toward tone responses whose F0 contour starts high; (2) the tone of the syllable in turn affects judgments of the stop's manner of articulation: in Tone 1 and Tone 4 syllables, listeners tend to hear the stop as unaspirated, whereas in Tone 2 and Tone 3 syllables they tend to hear it as aspirated.

3.
Using a priming paradigm, three experiments manipulated the spectral similarity and the interval between prime and target to trace the time course of feature analysis and integration in Mandarin listeners' categorical perception of vowels. As the spectral similarity between the prime (progressing from pure tone, through complex tone, to the target vowel itself) and the target vowel increased, the priming effect persisted for an increasingly long time. The results support the view that categorical perception of speech proceeds from an early stage of acoustic feature analysis and integration to a later categorical stage, and provide preliminary evidence on the time course of these processing stages.

4.
Effects of coarticulation in sentences on syllable perception (cited 5 times: 0 self-citations, 5 by others)
Using a syllable-similarity listening task with college students as participants, this study examined differences in syllable perception caused by coarticulation between syllables. The results showed that coarticulation between syllables affects the segmental content of a syllable, and that this variation depends mainly on differences in the final (rime) portion of the preceding syllable and on the place of articulation of the initial consonant of the following syllable. For syllables with identical articulation but different contexts, the perceptual effect produced by suprasegmental variation was clearly larger than that produced by coarticulation.

5.
刘文理  祁志强 《心理科学》2016,39(2):291-298
Using a priming paradigm, two experiments examined priming effects in the categorical perception of consonants and of vowels, respectively. The primes were pure tones and the target categories themselves; the targets were consonant-category and vowel-category continua. For the consonant continuum, the percentage of categorical responses was affected by both tone and speech primes, whereas reaction times were affected only by speech primes. For the vowel continuum, the percentage of categorical responses was affected by neither prime type, but reaction times were affected by speech primes. The results indicate that priming effects differ between consonant and vowel categorization, providing new evidence that the underlying processing mechanisms for consonant and vowel categories differ.

6.
Research on infant speech perception shows that initially (at 1-4 months) infants can discriminate nearly all phonetic category contrasts. With increasing native-language experience, infants' speech perception gradually shows the influence of native phonological properties: consonant perception shows increased sensitivity to native category boundaries and decreased sensitivity to non-native boundaries, with non-native categories beginning to assimilate into the native phonological system, while native vowel perception shows the perceptual magnet effect. This evidence indicates that infants gradually acquire native phonemic categories, and that the order of acquisition may depend on factors such as the acoustic properties and frequency of occurrence of the category exemplars.

7.
Categorical perception of speech means that listeners can discriminate stimuli from different phonemic categories but not different stimuli within the same category. The degree of categorical perception of lexical tones may be related to the acoustic similarity of the stimuli: the higher the similarity, the less categorical the perception. Besides properties of the tones themselves, factors affecting the categorical perception of tones include native-language background, age, the context in which the stimuli occur, and stimulus type (speech vs. nonspeech). Electrophysiological techniques have deepened research on categorical tone perception and help to resolve some long-standing theoretical controversies.

8.
The time course of activation of orthographic, phonological, and semantic information in Chinese character recognition (I) (cited 19 times: 4 self-citations, 15 by others)
Using semantically based and phonologically based primed category-judgment tasks under different SOA conditions, this study examined the relative time course of activation of orthographic, phonological, and semantic information in the recognition of high-frequency Chinese characters. The results of two experiments showed that activation proceeds in the order orthography, then meaning, then phonology. This finding suggests that semantic access for high-frequency characters better fits the prediction of the direct-access theory. The phonology of high-frequency characters is activated automatically, but this activation may occur after semantic access.

9.
曲折  丁玉珑 《心理学报》2010,42(2):193-199
Using the DRM paradigm, this study examined the effect of phonological association among single Chinese characters on false memory. In Experiment 1, the characters in each study list shared the same syllable as an unpresented critical lure. In a recognition test, participants showed clear false recognition of the lures. Experiment 2 found that when the characters within a list shared the same onset (initial) or the same rime (final), participants also falsely recognized the corresponding lures, with an effect essentially equal to that of the shared-syllable condition. These results show that phonological association among Chinese characters can induce false memory, but that the effect does not grow with increasing phonological similarity. The findings suggest that false memory can arise from relatively low-level, perception-based processing, and that although the Chinese phonological network is easily activated, its degree of activation is limited.

10.
The time course of activation of orthographic, phonological, and semantic information in Chinese character recognition (I) (cited 6 times: 1 self-citation, 5 by others)
Using semantically based and phonologically based primed category-judgment tasks under different SOA conditions, this study examined the relative time course of activation of orthographic, phonological, and semantic information in the recognition of high-frequency Chinese characters. The results of two experiments showed that activation proceeds in the order orthography, then meaning, then phonology. This finding suggests that semantic access for high-frequency characters better fits the prediction of the direct-access theory. The phonology of high-frequency characters is activated automatically, but this activation may occur after semantic access.

11.
Experiments were conducted investigating unimodal and cross-modal phonetic context effects on /r/ and /l/ identifications to test a hypothesis that context effects arise in early auditory speech processing. Experiment 1 demonstrated an influence of a preceding bilabial stop consonant on the acoustic realization of /r/ and /l/ produced within the stop clusters /ibri/ and /ibli/. In Experiment 2, members of an acoustic /iri/ to /ili/ continuum were paired with an acoustic /ibi/. These dichotic tokens were associated with an increase in "l" identification relative to the /iri/ to /ili/ continuum. In Experiment 3, the /iri/ to /ili/ tokens were dubbed onto a video of a talker saying /ibi/. This condition was associated with a reliable perceptual shift relative to an auditory-only condition in which the /iri/ to /ili/ tokens were presented by themselves, ruling out an account of these context effects as arising during early auditory processing.

12.
It is well known that the formant transitions of stop consonants in CV and VC syllables are roughly the mirror image of each other in time. These formant motions reflect the acoustic correlates of the articulators as they move rapidly into and out of the period of stop closure. Although acoustically different, these formant transitions are correlated perceptually with similar phonetic segments. Earlier research of Klatt and Shattuck (1975) had suggested that mirror image acoustic patterns resembling formant transitions were not perceived as similar. However, mirror image patterns could still have some underlying similarity which might facilitate learning, recognition, and the establishment of perceptual constancy of phonetic segments across syllable positions. This paper reports the results of four experiments designed to study the perceptual similarity of mirror-image acoustic patterns resembling the formant transitions and steady-state segments of the CV and VC syllables /ba/, /da/, /ab/, and /ad/. Using a perceptual learning paradigm, we found that subjects could learn to assign mirror-image acoustic patterns to arbitrary response categories more consistently than they could do so with similar arrangements of the same patterns based on spectrotemporal commonalities. Subjects respond not only to the individual components or dimensions of these acoustic patterns, but also process entire patterns and make use of the patterns’ internal organization in learning to categorize them consistently according to different classification rules.

13.
The “McGurk effect” demonstrates that visual (lip-read) information is used during speech perception even when it is discrepant with auditory information. While this has been established as a robust effect in subjects from Western cultures, our own earlier results had suggested that Japanese subjects use visual information much less than American subjects do (Sekiyama & Tohkura, 1993). The present study examined whether Chinese subjects would also show a reduced McGurk effect due to their cultural similarities with the Japanese. The subjects were 14 native speakers of Chinese living in Japan. Stimuli consisted of 10 syllables (/ba/, /pa/, /ma/, /wa/, /da/, /ta/, /na/, /ga/, /ka/, /ra/) pronounced by two speakers, one Japanese and one American. Each auditory syllable was dubbed onto every visual syllable within one speaker, resulting in 100 audiovisual stimuli in each language. The subjects’ main task was to report what they thought they had heard while looking at and listening to the speaker while the stimuli were being uttered. Compared with previous results obtained with American subjects, the Chinese subjects showed a weaker McGurk effect. The results also showed that the magnitude of the McGurk effect depends on the length of time the Chinese subjects had lived in Japan. Factors that foster and alter the Chinese subjects’ reliance on auditory information are discussed.

14.
This three-part study demonstrates that perceptual order can influence the integration of acoustic speech cues. In Experiment 1, the subjects labeled the [s] and [sh] in natural FV and VF syllables in which the frication was replaced with synthetic stimuli. Responses to these "hybrid" stimuli were influenced by cues in the vocalic segment as well as by the synthetic frication. However, the influence of the preceding vocalic cues was considerably weaker than was that of the following vocalic cues. Experiment 2 examined the acoustic bases for this asymmetry and consisted of analyses revealing that FV and VF syllables are similar in terms of the acoustic structures thought to underlie the vocalic context effects. Experiment 3 examined the perceptual bases for the asymmetry. A subset of the hybrid FV and VF stimuli were presented in reverse, such that the acoustic and perceptual bases for the asymmetry were pitted against each other in the listening task. The perceptual bases (i.e., the perceived order of the frication and vocalic cues) proved to be the determining factor. Current auditory processing models, such as backward recognition masking, preperceptual auditory storage, or models based on linguistic factors, do not adequately account for the observed asymmetries.

15.
This three-part study demonstrates that perceptual order can influence the integration of acoustic speech cues. In Experiment 1, the subjects labeled the [s] and [∫] in natural FV and VF syllables in which the frication was replaced with synthetic stimuli. Responses to these “hybrid” stimuli were influenced by cues in the vocalic segment as well as by the synthetic frication. However, the influence of the preceding vocalic cues was considerably weaker than was that of the following vocalic cues. Experiment 2 examined the acoustic bases for this asymmetry and consisted of analyses revealing that FV and VF syllables are similar in terms of the acoustic structures thought to underlie the vocalic context effects. Experiment 3 examined the perceptual bases for the asymmetry. A subset of the hybrid FV and VF stimuli were presented in reverse, such that the acoustic and perceptual bases for the asymmetry were pitted against each other in the listening task. The perceptual bases (i.e., the perceived order of the frication and vocalic cues) proved to be the determining factor. Current auditory processing models, such as backward recognition masking, preperceptual auditory storage, or models based on linguistic factors, do not adequately account for the observed asymmetries.

16.
Same-different reaction times (RTs) were obtained to pairs of synthetic speech sounds ranging perceptually from /ba/ through /pa/. Listeners responded “same” if both stimuli in a pair were the same phonetic segments (i.e., /ba/-/ba/ or /pa/-/pa/) or “different” if both stimuli were different phonetic segments (i.e., /ba/-/pa/ or /pa/-/ba/). RT for “same” responses was faster to pairs of acoustically identical stimuli (A-A) than to pairs of acoustically different stimuli (A-a) belonging to the same phonetic category. RT for “different” responses was faster for large acoustic differences across a phonetic boundary than for smaller acoustic differences across a phonetic boundary. The results suggest that acoustic information for stop consonants is available to listeners, although the retrieval of this information in discrimination will depend on the level of processing accessed by the particular information processing task.

17.
This study investigated whether consonant phonetic features or consonant acoustic properties more appropriately describe perceptual confusions among speech stimuli in multitalker babble backgrounds. Ten normal-hearing subjects identified 19 consonants, each paired with /a/, /i/, and /u/ in a CV format. The stimuli were presented in quiet and in three levels of babble. Multidimensional scaling analyses of the confusion data retrieved stimulus dimensions corresponding to consonant acoustic parameters. The acoustic dimensions identified were: periodicity/burst onset, friction duration, consonant-vowel ratio, second formant transition slope, and first formant transition onset. These findings are comparable to previous reports of acoustic effects observed in white-noise conditions, and support the theory that acoustic characteristics are the relevant perceptual properties of speech in noise conditions. Perceptual effects of vowel context and level of the babble also were observed. These condition effects contrast with those previously reported for white-noise interference, and are attributed to direct masking of the low-frequency acoustic cues in the nonsense syllables by the low-frequency spectrum of the babble.

18.
Two new experimental operations were used to distinguish between auditory and phonetic levels of processing in speech perception: the first based on reaction time data in speeded classification tasks with synthetic speech stimuli, and the second based on average evoked potentials recorded concurrently in the same tasks. Each of four experiments compared the processing of two different dimensions of the same synthetic consonant-vowel syllables. When a phonetic dimension was compared to an auditory dimension, different patterns of results were obtained in both the reaction time and evoked potential data. No such differences were obtained for isolated acoustic components of the phonetic dimension or for two purely auditory dimensions. Together with other recent evidence, the present results constitute additional converging operations on the distinction between auditory and phonetic processes in speech perception and on the idea that phonetic processing involves mechanisms that are lateralized in one cerebral hemisphere.

19.
This study investigated the acoustic correlates of perceptual centers (p-centers) in CV and VC syllables and developed an acoustic p-center model. In Part 1, listeners located syllables’ p-centers by a method-of-adjustment procedure. The CV syllables contained the consonants /?/, /r/, /n/, /t/, /d/, /k/, and /g/; the VCs, the consonants /?/, /r/, and /n/. The vowel in all syllables was /a/. The results of this experiment replicated and extended previous findings regarding the effects of phonetic variation on p-centers. In Part 2, a digital signal processing procedure was used to acoustically model p-center perception. Each stimulus was passed through a six-band digital filter, and the outputs were processed to derive low-frequency modulation components. These components were weighted according to a perceived modulation magnitude function and recombined to create six psychoacoustic envelopes containing modulation energies from 3 to 47 Hz. In this analysis, p-centers were found to be highly correlated with the time-weighted function of the rate-of-change in the psychoacoustic envelopes, multiplied by the psychoacoustic envelope magnitude increment. The results were interpreted as suggesting (1) the probable role of low-frequency energy modulations in p-center perception, and (2) the presence of perceptual processes that integrate multiple articulatory events into a single syllabic event.
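The band-filtering and envelope-extraction procedure described in this abstract can be sketched in Python. This is a minimal illustration only: the band edges, filter orders, and the p-center scoring rule below are hypothetical simplifications of the kind of analysis the abstract describes, not the authors' implementation.

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt, hilbert

def modulation_envelopes(x, fs, bands, lp_cutoff=47.0):
    """Band-filter a signal, take each band's amplitude envelope,
    and low-pass the envelopes to keep only slow modulations."""
    envs = []
    for lo, hi in bands:
        sos = butter(4, [lo, hi], btype="bandpass", fs=fs, output="sos")
        band = sosfiltfilt(sos, x)
        env = np.abs(hilbert(band))  # amplitude envelope via analytic signal
        sos_lp = butter(4, lp_cutoff, btype="lowpass", fs=fs, output="sos")
        envs.append(sosfiltfilt(sos_lp, env))
    return np.array(envs)

def p_center_estimate(envs, fs):
    """Toy p-center estimate (hypothetical scoring rule): the time where
    the envelope rate-of-change, weighted by the positive envelope
    increment and summed over bands, is largest."""
    d = np.diff(envs, axis=1)        # per-band rate of change
    inc = np.clip(d, 0.0, None)      # keep only envelope increments
    score = (d * inc).sum(axis=0)
    return np.argmax(score) / fs     # seconds from signal onset
```

A usage sketch: feed in a syllable waveform sampled at `fs`, with `bands` chosen to tile the speech spectrum; the returned array has one smoothed envelope per band, and the toy estimator picks the instant of steepest combined envelope rise.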

20.
On each trial, subjects were presented monaurally with single synthetic speech syllables. In Experiment I, when /ba/ and /ta/ specified one response, and /da/ and /ka/ another response, a right-ear advantage in reaction time was observed; when /ba/ specified one response and all the other stimuli specified the other response, no ear effect was observed. Unsuccessful attempts to obtain a monaural right-ear advantage for consonants in some reaction-time tasks might be due to some kind of prephonetic matching between a representation of the stimulus attended to and the presented stimulus, the output of this match providing sufficient information for response. In Experiment II, /bi/ and /b?/ specified one response and /b?/ and /bu/ the other response, but no ear effect was observed. It was concluded that the right-ear advantage displayed for consonants in the corresponding condition of Experiment I was not the pure effect of a particular stimulus-response mapping, but depended also on the phonetic properties of consonants.


Copyright©北京勤云科技发展有限公司  京ICP备09084417号