Similar Articles
 A total of 20 similar articles were retrieved.
1.
Dichotic listening performance for different classes of speech sounds was examined under conditions of controlled attention. Consideration of the complex of target item and competing item demonstrated that, in general, targets were more accurately identified when the competing item shared no relevant features with it and less accurately identified when the competing item shared place, voice, or manner with the target item. Nasals as well as stops demonstrated a significant right-ear advantage (REA). False alarm rates were very similar for left and right attentional conditions, whereas intrusions from the right ear while attending to the left were far more common than intrusions from the left while attending to the right. Attention is viewed as serving to select the stimuli that will be reported, but at a late stage, and only after the right ear perceptual advantage has had its effect. A model of dichotic listening performance is proposed in which both the ease of localizing the item and the strength of evidence for the presence of the item are relevant factors.

2.
Stimulus and task factors as determinants of ear advantages
Two dichotic experiments are reported which dissociate stimulus and task factors in perceptual lateralization. With only trajectories of fundamental frequency as a distinguishing cue, perception of the voicing of stop consonants gives a right ear advantage. Identification of the emotional tone of a sentence of natural speech gives a left ear advantage. If such parameters as fundamental frequency variation or overall naturalness of the speech material determined the direction of an ear advantage, the reverse pattern of results would have been obtained. Hence the task appears more important than the nature of the stimulus.

3.
An experiment was designed to assess the contribution of attentional set to performance on a forced choice recognition task in dichotic listening. Subjects were randomly assigned to one of three conditions: speech sounds composed of stop consonants, emotional nonspeech sounds, or a random combination of both. In the groups exposed to a single class of stimuli (pure-list), an REA (right ear advantage) emerged for the speech sounds, and an LEA (left ear advantage) for the nonspeech sounds. Under mixed conditions using both classes of stimuli, no significant ear advantage was apparent, either globally or individually for the speech and nonspeech sounds. However, performance was more accurate for the left ear on nonspeech sounds and for the right ear on speech sounds, regardless of pure versus mixed placement. The results suggest that under divided attention conditions, attentional set influences the direction of the laterality effect.

4.
In a pitch discrimination task, subjects were faster and more accurate in judging low-frequency sounds when these stimuli were presented to the left ear, compared with the right ear. In contrast, a right-ear advantage was found with high-frequency sounds. The effect was in terms of relative frequency and not absolute frequency, suggesting that the effect arises from postsensory mechanisms. A similar laterality effect has been reported in visual perception with stimuli varying in spatial frequency. These multimodal laterality effects may reflect a general computational difference between the two cerebral hemispheres, with the left hemisphere biased for processing high-frequency information and the right hemisphere biased for processing low-frequency information.

5.
刘丽, 彭聃龄 (Liu Li & Peng Danling). 《心理学报》 (Acta Psychologica Sinica), 2004, 36(3): 260-264
A dichotic listening task was used to investigate the right-ear advantage in the processing of Mandarin Chinese lexical tones, and the responding hand was introduced as a factor to explore the mechanism underlying this advantage. The results showed that native Chinese listeners exhibited a right-ear/left-hemisphere advantage for Mandarin tone processing, but that this advantage was relative: the right hemisphere was also capable of processing tonal information. The results support the direct access model.

6.
Listeners perceive speech sounds relative to context. Contextual influences might differ over hemispheres if different types of auditory processing are lateralized. Hemispheric differences in contextual influences on vowel perception were investigated by presenting speech targets and both speech and non-speech contexts to listeners’ right or left ears (contexts and targets either to the same or to opposite ears). Listeners performed a discrimination task. Vowel perception was influenced by acoustic properties of the context signals. The strength of this influence depended on laterality of target presentation, and on the speech/non-speech status of the context signal. We conclude that contrastive contextual influences on vowel perception are stronger when targets are processed predominantly by the right hemisphere. In the left hemisphere, contrastive effects are smaller and largely restricted to speech contexts.

7.
The effect of attention on cerebral dominance and the asymmetry between left and right ears was investigated using a selective listening task. Right-handed subjects were presented with simultaneous dichotic speech messages; they shadowed one message in either the right or left ear and at the same time tapped with either the right or the left hand when they heard a specified target word in either message. The ear asymmetry was shown only when subjects' attention was focused on some other aspect of the task: they tapped to more targets in the right ear, but only when these came in the non-shadowed message; they made more shadowing errors with the left ear message, but chiefly for non-target words. The verbal response of shadowing showed the right ear dominance more clearly than the manual response of tapping. Tapping with the left hand interfered more with shadowing than tapping with the right hand, but there was little correlation between the degree of hand and of ear asymmetry over individual subjects. The results support the idea that the right ear dominance is primarily a quantitative difference in the distribution of attention to left and right ear inputs reaching the left hemisphere speech areas. This affects both the efficiency of speech perception and the degree of response competition between simultaneous verbal and manual responses.

8.
The dichotic perception of Mandarin tones by native and nonnative listeners was examined in order to investigate the lateralization of lexical tone. Twenty American listeners with no tone language background and 20 Chinese listeners were asked to identify dichotically presented tone pairs by identifying which tone they heard in each ear. For the Chinese listeners, 57% of the total errors occurred via the left ear, indicating a significant right ear advantage. However, the American listeners revealed no significant ear preference, with 48% of the errors attributable to the left ear. These results indicated that Mandarin tones are predominantly processed in the left hemisphere by native Mandarin speakers, whereas they are bilaterally processed by American English speakers with no prior tone experience. The results also suggest that the left hemisphere superiority for native Mandarin tone processing is similar to native processing of other tone languages.
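The ear-advantage result above is stated in terms of error percentages per ear. As a worked illustration only (the abstract does not give the authors' exact index, and the numbers below simply restate the reported percentages), a conventional error-based laterality index could be computed like this:

```python
def laterality_index(left_ear_errors: float, right_ear_errors: float) -> float:
    """Error-based laterality index in percent.

    Positive values mean relatively more errors on the left ear, i.e. a
    right-ear advantage (REA); negative values mean a left-ear advantage.
    This is the generic (L - R) / (L + R) * 100 form applied to error counts;
    the study summarized above may have used a different formula.
    """
    return 100.0 * (left_ear_errors - right_ear_errors) / (left_ear_errors + right_ear_errors)

# Error shares reported in the abstract (percent of total errors per ear):
print(laterality_index(57, 43))  # Chinese listeners:  +14.0, a right-ear advantage
print(laterality_index(48, 52))  # American listeners:  -4.0, no clear ear preference
```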

9.
This study tests the locus of attention during selective listening for speech-like stimuli. Can processing be differentially allocated to the two ears? Two conditions were used. The simultaneous condition involved one of four randomly chosen stop consonants being presented to one of the ears chosen at random. The sequential condition involved two intervals: in the first, the subject listened to the right ear; in the second, to the left ear. One of the four consonants was presented to an attended ear during one of these intervals. Experiment I used no distracting stimuli. Experiment II utilized a distracting consonant not confusable with any of the four target consonants. This distractor was always presented to any ear not containing a target. In both experiments, simultaneous and sequential performance were essentially identical, despite the need for attention sharing between the two ears during the simultaneous condition. We conclude that selective attention does not occur during perceptual processing of speech sounds presented to the two ears. We suggest that attentive effects arise in short-term memory following processing.

10.

The nondeterministic relationship between speech acoustics and abstract phonemic representations imposes a challenge for listeners to maintain perceptual constancy despite the highly variable acoustic realization of speech. Talker normalization facilitates speech processing by reducing the degrees of freedom for mapping between encountered speech and phonemic representations. While this process has been proposed to facilitate the perception of ambiguous speech sounds, it is currently unknown whether talker normalization is affected by the degree of potential ambiguity in acoustic-phonemic mapping. We explored the effects of talker normalization on speech processing in a series of speeded classification paradigms, parametrically manipulating the potential for inconsistent acoustic-phonemic relationships across talkers for both consonants and vowels. Listeners identified words with varying potential acoustic-phonemic ambiguity across talkers (e.g., beet/boat vs. boot/boat) spoken by single or mixed talkers. Auditory categorization of words was always slower when listening to mixed talkers compared to a single talker, even when there was no potential acoustic ambiguity between target sounds. Moreover, the processing cost imposed by mixed talkers was greatest when words had the most potential acoustic-phonemic overlap across talkers. Models of acoustic dissimilarity between target speech sounds did not account for the pattern of results. These results suggest (a) that talker normalization incurs the greatest processing cost when disambiguating highly confusable sounds and (b) that talker normalization appears to be an obligatory component of speech perception, taking place even when the acoustic-phonemic relationships across sounds are unambiguous.


11.
Persons with Down syndrome (DS) tend to exhibit an atypical left ear-right hemisphere advantage (LEA) for the perception of speech sounds. In the present study, a recent adaptation of the dichotic listening procedure was employed to examine interhemispheric integration during the performance of a lateralized verbal-motor task. Although adults with DS (n = 13) demonstrated a right ear-left hemisphere advantage in the dichotic-motor task similar to their peers with (n = 14) and without (n = 14) undifferentiated developmental disabilities, they showed an LEA in a free recall dichotic listening task. Based on a comparison of the laterality indices obtained from both dichotic listening procedures, it appears that the manifestation of lateral ear advantages in persons with DS may depend on the response requirements of the task.

12.
Brief experience with reliable spectral characteristics of a listening context can markedly alter perception of subsequent speech sounds, and parallels have been drawn between auditory compensation for listening context and visual color constancy. In order to better evaluate such an analogy, the generality of acoustic context effects for sounds with spectral-temporal compositions distinct from speech was investigated. Listeners identified nonspeech sounds—extensively edited samples produced by a French horn and a tenor saxophone—following either resynthesized speech or a short passage of music. Preceding contexts were “colored” by spectral envelope difference filters, which were created to emphasize differences between French horn and saxophone spectra. Listeners were more likely to report hearing a saxophone when the stimulus followed a context filtered to emphasize spectral characteristics of the French horn, and vice versa. Despite clear changes in apparent acoustic source, the auditory system calibrated to relatively predictable spectral characteristics of filtered context, differentially affecting perception of subsequent target nonspeech sounds. This calibration to listening context and relative indifference to acoustic sources operates much like visual color constancy, for which reliable properties of the spectrum of illumination are factored out of perception of color.
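The abstract mentions that preceding contexts were colored by spectral envelope difference filters, but it gives no implementation. The sketch below shows one plausible way to build such a filter, assuming long-term smoothed FFT envelopes and an FIR approximation via scipy.signal.firwin2; the function names, smoothing settings, and gain limit are illustrative assumptions, not the authors' actual procedure.

```python
import numpy as np
from scipy.signal import firwin2, lfilter

def smoothed_envelope_db(x, n_fft=2048, smooth_bins=32):
    """Long-term magnitude spectrum of signal x, averaged over frames and
    lightly smoothed across frequency, in dB (illustrative parameters)."""
    hop = n_fft // 2
    frames = np.lib.stride_tricks.sliding_window_view(x, n_fft)[::hop]
    mag = np.abs(np.fft.rfft(frames * np.hanning(n_fft), axis=1)).mean(axis=0)
    kernel = np.ones(smooth_bins) / smooth_bins
    return 20.0 * np.log10(np.convolve(mag, kernel, mode="same") + 1e-12)

def difference_filter(horn, sax, n_taps=513, max_gain_db=15.0):
    """FIR filter whose response emphasizes horn-minus-saxophone spectral
    differences; negate diff_db to color contexts in the opposite direction."""
    diff_db = smoothed_envelope_db(horn) - smoothed_envelope_db(sax)
    diff_db = np.clip(diff_db - diff_db.mean(), -max_gain_db, max_gain_db)
    freqs = np.linspace(0.0, 1.0, diff_db.size)  # normalized 0 .. Nyquist
    return firwin2(n_taps, freqs, 10.0 ** (diff_db / 20.0))

# Hypothetical usage with mono numpy arrays `horn`, `sax`, and `context`
# recorded at the same sample rate:
# b = difference_filter(horn, sax)
# colored_context = lfilter(b, [1.0], context)
```

A full implementation would also equate presentation levels after filtering, which this sketch omits.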

13.
刘文理, 祁志强 (Liu Wenli & Qi Zhiqiang). 《心理科学》 (Journal of Psychological Science), 2016, 39(2): 291-298
Using a priming paradigm, two experiments examined priming effects in the perception of consonant and vowel categories, respectively. The primes were pure tones and the target categories themselves; the targets were consonant-category and vowel-category continua. The results showed that the percentage of categorical responses in perceiving the consonant continuum was influenced by both pure-tone and speech primes, whereas reaction times for consonant categorization were influenced only by speech primes; the percentage of categorical responses in perceiving the vowel continuum was not influenced by either type of prime, but reaction times for vowel categorization were influenced by speech primes. These findings indicate that priming effects differ between consonant and vowel category perception, providing new evidence for differences in the underlying processing mechanisms of consonants and vowels.

14.
The study of cerebral specialization in persons with Down syndrome (DS) has revealed an anomalous pattern of organization. Specifically, dichotic listening studies (e.g., Elliott & Weeks, 1993) have suggested a left ear/right hemisphere dominance for speech perception for persons with DS. In the current investigation, the cerebral dominance for speech production was examined using the mouth asymmetry technique. In right-handed, nonhandicapped subjects, mouth asymmetry methodology has shown that during speech, the right side of the mouth opens sooner and to a larger degree than the left side (Graves, Goodglass, & Landis, 1982). The phenomenon of right mouth asymmetry (RMA) is believed to reflect the direct access that the musculature on the right side of the face has to the left hemisphere's speech production systems. This direct access may facilitate the transfer of innervatory patterns to the muscles on the right side of the face. In the present study, the lateralization for speech production was investigated in 10 right-handed participants with DS and 10 nonhandicapped subjects. An RMA at the initiation and end of speech production occurred for subjects in both groups. Surprisingly, the degree of asymmetry between groups did not differ, suggesting that the lateralization of speech production is similar for persons with and persons without DS. These results support the biological dissociation model (Elliott, Weeks, & Elliott, 1987), which holds that persons with DS display a unique dissociation between speech perception (right hemisphere) and speech production (left hemisphere).

15.
Hemispheric asymmetry for the perception of non-verbal emotional human voices was investigated in normal subjects by dichotic listening utilizing non-verbal responses. As in former experiments, a slight but significant left ear superiority was found, suggesting that these stimuli are mediated by the right hemisphere. The dominant role of the right hemisphere in this perceptual task is discussed in terms of the earlier development of this hemisphere.

16.
Right hemisphere EEG sensitivity to speech
Recent speech perception work with normals and aphasics suggests that the right hemisphere may be more adept than the left at making the voicing discrimination, and the reverse for place of articulation. We examined this right hemisphere voicing effect with natural speech stimuli: stop consonants in pre-, mid-, and postvocalic contexts. Using a neuroelectric event-related potential paradigm, we found numerous effects indicating bilateral components reflecting the voicing and place contrast and unique right hemisphere discrimination of both voicing and place of articulation.

17.
American English liquids /r/ and /l/ have been considered intermediate between stop consonants and vowels acoustically, articulatorily, phonologically, and perceptually. Cutting (1974a) found position-dependent ear advantages for liquids in a dichotic listening task: syllable-initial liquids produced significant right ear advantages, while syllable-final liquids produced no reliable ear advantages. The present study employed identification and discrimination tasks to determine whether /r/ and /l/ are perceived differently depending on syllable position when perception is tested by a different method. Fifteen subjects listened to two synthetically produced speech series—/li/ to /ri/ and /il/ to /ir/—in which stepwise variations of the third formant cued the difference in consonant identity. The results indicated that: (1) perception did not differ between syllable positions (in contrast to the dichotic listening results), (2) liquids in both syllable positions were perceived categorically, and (3) discrimination of a nonspeech control series did not account for the perception of the speech sounds.

18.
Fifty right-handed patients with focal temporal lobe epilepsy were administered a dichotic listening test with consonant-vowel syllables under non-forced, forced right and forced left attention conditions, and a neuropsychological test battery. Dichotic listening performance was compared in subgroups with and without left hemisphere cognitive dysfunction, measured by the test battery, and in subgroups with left and right temporal epileptic focus. Left hemisphere cognitive dysfunction led to more correct responses to left ear stimuli in all three attention conditions, and fewer correct responses to right ear stimuli in the non-forced attention condition. This was probably caused by basic left hemisphere perceptual dysfunction. Dichotic listening was less affected by a left-sided epileptic focus than by left hemisphere cognitive dysfunction. General cognitive functioning influenced dichotic listening performance more strongly in forced than in non-forced attention conditions. Larger cerebral networks were probably involved in the forced attention conditions due to the emphasis on conscious effort.

19.
Young children are frequently exposed to sounds such as speech and music in noisy listening conditions, which have the potential to disrupt their learning. Missing input that is masked by louder sounds can, under the right conditions, be ‘filled in’ by the perceptual system using a process known as perceptual restoration. This experiment compared the ability of 4- to 6-year-old children, 9- to 11-year-old children and adults to complete a melody identification task using perceptual restoration. Melodies were presented either intact (complete input), with noise-filled gaps (partial input; perceptual restoration can occur) or with silence-filled gaps (partial input; perceptual restoration cannot occur). All age groups could use perceptual restoration to help them interpret partial input, yet perception was the most detrimentally affected by the presentation of partial input for the youngest children. This implies that they might have more difficulty perceiving sounds in noisy environments than older children or adults. Young children had particular difficulty using partial input for identification under conditions where perceptual restoration could not occur. These findings suggest that perceptual restoration is a crucial mechanism in young children, where processes that fill in missing sensory input represent an important part of the way meaning is extracted from a complex sensory world.

20.
This study examined the relation between selective attention, perception, and memory factors in the generation of auditory asymmetries. Sixty subjects were randomly assigned to one of three dichotic listening groups. One group was presented with paired linguistic stimuli, a second group with dichotic nonverbal material, while a third group heard randomly interspersed verbal and nonverbal pairs. Order of ear report was controlled in all three groups. Significant right ear advantages on first and second reports were found in the verbal group, and a similar pattern of left ear advantages was found in the nonverbal group. This ear-by-material dissociation was only found on second ear reports in the group which heard the randomly interspersed pairs. No first-report ear advantages were evident in the latter group. These results are discussed in terms of the independence of perceptual and memory mechanisms in the production of auditory asymmetries.
