Similar Articles
20 similar articles found.
1.
When a formant transition and the remainder of a syllable are presented to subjects' opposite ears, most subjects perceive two simultaneous sounds: a syllable and a nonspeech chirp. It has been demonstrated that, when the remainder of the syllable (base) is kept unchanged, the identity of the perceived syllable will depend on the kind of transition presented at the opposite ear. This phenomenon, called duplex perception, has been interpreted as the result of the independent operation of two perceptual systems or modes, the phonetic and the auditory mode. In the present experiments, listeners were required to identify and discriminate such duplex syllables. In some conditions, the isolated transition was embedded in a temporal sequence of capturing transitions sent to the same ear. This streaming procedure significantly weakened the contribution of the transition to the perceived phonetic identity of the syllable. It is likely that the sequential integration of the isolated transition into a sequence of capturing transitions affected its fusion with the contralateral base. This finding contrasts with the idea that the auditory and phonetic processes are operating independently of each other. The capturing effect seems to be more consistent with the hypothesis that duplex perception occurs in the presence of conflicting cues for the segregation and the integration of the isolated transition with the base.

2.
The third-formant (F3) transition of a three-formant /da/ or /ga/ syllable was extracted and replaced by sine-wave transitions that followed the F3 centre frequency. The syllable without the F3 transition (base) was always presented at the left ear, and a /da/ (falling) or /ga/ (rising) sine-wave transition could be presented at either the left, the right, or both ears. The listeners perceived the base as a syllable, and the sine-wave transition as a non-speech whistle, which was lateralized near the left ear, the right ear, or the middle of the head, respectively. In Experiment 1, the sine-wave transition strongly influenced the identity of the syllable only when it was lateralized at the same ear as the base (left ear). Phonetic integration between the base and the transitions became weak, but was not completely eliminated, when the latter was perceived near the middle of the head or at the ear opposite the base (right ear). The second experiment replicated these findings by using duplex stimuli in which the level of the sine-wave transitions was such that the subjects could not reliably tell whether a /da/ or a /ga/ transition was present at the same ear as the base. This condition was introduced in order to control for the possibility that the subjects could have identified the syllables by associating a falling or rising transition presented at the left ear with a /da/ or /ga/ percept, respectively. Alternative suggestions about the relation between speech and non-speech perceptual processes are discussed on the basis of these results.

3.
Duplex perception occurs when the phonetically distinguishing transitions of a syllable are presented to one ear and the rest of the syllable (the “base”) is simultaneously presented to the other ear. Subjects report hearing both a nonspeech “chirp” and a speech syllable correctly cued by the transitions. In two experiments, we compared phonetic identification of intact syllables, duplex percepts, isolated transitions, and bases. In both experiments, subjects were able to identify the phonetic information encoded into isolated transitions in the absence of an appropriate syllabic context. Also, there was no significant difference in phonetic identification of isolated transitions and duplex percepts. Finally, in the second experiment, the category boundaries from identification of isolated transitions and duplex percepts were not significantly different from each other. However, both boundaries were statistically different from the category boundary for intact syllables. Taken together, these results suggest that listeners do not need to perceptually integrate F2 transitions or F2 and F3 transition pairs with the base in duplex perception. Rather, it appears that listeners identify the chirps as speech without reference to the base.

4.
If a place-of-articulation contrast is created between the auditory and the visual component syllables of videotaped speech, frequently the syllable that listeners report they have heard differs phonetically from the auditory component. These “McGurk effects”, as they have come to be called, show that speech perception may involve some kind of intermodal process. There are two classes of these phenomena: fusions and combinations. Perception of the syllable /da/ when auditory /ba/ and visual /ga/ are presented provides a clear example of the former, and perception of the string /bga/ after presentation of auditory /ga/ and visual /ba/ an unambiguous instance of the latter. Besides perceptual fusions and combinations, hearing the visually presented component syllable also shows an influence of vision on audition. It is argued that these “visual” responses arise from basically the same underlying processes that yield fusions and combinations, respectively. In the present study, the visual component of audiovisually incongruous CV-syllables was presented in the left and the right visual hemifield, respectively. Audiovisual fusion responses showed a left hemifield advantage, and audiovisual combination responses a right hemifield advantage. This finding suggests that the process of audiovisual integration differs between audiovisual fusions and combinations and, furthermore, that the two cerebral hemispheres contribute differentially to the two classes of response.

5.
When the (vocalic) formant transitions appropriate for the stops in a synthetic approximation to [spa] or [sta] are presented to one ear and the remainder of the acoustic pattern to the other, listeners report a duplex percept. One side of the duplexity is the same coherent syllable ([spa] or [sta]) that is perceived when the pattern is presented in its original, undivided form; the other is a nonspeech chirp that corresponds to what the transitions sound like in isolation. This phenomenon is here used to determine why, in the case of stops, silence is an important cue. The results show that the silence cue affects the formant transitions differently when, on the one side of the duplex percept, the transitions support the perception of stop consonants, and when, on the other, they are perceived as nonspeech chirps. This indicates that the effectiveness of the silence cue is owing to distinctively phonetic (as against generally auditory) processes.

6.
Sætrevik, B. (2010). The influence of visual information on auditory lateralization. Scandinavian Journal of Psychology. The classic McGurk study showed that presentation of one syllable in the visual modality, simultaneous with a different syllable in the auditory modality, creates the perception of a third syllable that was not presented. The current study presented dichotic syllable pairs (one in each ear) simultaneously with video clips of a mouth pronouncing the syllable from one of the ears, or pronouncing a syllable that was not part of the dichotic pair. When participants were asked to report the auditory stimuli, responses were shifted towards selecting the auditory stimulus from the side that matched the visual stimulus.

7.
The manipulation of voice onset time (VOT) during dichotic listening has provided novel insights regarding brain function. To date, the most common design is the utilisation of four VOT conditions: short-long pairs (SL), where a CV syllable with a short VOT is presented to the left ear and a CV syllable with a long VOT is presented to the right ear, as well as long-short (LS), short-short (SS), and long-long (LL) pairs. Rimol, Eichele, and Hugdahl (2006) first reported that in healthy adults SL pairs elicit the largest right ear advantage (REA) while, in fact, LS pairs elicit a significant left ear advantage (LEA). This VOT effect was replicated by Sandmann et al. (2007). A study of children aged 5-8 years has shown a developmental trajectory whereby long VOTs gradually start to dominate over short VOTs when LS pairs are being presented under dichotic conditions (Westerhausen, Helland, Ofte, & Hugdahl, 2010). Two studies have investigated attentional modulation of the VOT effect in children and adults. The converging evidence from these studies shows that at around 9 years of age children lack the adult-like cognitive flexibility required to exert top-down control over stimulus-driven bottom-up processes (Andersson, Llera, Rimol, & Hugdahl, 2008; Arciuli, Rankine, & Monaghan, 2010). Arciuli et al. further demonstrated that this kind of cognitive flexibility is a predictor of proficiency with complex tasks such as reading. A review of each of these studies, the possible mechanisms underlying the VOT effect, and directions for future research are discussed.

8.
The paradigm of dichotic listening was used to investigate verbal comprehension in the right, so-called “nonverbal,” hemisphere. Verbal commands were presented to the right and left ears in the simultaneous (dichotic) paradigm. There were striking instances, especially when the left hemisphere was occupied with some extraneous task, in which the right hemisphere understood the verbal command and executed the appropriate motor responses. In those instances the left hemisphere gave no overt response. Although the left hemisphere was usually dominant, it can nevertheless be concluded that the right hemisphere not only can understand verbal commands but can also express itself manually by executing actions more complex than object retrieval or pointing. As has been known for some time, the blockage of the ipsilateral pathway seems so complete during dichotic listening in the commissurotomy patient that there is no report of the words in the left ear—only of those presented to the right. At the same time there is normal report when words are presented to the left ear alone. It was found in the present study, however, that this model is too simple and only applies to the verbal response paradigm of dichotic listening. Under circumstances of dichotic presentation where the stimulus in the left ear (ipsilateral pathway) is necessary or important to the left hemisphere for completing a task, words from both pathways are reported. One may conclude that there exists a gating mechanism in each hemisphere that controls the monitoring of each auditory pathway and the degree of ipsilateral suppression.

9.
We report fMRI and behavioral data from 113 subjects on attention and cognitive control using a variant of the classic dichotic listening paradigm with pairwise presentations of consonant-vowel syllables. The syllable stimuli were presented in a block design while subjects were in the MR scanner. The subjects were instructed to pay attention to and report either the left or the right ear stimulus. The hypothesis was that paying attention to the left ear stimulus (FL condition) induces a cognitive conflict, requiring cognitive control processes, not seen when paying attention to the right ear stimulus (FR condition), due to the perceptual salience of the right ear stimulus in a dichotic situation. The FL condition resulted in distinct activations in the left inferior prefrontal gyrus and caudate nucleus, while the right inferior frontal gyrus and caudate were activated in both the FL and FR conditions, and in a non-instructed (NF) baseline condition.

10.
The purpose of this study was to analyze asymmetry in echoic memory as a relevant factor in language perception. Two experimental procedures were used: (1) the presentation of temporally segmented words in fragments of 40, 80, 120, and 240 msec, separated by intervals of 40, 80, 120, and 240 msec, similar to the procedures used by A. W. F. Huggins (1975, Perception & Psychophysics, 18, 149-157); and (2) the presentation of two tones of short duration, "high" and "low," followed by an interference tone at the mean frequency of the two tones, closely following the procedure used by D. W. Massaro (1975, in D. W. Massaro (Ed.), Understanding language, New York: Academic Press). A stereophonic tape recorder was used as follows: one channel was employed for the presentation of the words or tones while, through the other channel, the subject received white noise of equivalent intensity. All subjects carried out the task twice (right ear, left ear), and the order of presentation was counterbalanced. Only the first task showed differences between ears. Implications of the results are analyzed.

11.
The aim of the present study was to investigate the ability of children with attention deficit/hyperactivity disorder-combined subtype (ADHD-C) and predominantly inattentive subtype (ADHD-PI) to direct their attention and to exert cognitive control in a forced attention dichotic listening (DL) task. Twenty-nine medication-naive participants with ADHD-C, 42 with ADHD-PI, and 40 matched healthy controls (HC) between 9 and 16 years were assessed. In the DL task, two different auditory stimuli (syllables) are presented simultaneously, one in each ear. The participants are asked either to report the syllable they hear on each trial with no instruction on focus of attention, or to explicitly focus attention and report either the right- or the left-ear syllable. The DL procedure is presumed to reflect different cognitive processes: perception (nonforced condition/NF), attention (forced-right condition/FR), and cognitive control (forced-left condition/FL). As expected, all three groups had normal perception and attention. The children and adolescents with ADHD-PI showed a significant right-ear advantage also during the FL condition, while those in the ADHD-C group showed no ear advantage and the HC showed a significant left-ear advantage in the FL condition. This suggests that the ADHD subtypes differ in degree of cognitive control impairment. Our results may have implications for further conceptualization, diagnostics, and treatment of ADHD subtypes.

12.
Children between the ages of 5 and 12 years were tested with dichotic listening tests utilizing single-syllable words and random presentations of digits. They produced a higher prevalence of left ear dominance than expected, especially among right-handed children when tested with words. Whether more children demonstrate a left ear advantage (LEA) because of right hemisphere dominance for language or because there is less stability in ear advantage direction at younger ages cannot be fully resolved by this study. When ear advantages were measured by subtracting each child's lower score from the higher score without regard to right or left direction, an age-related trend toward lower measures of ear advantage was evident. This trend was greater for dichotic words than for dichotic digits. Structural factors that may be related to these results and possible influences of attention and verbal workload on the two kinds of dichotic stimuli are discussed.

13.
The majority of studies have demonstrated a right hemisphere (RH) advantage for the perception of emotions. Other studies have found that the involvement of each hemisphere is valence specific, with the RH better at perceiving negative emotions and the left hemisphere (LH) better at perceiving positive emotions [Reuter-Lorenz, P., & Davidson, R. J. (1981). Differential contributions of the two cerebral hemispheres to the perception of happy and sad faces. Neuropsychologia, 19, 609-613]. To account for valence laterality effects in emotion perception, we propose an 'expectancy' hypothesis, which suggests that valence effects are obtained when the top-down expectancy to perceive an emotion outweighs the strength of bottom-up perceptual information enabling the discrimination of an emotion. A dichotic listening task was used to examine alternative explanations of valence effects in emotion perception. Emotional sentences (spoken in a happy or sad tone of voice), and morphed-happy and morphed-sad sentences (which blended a neutral version of the sentence with the pitch of the emotion sentence), were paired with neutral versions of each sentence and presented dichotically. A control condition was also used, consisting of two identical neutral sentences presented dichotically, with one channel arriving before the other by 7 ms. In support of the RH hypothesis, there was a left ear advantage for the perception of sad and happy emotional sentences. However, morphed sentences showed no ear advantage, suggesting that the RH is specialised for the perception of genuine emotions and that a laterality effect may be a useful tool for the detection of fake emotion. Finally, for the control condition we obtained an interaction between the expected emotion and the effect of ear lead. Participants tended to select the ear that received the sentence first when they expected a 'sad' sentence, but not when they expected a 'happy' sentence. The results are discussed in relation to the different theoretical explanations of valence laterality effects in emotion perception.

14.
Liu Li & Peng Danling. Acta Psychologica Sinica, 2004, 36(3), 260-264.
A dichotic listening task was used to investigate the right ear advantage in the processing of Mandarin Chinese lexical tones, with responding hand introduced as an additional factor to probe the mechanism underlying this advantage. The results showed that native Chinese speakers exhibit a right ear (left hemisphere) advantage in processing Mandarin tones, but this advantage is relative: the right hemisphere is also capable of processing tonal information. The results support the direct access model.

15.
Following hemispherectomy, patients performing dichotic listening tasks have great difficulty reporting items presented to the ear ipsilateral to their intact hemisphere. One possible cause is an attentional imbalance which, when competing inputs are received from both sides of space, restricts the patient's attention to the stimulus contralateral to his remaining hemisphere. If this explanation is true, then a similar ipsilateral ear decrement should occur when a competing visual stimulus is presented in the contralateral field. In the present study, however, hemispherectomy patients easily reported a digit presented to the ear ipsilateral to their intact hemisphere despite the concurrent presentation of a visual digit to their contralateral field.

16.
Ear of input as a determinant of pitch-memory interference
Six experiments to evaluate the effect of presentation ear on pitch-memory interference were conducted using undergraduates as listeners. The task was to compare the pitch of two tones that were separated by an interval that included eight interpolated tones; the interpolated tones were presented either ipsilaterally or contralaterally to the presentation ear of the comparison tones. When the ear of interpolated-tone presentation was blocked, and therefore predictable, ipsilateral interference was greater than contralateral. In contrast, when the interpolated-tone presentation ear was varied randomly from trial to trial, ipsilateral and contralateral interference were equivalent. These results are analogous to results found in previously reported auditory backward recognition masking (ABRM) experiments and suggest that the ABRM effect may be due, in part, to pitch-memory interference. Implications for theories of auditory processing and memory are discussed.

17.
Categorical perception of nonspeech chirps and bleats
Mattingly, Liberman, Syrdal, and Halwes (1971) claimed to demonstrate that subjects cannot classify nonspeech chirp and bleat continua, but that they can classify into three categories a syllable place continuum whose variation is physically identical to the nonspeech chirp and bleat continua. This finding for F2 transitions, as well as similar findings for F3 transitions, has been cited as one source of support for theories that different modes or modules underlie the perception of speech and nonspeech acoustic stimuli. However, this pattern of findings for speech and nonspeech continua may be the result of research methods rather than a true difference in subject ability. Using tonal stimuli based on the nonspeech stimuli of Mattingly et al., we found that subjects, with appropriate practice, could classify nonspeech chirp, short bleat, and bleat continua with boundaries equivalent to those of the syllable place continuum of Mattingly et al. With the possible exception of the higher frequency boundary for both our bleats and the Mattingly syllables, ABX discrimination peaks were clearly present and corresponded in location to the given labeling boundary.

18.
The study concerned discriminating between ear of entry and apparent spatial position as possible determinants of lateral asymmetries in the recall of simultaneous speech messages. Apparent localization to the left or right of the median plane was created either through a time difference (0.7 msec), through intensity differences between presentations of the same verbal message at the two ears, or through dichotic presentations. Right-side advantage was observed with all three types of presentation (Experiments 1, 2, and 3). The finding of right-side advantage with stereophony based on a time difference only, in the absence of an intensity difference, cannot be accounted for in terms of an ear advantage and shows that apparent spatial separation of the sources can by itself produce a laterality effect. Differences in the degree of lateral asymmetry between the various conditions were also observed. The findings of Experiments 4 and 5 suggest that these differences are better explained in terms of different impressions of localization of the sound sources than in terms of relative intensity at the "privileged" ear.

19.
Three experiments examined the lateralization of lexical codes in auditory word recognition. In Experiment 1, a word rhyming with a binaurally presented cue word was detected faster when the cue and target were spelled similarly than when they were spelled differently. This orthography effect was larger when the target was presented to the right ear than when it was presented to the left ear. Experiment 2 replicated the interaction between ear of presentation and the orthography effect when the cue and target were spoken in different voices. In Experiment 3, subjects made lexical decisions to pairs of stimuli presented to the left or the right ear. Lexical decision times and the amount of facilitation obtained when the target stimuli were semantically related words did not differ as a function of ear of presentation. The results suggest that the semantic, phonological, and orthographic codes for a word are represented in each hemisphere; however, orthographic and phonological representations are integrated only in the left hemisphere.

20.
American English liquids /r/ and /l/ have been considered intermediate between stop consonants and vowels acoustically, articulatorily, phonologically, and perceptually. Cutting (1974a) found position-dependent ear advantages for liquids in a dichotic listening task: syllable-initial liquids produced significant right ear advantages, while syllable-final liquids produced no reliable ear advantages. The present study employed identification and discrimination tasks to determine whether /r/ and /l/ are perceived differently depending on syllable position when perception is tested by a different method. Fifteen subjects listened to two synthetically produced speech series—/li/ to /ri/ and /il/ to /ir/—in which stepwise variations of the third formant cued the difference in consonant identity. The results indicated that: (1) perception did not differ between syllable positions (in contrast to the dichotic listening results), (2) liquids in both syllable positions were perceived categorically, and (3) discrimination of a nonspeech control series did not account for the perception of the speech sounds.
