Similar documents
20 similar documents found (search time: 31 ms)
1.
Serial-verbal short-term memory is impaired by irrelevant sound, particularly when the sound changes acoustically (the changing-state effect). In contrast, short-term recall of semantic information is impaired only by the semanticity of irrelevant speech, particularly when it is semantically related to the target memory items (the between-sequence semantic similarity effect). Previous research indicates that the changing-state effect is larger when the sound is presented to the left ear in comparison to the right ear, the left ear disadvantage. In this paper, we report a novel finding whereby the between-sequence semantic similarity effect is larger when the irrelevant speech is presented to the right ear in comparison to the left ear, but this right ear disadvantage is found only when meaning is the basis of recall (Experiments 1 and 3), not when order is the basis of recall (Experiment 2). Our results complement previous research on hemispheric asymmetry effects in cross-modal auditory distraction by demonstrating a role for the left hemisphere in semantic auditory distraction.

2.
Subjects were asked to judge the position of a click that occurred during a short piece of music. Clicks were, on average, judged to be later than their actual position. The click and the music were presented through headphones to different ears, and the clicks were judged to be significantly later if they arrived at the right ear rather than at the left. There was also a significant tendency for clicks to be attracted to phrase boundaries in the music. These last two results are similar to those from experiments with a click during speech, but the late judgments of a click in music contrast with the early judgments of a click in speech.

3.
The task is to estimate on which syllable of a spoken sentence a click was superimposed. Experiment I confirms Ladefoged and Broadbent's finding of a systematic tendency to prepose the click (negative displacement), but shows also that the tendency is decreased when prior knowledge of the sentence is provided. Experiment II shows that acoustic prior knowledge is not necessary to produce the decrease and that it occurs also with textual prior knowledge. Experiment III shows that the negative displacement is not eliminated by short-term practice on the task, as Fodor and Bever contended. The effect of prior knowledge is inconsistent with the explanation of negative displacement in terms of attention demands suggested by Ladefoged and Broadbent. It is argued that this explanation was unnecessary, and that negative displacement can be expected in a system which analyses speech by discrete units.

4.
The effect of attention on cerebral dominance and the asymmetry between left and right ears was investigated using a selective listening task. Right-handed subjects were presented with simultaneous dichotic speech messages; they shadowed one message in either the right or left ear and at the same time tapped with either the right or the left hand when they heard a specified target word in either message. The ear asymmetry was shown only when subjects' attention was focused on some other aspect of the task: they tapped to more targets in the right ear, but only when these came in the non-shadowed message; they made more shadowing errors with the left ear message, but chiefly for non-target words. The verbal response of shadowing showed the right ear dominance more clearly than the manual response of tapping. Tapping with the left hand interfered more with shadowing than tapping with the right hand, but there was little correlation between the degree of hand and of ear asymmetry over individual subjects. The results support the idea that the right ear dominance is primarily a quantitative difference in the distribution of attention to left and right ear inputs reaching the left hemisphere speech areas. This affects both the efficiency of speech perception and the degree of response competition between simultaneous verbal and manual responses.

5.
Previous research has shown a tendency for people to imagine simple sentences as evolving from left to right, with the sentence subject being located to the left of the object. In two cross-cultural studies comparing Italian and Arab participants, we investigated whether this bias is a function of hemispheric specialization or of directionality of written language (left to right in Italian, right to left in Arabic). Both studies found a reversal of directional bias in Arabs. Italians tended to position the subject to the left of the object, and Arabs tended to position the subject to the right of the object (Experiment 1); both groups were facilitated in a sentence-picture matching task when the subject was drawn in the position that it would usually occupy in the written language (left for Italians, right for Arabs; Experiment 2). In Experiment 2, an additional, language-independent facilitation was observed when action evolved from left to right, suggesting that both hemispheric specialization and scanning habit affect visual imaging.

6.
郑茜  张亭亭  李量  范宁  杨志刚 《心理学报》2023,55(2):177-191
The emotional information in speech (emotional prosody and emotional semantics) can release target speech from auditory masking, but the mechanism of this unmasking remains unclear. In two experiments using a perceived spatial separation paradigm and manipulating the type of masker, we examined how emotional prosody and emotional semantics release speech from informational masking. Emotional prosody produced release from masking both under perceptual informational masking and under combined perceptual and cognitive informational masking. Emotional semantics produced no release under perceptual informational masking alone, but did under combined perceptual and cognitive informational masking. These results indicate that emotional prosody and emotional semantics unmask speech through different mechanisms. Emotional prosody preferentially captures the listener's attention: it can overcome the perceptual interference created by the masker, but does little against interference from the masker's content. Emotional semantics preferentially recruits the listener's cognitive processing resources, releasing speech from cognitive, but not perceptual, informational masking.

7.
Three experiments examined the lateralization of lexical codes in auditory word recognition. In Experiment 1 a word rhyming with a binaurally presented cue word was detected faster when the cue and target were spelled similarly than when they were spelled differently. This orthography effect was larger when the target was presented to the right ear than when it was presented to the left ear. Experiment 2 replicated the interaction between ear of presentation and orthography effect when the cue and target were spoken in different voices. In Experiment 3, subjects made lexical decisions to pairs of stimuli presented to the left or the right ear. Lexical decision times and the amount of facilitation which obtained when the target stimuli were semantically related words did not differ as a function of ear of presentation. The results suggest that the semantic, phonological, and orthographic codes for a word are represented in each hemisphere; however, orthographic and phonological representations are integrated only in the left hemisphere.

8.
In two experiments, readers' use of spatial memory was examined by asking them to determine whether an individually shown probe word had appeared in a previously read sentence (Experiment 1) or had occupied a right or left sentence location (Experiment 2). Under these conditions, eye movements during the classification task were generally directed toward the right, irrespective of the location of the relevant target in the previously read sentence. In two additional experiments, readers' knowledge of prior sentence content was examined either without (Experiment 3) or with (Experiment 4) an explicit instruction to move the eyes to a target word in that sentence. Although regressions into the prior sentence were generally directed toward the target, they rarely reached it. In the absence of accurate spatial memories, readers reached previously read target words in two distinct steps--one that moved the eyes in the general vicinity of the target, and one that homed in on it.

9.
The third-formant (F3) transition of a three-formant /da/ or /ga/ syllable was extracted and replaced by sine-wave transitions that followed the F3 centre frequency. The syllable without the F3 transition (base) was always presented at the left ear, and a /da/ (falling) or /ga/ (rising) sine-wave transition could be presented at either the left, the right, or both ears. The listeners perceived the base as a syllable, and the sine-wave transition as a non-speech whistle, which was lateralized near the left ear, the right ear, or the middle of the head, respectively. In Experiment 1, the sine-wave transition strongly influenced the identity of the syllable only when it was lateralized at the same ear as the base (left ear). Phonetic integration between the base and the transitions became weak, but was not completely eliminated, when the latter was perceived near the middle of the head or at the opposite ear as the base (right ear). The second experiment replicated these findings by using duplex stimuli in which the level of the sine-wave transitions was such that the subjects could not reliably tell whether a /da/ or a /ga/ transition was present at the same ear as the base. This condition was introduced in order to control for the possibility that the subjects could have identified the syllables by associating a rising or falling transition presented at the left ear with a /da/ or /ga/ percept. Alternative suggestions about the relation between speech and non-speech perceptual processes are discussed on the basis of these results.
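As an illustration of the stimulus construction described above (a sketch, not taken from the paper: the frequency values, duration, and linear sweep shape are illustrative assumptions), a rising or falling sine-wave transition standing in for an isolated F3 transition can be synthesized as a short frequency sweep:

```python
import math

def sine_transition(f_start_hz, f_end_hz, dur_s=0.05, fs=44100):
    """Linear-frequency sine sweep standing in for an isolated F3 transition.

    A falling sweep (f_start_hz > f_end_hz) approximates the /da/ cue and a
    rising sweep approximates the /ga/ cue; heard in isolation, such a sweep
    sounds like a non-speech whistle. Returns a list of samples in [-1, 1].
    """
    n = int(dur_s * fs)
    out, phase = [], 0.0
    for i in range(n):
        # Instantaneous frequency interpolated linearly across the sweep.
        f = f_start_hz + (f_end_hz - f_start_hz) * i / n
        phase += 2 * math.pi * f / fs   # accumulate phase so the sweep is smooth
        out.append(math.sin(phase))
    return out

falling = sine_transition(2700, 2200)   # /da/-like (illustrative frequencies)
rising = sine_transition(2200, 2700)    # /ga/-like
```

Routing such a sweep to the left channel, the right channel, or both would then reproduce the three lateralization conditions of Experiment 1.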

10.
Ear advantages for CV syllables were determined for 28 right-handed individuals in a target monitoring dichotic task. In addition, ear dominance for dichotically presented tones was determined when the frequency difference of the two tones was small compared to the center frequency and when the frequency difference of the tones was larger. On all three tasks, subjects provided subjective separability ratings as measures of the spatial complexity of the dichotic stimuli. The results indicated a robust right ear advantage (REA) for the CV syllables and a left ear dominance on the two tone tasks, with a significant shift toward right ear dominance when the frequency difference of the tones was large. Although separability ratings for the group data indicated an increase in the perceived spatial separation of the components of the tone complex across the two tone tasks, the separability judgment ratings and the ear dominance scores were not correlated for either tone task. A significant correlation, however, was evidenced between the laterality measure for speech and the judgment of separability, indicating that a REA of increased magnitude is associated with more clearly localized and spatially separate speech sounds. Finally, the dominance scores on the two tone tasks were uncorrelated with the laterality measures of the speech task, whereas the scores on the tone tasks were highly correlated. The results suggest that spatial complexity does play a role in the emergence of the REA for speech. However, the failure to find a relationship between speech and nonspeech tasks suggests that all perceptual asymmetries observed with dichotic stimuli cannot be accounted for by a single theoretical explanation.

11.
Two experiments were conducted in which monaural clicks were presented to the right or left ear preceded by binaural verbal (Experiment 1) and musical (Experiment 2) warnings. After the "neutral" warnings, the clicks could be presented to the right or left ear equally often (50%); after the warnings which directed attention to the left or right ear, the clicks could be presented to either the "expected" (67%) or the "unexpected" (33%) ear. In Experiment 1 there was a cost effect for the "unexpected" ear and reaction times were significantly faster when the clicks were presented to the right ear. In Experiment 2, the musical warnings brought about a cost effect while no significant ear advantage was observed.

12.
Ross's 1981 model of right-hemisphere processing of affective speech components was investigated within the dichotic paradigm. A spoken sentence constant in semantic content but varying among mad, sad, and glad emotional tones was presented to 45 male and 45 female college students. Duration of stimuli was controlled by adjusting digital sound samples to a uniform length. No effect of sex emerged, but the hypothesized ear advantage was found: more correct identifications were made with the left ear than with the right. A main effect of prosody was also observed, with significantly poorer performance in identifying the sad tone; in addition, sad condition scores for the right ear were more greatly depressed than those for the left ear, resulting in a significant interaction of ear and prosody.

13.
The study concerned discriminating between ear of entry and apparent spatial position as possible determinants of lateral asymmetries in the recall of simultaneous speech messages. Apparent localization to the left or right of the median plane was created either through a time difference (.7 msec), through intensity differences between presentations of the same verbal message at the two ears, or through dichotic presentations. Right-side advantage was observed with the three types of presentation (Experiments 1, 2, and 3). The finding of right-side advantage with stereophony based on a time difference only, in the absence of intensity difference, cannot be accounted for in terms of an ear advantage and shows that apparent spatial separation of the sources can by itself produce a laterality effect. Differences in the degree of lateral asymmetry between the various conditions were also observed. The findings of Experiments 4 and 5 suggest that these differences are better explained in terms of different impressions of localization of the sound sources than in terms of relative intensity at the "privileged" ear.
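The time-difference manipulation above lateralizes a sound purely by delaying one channel relative to the other. A minimal sketch of how such a stimulus could be built (illustrative only; the tone frequency, duration, and sample rate are assumptions, not parameters from the study) shows that a .7 msec interaural time difference is only about 31 samples at 44.1 kHz:

```python
import math

def itd_stereo(freq_hz=500.0, dur_s=0.05, itd_s=0.0007, fs=44100):
    """Build a (left, right) stereo pair where the right channel leads the
    left by the interaural time difference itd_s, so the identical signal
    is perceived as lateralized toward the right despite equal intensity."""
    n = int(dur_s * fs)
    delay = int(round(itd_s * fs))   # 0.7 ms -> 31 samples at 44.1 kHz
    tone = [math.sin(2 * math.pi * freq_hz * t / fs) for t in range(n)]
    right = tone + [0.0] * delay     # leading channel (pad tail to equal length)
    left = [0.0] * delay + tone      # lagging channel (pad head)
    return left, right, delay

left, right, delay = itd_stereo()
```

Because both ears receive the same waveform at the same level, any recall asymmetry for such stimuli cannot be an ear-of-entry effect, which is exactly the logic of the study's time-difference condition.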

14.
The present study compared the processing of direction for up and down arrows and for left and right arrows in visual displays. Experiment 1 demonstrated that it is more difficult to deal with left and right than with up and down when the two directions must be discriminated but not when they must simply be oriented to. Experiments 2 and 3 showed that telling left from right is harder regardless of whether the responses are manual or verbal. Experiment 4 showed that left-right discriminations take longer than up-down discriminations for judgments of position as well as direction. In Experiment 5 it was found that position information can intrude on direction judgments both within a dimension (e.g., a left arrow to the left of fixation is judged faster than a left arrow to the right of fixation) and across dimensions (e.g., judging vertically positioned left and right arrows is more difficult than judging horizontally positioned left and right arrows). There was indirect evidence in these experiments that although the spatial codes for up and down are symmetrical, the codes for left and right may be less so; this in turn could account for the greater difficulty of discriminating left from right.

15.
Research on visuospatial memory has shown that egocentric (subject-to-object) and allocentric (object-to-object) reference frames are connected to categorical (non-metric) and coordinate (metric) spatial relations, and that motor resources are recruited more when processing spatial information in peripersonal (within arm's reach) than in extrapersonal (beyond arm's reach) space. In order to perform our daily-life activities, these spatial components cooperate along a continuum from recognition-related (e.g., recognizing stimuli) to action-related (e.g., reaching stimuli) purposes. Therefore, it is possible that some types of spatial representations rely more on action/motor processes than others. Here, we explored the role of motor resources in the combinations of these visuospatial memory components. A motor interference paradigm was adopted in which participants had their arms bent behind their back or free during a spatial memory task. This task consisted in memorizing triads of objects and then verbally judging what was the object: (1) closest to/farthest from the participant (egocentric coordinate); (2) to the right/left of the participant (egocentric categorical); (3) closest to/farthest from a target object (allocentric coordinate); and (4) on the right/left of a target object (allocentric categorical). The triads appeared in participants' peripersonal (Experiment 1) or extrapersonal (Experiment 2) space. The results of Experiment 1 showed that motor interference selectively damaged egocentric-coordinate judgements but not the other spatial combinations. The results of Experiment 2 showed that the interference effect disappeared when the objects were in the extrapersonal space. A third follow-up study using a within-subject design confirmed the overall pattern of results. Our findings provide evidence that motor resources play an important role in the combination of coordinate spatial relations and egocentric representations in peripersonal space.

16.
This study investigated the effect of exogenous spatial attention on auditory information processing. In Experiments 1, 2 and 3, temporal order judgment tasks were performed to examine the effect. In Experiments 1 and 2, a cue tone was presented to either the left or right ear, followed by sequential presentation of two target tones. The subjects judged the order of presentation of the target tones. The results showed that subjects heard both tones simultaneously when the target tone presented on the same side as the cue tone came after the target tone on the opposite side. This indicates that exogenous spatial attention was aroused by the cue tone and facilitated subsequent auditory information processing. Experiment 3 examined whether both cue position and frequency influence the resulting information processing. The same effect of spatial attention was observed, but the effect of attention to a certain frequency was only partially observed. In Experiment 4, a tone fusion judgment task was performed to examine whether the effect of spatial attention occurred in the initial stages of hearing. The result suggests that the effect occurred in the later stages of hearing.

17.
Two experiments investigated relative spatial coding in the Simon effect. It was hypothesized that relative spatial coding is carried out with reference to the position of the focus of visual attention. The spatial code for an imperative stimulus presented exactly at the position of focal attention should be neutral on the horizontal plane, and therefore no Simon effect should be observed. However, when the imperative stimulus is presented to the left or to the right of the current position of focal attention, the spatial code should not be neutral, thus producing a Simon effect. In both experiments, focal attention was manipulated either by a peripherally presented onset precue (Experiment 1) or by a centrally presented symbolic precue (Experiment 2). Results showed that the Simon effect was substantially reduced in both experiments when a valid precue preceded the imperative stimulus just in time to conclude refocusing of attention to the position of the imperative stimulus before it was presented. However, conditions with neutral precues yielded a normally sized Simon effect. In both experiments, the Simon effect decreased as the SOA grew when the precue was valid. At least for the Simon effect, the results can be interpreted as evidence that relative spatial coding is functionally related to the position of the focus of attention.

18.
In four experiments, we used a sentence-priming paradigm to examine matching suppression and matching facilitation in the spatial metaphor of moral concepts. In Experiment 1, participants read a sentence containing vertical spatial information and then immediately judged whether a subsequently presented word was moral or immoral. Experiments 2 and 3 asked participants to attend, respectively, to the end point or the start point of the spatial information in the sentence. Experiment 4 introduced a delayed response, requiring participants to make the word judgment 4 seconds after the sentence disappeared. The first three experiments all showed clear non-bound matching suppression of the moral-space metaphor, i.e., "down-moral" or "up-immoral," whereas Experiment 4 showed non-bound matching facilitation, i.e., "down-immoral." These results confirm that processing sentences with spatial information can indeed activate the moral metaphor. However, because sentence processing takes a relatively long time, if the spatial information and the moral concept compete for the same resources, matching suppression of the moral-space metaphor results. Given enough time to process the spatial information in the sentence, subsequent moral-concept processing is primed and matching facilitation appears. Thus, competition for, and activation of, processing resources is the key to whether the spatial metaphor of moral concepts shows matching suppression or matching facilitation.

19.
The relative functional significance of attention shifts and attentional zooming for the coding of stimulus position in spatial compatibility tasks is demonstrated by proposing and testing experimentally a tentative explanation of the absence of a Simon effect in Experiment 3 of Umiltà and Liotti (1987). It is assumed that the neutral point of the spatial frame of reference for coding spatial position is at the position where attention is focussed immediately before exposition of the stimulus pattern. If a stimulus pattern is exposed to the right or the left of this position a spatial compatibility effect can be observed when the stimulus-response pairing is incompatible. Generalizing from this, one can say that a spatial compatibility effect will be observed if the last step in attentional focussing of the stimulus attribute specifying the response is a horizontal or a vertical attention shift. If the last step in focussing is attentional zooming (change in the representational level attended to), the stimulus pattern is localized at the horizontal and the vertical positions where the last attention shift had positioned the focus. In this case the spatial code is neutral on these dimensions and so no spatial compatibility effect should result. To test this model we conducted two experiments. Experiment 1 replicated the finding of Umiltà and Liotti that there is no Simon effect in the condition with no delay between a positional cue (two small boxes on the left or right of a fixation cross) and the imperative stimulus, whereas in the condition with a delay of 500 ms a Simon effect was observed. In a comparison condition with a single, rather large cue instead of two small boxes (forcing attention to zoom in), no Simon effect was observed under either delay condition. Experiment 2 used a spatial compatibility task proper with the same experimental conditions as Experiment 1. But in contrast to those of Experiment 1, the results show strong compatibility effects in all cue and delay conditions. The absence of a Simon effect in some experimental conditions in Experiment 1 and the presence of a spatial compatibility effect proper in all conditions in Experiment 2 are consistently accounted for with the proposed attentional explanation of spatial coding and spatial compatibility effects.

20.
Stimulus and task factors as determinants of ear advantages
Two dichotic experiments are reported which dissociate stimulus and task factors in perceptual lateralization. With only trajectories of fundamental frequency as a distinguishing cue, perception of the voicing of stop consonants gives a right ear advantage. Identification of the emotional tone of a sentence of natural speech gives a left ear advantage. If such parameters as fundamental frequency variation or overall naturalness of the speech material determined the direction of an ear advantage, the reverse pattern of results would have been obtained. Hence the task appears more important than the nature of the stimulus.


Copyright©北京勤云科技发展有限公司  京ICP备09084417号