Similar Articles
1.
Research on the lateralisation of brain functions for emotion has yielded different results as a function of whether it is the experience, expression, or perceptual processing of emotion that is examined. Further, for the perception of emotion there appear to be differences between the processing of verbal and nonverbal stimuli. The present research examined the hemispheric asymmetry in the processing of verbal stimuli varying in emotional valence. Participants performed a lexical decision task for words varying in affective valence (but equated in terms of arousal) that were presented briefly to the right or left visual field. Participants were significantly faster at recognising positive words presented to the right visual field/left hemisphere. This pattern did not occur for negative words (and was reversed for high arousal negative words). These results suggest that the processing of verbal stimuli varying in emotional valence tends to parallel hemispheric asymmetry in the experience of emotion.

2.
In two experiments, event-related potentials at left and right occipital, parietal, and temporal sites were studied in 16 left-handers (8 male, 8 female) and 16 right-handers (8 male, 8 female). Subjects displayed extreme handedness, had a normal writing hand position, reported no left-handed relatives, and reported no perinatal traumata. In Experiment 1, centrally presented words had to be read, and nonverbal stimuli had to be matched. Condition-dependent asymmetries were found for P340 and SW components. Word-reading elicited an N500 component, whereas figure-matching did not. In Experiment 2, words presented to either the left or right visual field had to be read. N160 measures were larger, and P240, P400, and SW measures were smaller, to words presented in the contralateral visual field compared to words in the ipsilateral field. Sex affected these pathway effects. In both experiments, hand preference did not significantly influence the ERP results.

3.
Three experiments dealing with hemispheric specialization are presented. In Experiment 1, words and/or faces were presented tachistoscopically to the left or right of fixation. Words were more accurately identified in the right visual field, and faces were more accurately identified in the left visual field. A forced-choice error analysis indicated that errors made for word stimuli were most frequently visually similar words, and this effect was particularly pronounced in the left visual field. Two additional experiments supported this finding. On the basis of the results, it was argued that word identification is a multistage process, with visual feature analysis carried out by the right hemisphere and identification and naming by the left hemisphere. In addition, Kinsbourne's attentional model of brain function was rejected in favor of an anatomical model which suggests that simultaneous processing of verbal and nonverbal information does not constrict the attention of either hemisphere.

4.
Recent research has demonstrated that memory for words elicits left hemisphere activation, faces right hemisphere activation, and nameable objects bilateral activation. This pattern of results was attributed to dual coding of information, with the left hemisphere employing a verbal code and the right a nonverbal code. Nameable objects can be encoded either verbally or nonverbally, and this accounts for their bilateral activation. We investigated this hypothesis in a callosotomy patient. Consistent with dual coding, the left hemisphere was superior to the right in memory for words, whereas the right was superior for faces. Contrary to prediction, performance on nameable pictures was not equivalent in the two hemispheres, but rather resulted in a right hemisphere superiority. In addition, memory for pictures was significantly better than for either words or faces. These findings suggest that the dual code hypothesis is an oversimplification of the processing capabilities of the two hemispheres.

5.
The masked-priming lexical decision task has been the paradigm of choice for investigating how readers code for letter identity and position. Insight into the temporal integration of information between prime and target words has pointed out, among other things, that readers do not code for the absolute position of letters. This conception has spurred various accounts of the word recognition process, but the results at present do not favor one account in particular. Thus, employing a new strategy, the present study moves out of the arena of temporal information integration and into the arena of spatial information integration. We present two lexical decision experiments that tested how the processing of six-letter target words is influenced by simultaneously presented flanking stimuli (each stimulus was presented for 150 ms). We manipulated the orthographic relatedness between the targets and flankers, in terms of both letter identity (same/different letters based on the target's outer/inner letters) and letter position (intact/reversed order of letters and of flankers, contiguous/noncontiguous flankers). Target processing was strongly facilitated by same-letter flankers, and this facilitatory effect was modulated by both letter/flanker order and contiguity. However, when the flankers consisted of the target's inner-positioned letters alone, letter order no longer mattered. These findings suggest that readers may code for the relative position of letters using words' edges as spatial points of reference. We conclude that the flanker paradigm provides a fruitful means to investigate letter-position coding in the fovea and parafovea.

6.
Two experiments tested recognition memory for rapidly presented stimuli. In Experiment I, 16 words were presented at exposure times ranging from 25 to 500 msec, followed by a yes-no recognition test. The results showed a strong dependence of memory performance on both exposure time and serial position. In Experiment II, 16 random forms were presented at exposure times ranging from 125 to 2000 msec, followed by a yes-no recognition test. Results for random forms showed that memory performance was strongly dependent on exposure time but not on serial position. Taken together, the results of Experiments I and II suggest qualitative encoding differences between verbal and nonverbal stimuli.

7.
High and low visual imagers, defined as such primarily on the basis of spatial manipulation test performance, were required to identify tachistoscopically presented pictures, concrete words, and abstract words varying in familiarity. Two recognition paradigms were employed: recognition threshold and recognition latency. High imagers were faster in picture recognition under both paradigms when a nonverbal set or strategy was primed, and when pictures were relatively unfamiliar in the threshold paradigm. No relationship was found between imagery ability and word recognition in the visual modality, nor was visual imagery ability related to the auditory recognition of verbal and nonverbal stimuli, such as words and environmental sounds. Commonalities between these findings and others in the imagery ability literature were noted.

8.
Nonverbal (imagery) materials become more effective than verbal materials in aiding memory as age increases; indeed, children under five years have shown superior memory for verbal over nonverbal materials. The present study points out and changes four commonalities in the design of studies finding this latter relationship, in an attempt to determine whether nonverbal materials would prove superior to verbal ones. Four six-item paired-associate lists were presented individually using a study-test procedure. Presentation of the lists involved either pictures, words, or both. Recognition was tested either verbally or visually. Results indicated that the combined visual-verbal study materials produced performance superior to visual materials alone, which in turn were superior to verbal materials alone. Recognition of pictures was superior to recognition of words, regardless of mode of input. The relationship of these results to the procedural changes is discussed, along with implications for current hypotheses of children's use of imagery.

9.
Hemispheric specialization for processing different types of rapidly exposed stimuli was examined in a forced-choice reaction time task. Four conditions of recognition were included: facial emotion, neutral faces, emotional words, and neutral words. Only the facial emotion condition produced a significant visual field advantage (in favor of the left visual field), but this condition did not differ significantly from the neutral face condition's left visual field superiority. The verbal conditions produced significantly decreased latencies with right visual field presentation, whereas left visual field presentation was associated with decreased latencies in the facial conditions. These results suggested that facial recognition and affective processing cannot be separated as independent factors generating right hemisphere superiority for facial emotion perception, and that task parameters (verbal vs. nonverbal) are important influences upon effects in studies of cerebral specialization.

10.
Recently, it was proposed that the Simon effect would result not only from two interfering processes, as classical dual-route models assume, but from three processes. It was argued that priming from the spatial code to the nonspatial code might facilitate the identification of the nonspatial stimulus feature in congruent Simon trials. In the present study, the authors provide evidence that the identification of the nonspatial information can be facilitated by the activation of an associated spatial code. In three experiments, participants first associated centrally presented animal and fruit pictures with spatial responses. Subsequently, participants decided whether laterally presented letter strings were words (animal, fruit, or other words) or nonwords; stimulus position could be congruent or incongruent to the associated spatial code. As hypothesized, animal and fruit words were identified faster at congruent than at incongruent stimulus positions from the association phase. The authors conclude that the activation of the spatial code spreads to the nonspatial code, resulting in facilitated stimulus identification in congruent trials. These results speak to the assumption of a third process involved in the Simon task.

11.
Adult subjects in two experiments were presented with pairs of stimuli that differed in varying degree on an abstract semantic attribute and were required to choose the one with the higher value on the given dimension. Subjects in Experiment 1 chose the more pleasant member of a pair of pictures, concrete nouns, or abstract nouns. Those in Experiment 2, presented with a pair of pictures or concrete nouns, chose the one whose referent had the higher monetary value. Theoretical interest centered on the effects of semantic distance, stimulus mode, and individual differences in imagery and verbal ability on choice time. In both experiments, response times (1) decreased with increases in semantic distance, (2) were faster for pictures than for words (and for concrete than for abstract words in Experiment 1), and (3) were faster for high- than for low-imagery participants. The results are completely consistent with a dual-coding (image vs. verbal) interpretation: pleasantness and value, though conceptually abstract, are attributes of things rather than words, and they are accordingly represented in and processed by a system specialized for dealing with nonverbal information.

12.
Two experiments investigated the nature of the code in which lip-read speech is processed. In Experiment 1, subjects repeated words presented with lip-read and masked auditory components out of synchrony by 600 ms. In one condition the lip-read input preceded the auditory input, and in the second condition the auditory input preceded the lip-read input. Direction of the modality lead did not affect the accuracy of report. Unlike auditory/graphic letter matching (Wood, 1974), the processing code used to match lip-read and auditory stimuli is insensitive to the temporal ordering of the input modalities. In Experiment 2, subjects were presented with two types of lists of colour names: in one list some words were heard, and some read; the other list consisted of heard and lip-read words. When asked to recall words from only one type of input presentation, subjects confused lip-read and heard words more frequently than they confused heard and read words. The results indicate that lip-read and heard speech share a common, non-modality specific, processing stage that excludes graphically presented phonological information.

13.
Language non-selective lexical access in bilinguals has been established mainly using tasks requiring explicit language processing. Here, we show that bilinguals activate native language translations even when words presented in their second language are incidentally processed in a nonverbal, visual search task. Chinese–English bilinguals searched for strings of circles or squares presented together with three English words (i.e., distracters) within a 4-item grid. In the experimental trials, all four locations were occupied by English words, including a critical word that phonologically overlapped with the Chinese word for circle or square when translated into Chinese. The eye-tracking results show that, in the experimental trials, bilinguals looked more frequently and longer at critical than control words, a pattern that was absent in English monolingual controls. We conclude that incidental word processing activates lexical representations of both languages of bilinguals, even when the task does not require explicit language processing.

14.
This study investigated serial recall by congenitally, profoundly deaf signers for visually specified linguistic information presented in their primary language, American Sign Language (ASL), and in printed or fingerspelled English. There were three main findings. First, differences in the serial-position curves across these conditions distinguished the changing-state stimuli from the static stimuli. These differences were a recency advantage and a primacy disadvantage for the ASL signs and fingerspelled English words, relative to the printed English words. Second, the deaf subjects, who were college students and graduates, used a sign-based code to recall ASL signs, but not to recall English words; this result suggests that well-educated deaf signers do not translate into their primary language when the information to be recalled is in English. Finally, mean recall of the deaf subjects for ordered lists of ASL signs and fingerspelled and printed English words was significantly less than that of hearing control subjects for the printed words; this difference may be explained by the particular efficacy of a speech-based code used by hearing individuals for retention of ordered linguistic information and by the relatively limited speech experience of congenitally, profoundly deaf individuals.

15.
This study was aimed at testing a new approach to the examination of functional laterality based on hemispheric specialization. The subjects had to perform verbal (words/nonwords) and nonverbal (similar/different patterns) discrimination. The separation of the two hemispheres during information processing was realized by requiring a simultaneous response of both index fingers. The obtained overall reaction times (RTs) were faster for verbal than for pattern tasks. When only the faster of the two index-finger responses was considered, the right index finger turned out to be faster on verbal tasks, whereas the left one dominated on pattern tasks. According to the hypothesis that the faster hand indicates the more active (contralateral) hemisphere, it can be assumed that words are responded to more quickly when processed in the left hemisphere, whereas patterns are responded to more quickly when the right hemisphere is active. These results suggest that each hemisphere may be capable of processing verbal and nonverbal material; information processing, however, is faster in the more adept one.

16.
Two experiments were conducted to assess the referential function of chimpanzee (Pan troglodytes) gestures to obtain food. The chimpanzees received 1 trial per condition. In Experiment 1 (N = 101), in full view of the chimpanzee, a banana was placed on top of 1 of 2 inverted buckets or was hidden underneath 1 of the buckets. In Experiment 2 (N = 35), 4 conditions were presented in constant order: (a) no food, no observer; (b) no food, observer present; (c) food present, no observer; and (d) food present, observer present. Gestures and visual orienting were used socially and referentially. The capacity for nonverbal reference may predate the Hominidae-Pongidae split, and the development of nonverbal reference may be independent of human species-specific adaptations for speech.

17.
In affective Simon studies, participants must select between a positive and a negative response on the basis of a nonaffective stimulus feature (i.e., the relevant stimulus feature) while ignoring the valence of the presented stimuli (i.e., the irrelevant stimulus feature). De Houwer and Eelen (1998) showed that the time to select the correct response is influenced by the match between the valence of the response and the (irrelevant) valence of the stimulus. In the affective Simon studies reported until now, only words were used as stimuli, and the relevant stimulus feature was always the grammatical category of the words. We report four experiments in which we examined the generality of the affective Simon effect. Significant affective Simon effects were found when the semantic category, grammatical category, or letter case of words was relevant, when the semantic category of photographed objects was relevant, and when participants were asked to give nonverbal approach or avoidance responses on the basis of the grammatical category of words. Results also showed that the magnitude of the affective Simon effect depended on the nature of the relevant feature.

18.
What speaks louder, false words or false action? Raters assessed the anxiety level of 10 actors portraying their actual anxiety level and simulated displays of high anxiety. Raters were required to base judgments on either video cues alone or audio cues alone. Findings indicate that false words speak louder than false action, with audio-based judgments generating greater judgmental error in both straight and dissembled anxiety conditions. Although raters expressed equal confidence in judgments based on either verbal or nonverbal cues, results indicated that verbal cues played a larger role in emotional deceit. Differences between real and simulated anxiety cues were delineated, suggesting ways of detecting emotional deception. Results were discussed in light of current thought regarding channel contribution in deception.

19.
Ss in three experiments searched through an array of pictures or words for a target item that had been presented as a picture or a word. In Experiments I and II, the pictures were line drawings of familiar objects and the words were their printed labels; in Experiment III, the stimuli were photographs of the faces of famous people and their corresponding printed names. Search times in Experiments I and II were consistently faster when the array items were pictures than when they were words, regardless of the mode of the target items. Search was also faster with pictures than with words as targets when the search array also consisted of pictures, but target mode had no consistent effect with words as array items. Experiment III yielded a completely different pattern of results: Search time with names as targets and faces as search array items was significantly slower than in the other three conditions, which did not differ from each other. Considered in relation to several theories, the results are most consistent with a dual-coding interpretation. That is, items that are cognitively represented both verbally and as nonverbal images can be searched and compared in either mode, depending on the demands of the task. The mode actually used depends on whether the search must be conducted through an array of pictures or words.
