Similar Articles
20 similar articles were found (search time: 15 ms).
1.
A longstanding issue is whether perception and mental imagery share similar cognitive and neural mechanisms. To cast further light on this problem, we compared the effects of real and mentally generated visual stimuli on simple reaction time (RT). In five experiments, we tested the effects of differences in luminance, contrast, spatial frequency, motion, and orientation. With the intriguing exception of spatial frequency, in all other tasks perception and imagery showed qualitatively similar effects. An increase in luminance, contrast, and visual motion yielded a decrease in RT for both visually presented and imagined stimuli. In contrast, gratings of low spatial frequency were responded to more quickly than those of higher spatial frequency only for visually presented stimuli. Thus, the present study shows that basic stimulus variables exert similar effects on visual RT whether the stimuli are retinally presented or imagined. Of course, this evidence does not necessarily imply analogous mechanisms for perception and imagery, and a note of caution in this respect is suggested by the large difference in RT between the two operations. However, the present results undoubtedly provide support for some overlap between the structural representation of perception and imagery.

2.
A longstanding issue is whether perception and mental imagery share similar cognitive and neural mechanisms. To cast further light on this problem, we compared the effects of real and mentally generated visual stimuli on simple reaction time (RT). In five experiments, we tested the effects of differences in luminance, contrast, spatial frequency, motion, and orientation. With the intriguing exception of spatial frequency, in all other tasks perception and imagery showed qualitatively similar effects. An increase in luminance, contrast, and visual motion yielded a decrease in RT for both visually presented and imagined stimuli. In contrast, gratings of low spatial frequency were responded to more quickly than those of higher spatial frequency only for visually presented stimuli. Thus, the present study shows that basic stimulus variables exert similar effects on visual RT whether the stimuli are retinally presented or imagined. Of course, this evidence does not necessarily imply analogous mechanisms for perception and imagery, and a note of caution in this respect is suggested by the large difference in RT between the two operations. However, the present results undoubtedly provide support for some overlap between the structural representation of perception and imagery.

3.
It has been claimed both that (1) imagery selectively interferes with perception (because images can be confused with similar stimuli) and that (2) imagery selectively facilitates perception (because images recruit attention for similar stimuli). However, the evidence for these claims can be accounted for without postulating either image-caused confusions or attentional set. Interference could be caused by general and modality-specific capacity demands of imaging, and facilitation, by image-caused eye fixations. The experiment reported here simultaneously tested these two apparently conflicting claims about the effect of imagery on perception in a way that rules out these alternative explanations. Subjects participated in a two-alternative forced-choice auditory signal detection task in which the target signal was either the same frequency as an auditory image or a different frequency. The possible effects of confusion and attention were separated by varying the temporal relationship between the image and the observation intervals, since an image can only be confused with a simultaneous signal. We found selective facilitation (lower thresholds) for signals of the same frequency as the image relative to signals of a different frequency, implying attention recruitment; we found no selective interference, implying the absence of confusion. These results also imply that frequency information is represented in images in a form that can interact with perceptual representations.

4.
Mental imagery, perception, and memory form an integrated cognitive system. Because perception and memory supply the material from which images are generated, the three share similar representations and activate broadly overlapping brain regions. Nevertheless, they differ to some extent in their cognitive processing. Compared with perception, imagery is encoded more abstractly, depends more heavily on past experience, and is poorer at handling detail; compared with memory, imagery is more susceptible to interference from irrelevant information. Future research on the relationships among the three should examine how images of different origins and types relate to perception and memory, as well as the role that working memory plays in these relationships.

5.
A 47-year-old man with a left temporo-occipital infarct in the area of the posterior cerebral artery is presented. The neuropsychological examination did not reveal aphasia or gross mental deficits. The patient presented with alexia without agraphia and color agnosia, but few visual perceptual deficits. The main impairment was in confrontation naming; he was incapable of naming objects and pictures, not from lack of recognition (excluding visual agnosia) but from lack of access to the appropriate word (optic aphasia). The patient also exhibited a deficit in the evocation of gesture from the visual presentation of an object (optic apraxia), a difficulty in "conjuring up" visual images of objects (impaired visual imagery), and a loss of dreams. The fundamental deficit of this patient is tentatively explained in terms of visuoverbal and visuogestural disconnection and a deficit of mental imagery.

6.
How mental images are represented has long been a central question in psychology, and brain-imaging techniques have played a major role in addressing it. Taking the role of primary visual cortex (V1) in studies of imagery representation as its main thread, this paper systematically reviews the core issues in the imaging-based debate over the nature of imagery and analyzes the internal logic by which these debates have evolved. On this basis, it identifies the key questions that imagery research still needs to resolve, with the aim of advancing further work on these problems.

7.
The study of visuospatial imagery processes in totally congenitally blind people makes it possible to understand the specific contribution of visual experience to imagery processes. We argue that blind people may have visuospatial imagery processes, but they suffer from some capacity limitations. Similar, although smaller, limitations and individual differences may be found in sighted people. Visuospatial imagery capacity was explored by asking people to follow an imaginary pathway through either two- or three-dimensional matrices of different complexity. The blind appear to use specific visuospatial processes in this task (Experiments 2 and 3), but they have difficulty with three-dimensional matrices; sighted people have no such difficulty with three-dimensional matrices (Experiment 1). On the other hand, when a three-dimensional pattern exceeded sighted capacity, the blind and sighted showed similar patterns of errors. Subsequent analyses suggested that both visuospatial processes and verbal mediation were used.

8.
Based on converging evidence that visual and olfactory images are key components of food cravings, the authors tested a central prediction of the elaborated intrusion theory of desire, that mutual competition between modality-specific tasks and desire-related imagery can suppress such cravings. In each of Experiments 1 and 2, 90 undergraduate women underwent an imaginal food craving induction protocol and then completed either a visual, auditory, or olfactory imagery task. As predicted, the visual and olfactory imagery tasks were superior to the auditory imagery task in reducing participants' craving for food in general (Experiment 1) and for chocolate in particular (Experiment 2). Experiment 3 replicated these findings in a sample of 96 women using a nonimagery craving induction procedure involving a combination of chocolate deprivation and exposure to chocolate cues. Thus, imagery techniques in the visual or olfactory domain hold promise for treating problematic cravings in disordered eating populations.

9.
The hippocampus and memory for "what," "where," and "when"
Previous studies have indicated that nonhuman animals might have a capacity for episodic-like recall reflected in memory for "what" events that happened "where" and "when". These studies did not identify the brain structures that are critical to this capacity. Here we trained rats to remember single training episodes, each composed of a series of odors presented in different places on an open field. Additional assessments examined the individual contributions of odor and spatial cues to judgments about the order of events. The results indicated that normal rats used a combination of spatial ("where") and olfactory ("what") cues to distinguish "when" events occurred. Rats with lesions of the hippocampus failed in using combinations of spatial and olfactory cues, even as evidence from probe tests and initial sampling behavior indicated spared capacities for perception of spatial and odor cues, as well as some form of memory for those individual cues. These findings indicate that rats integrate "what," "where," and "when" information in memory for single experiences, and that the hippocampus is critical to this capacity.

10.
It is not known why people move their eyes when engaged in non-visual cognition. The current study tested the hypothesis that differences in saccadic eye movement rate (EMR) during non-visual cognitive tasks reflect different requirements for searching long-term memory. Participants performed non-visual tasks requiring relatively low or high long-term memory retrieval while eye movements were recorded. In three experiments, EMR was substantially lower for low-retrieval than for high-retrieval tasks, including in an eyes closed condition in Experiment 3. Neither visual imagery nor between-task difficulty was related to EMR, although there was some evidence for a minor effect of within-task difficulty. Comparison of task-related EMRs to EMR during a no-task waiting period suggests that eye movements may be suppressed or activated depending on task requirements. We discuss a number of possible interpretations of saccadic eye movements during non-visual cognition and propose an evolutionary model that links these eye movements to memory search through an elaboration of circuitry involved in visual perception.

11.
We assess evidence and arguments brought forward by Tallal (e.g., 1980) and by the target paper (Farmer & Klein, 1995) for a general deficit in auditory temporal perception as the source of phonological deficits in impaired readers. We argue that (1) errors in temporal order judgment of both syllables and tones reflect difficulty in identifying similar (and so readily confusable) stimuli rapidly, not in judging their temporal order; (2) difficulty in identifying similar syllables or tones rapidly stems from independent deficits in speech and nonspeech discriminative capacity, not from a general deficit in rate of auditory perception; and (3) the results of dichotic experiments and studies of aphasics purporting to demonstrate left-hemisphere specialization for nonspeech auditory temporal perception are inconclusive. The paper supports its arguments with data from a recent control study. We conclude that, on the available evidence, the phonological deficit of impaired readers cannot be traced to any co-occurring nonspeech deficits so far observed and is phonetic in origin, but that its full nature, origin, and extent remain to be determined.

12.
It is shown that an irrelevant visual perception interferes more with verbal learning by means of imagery than does an irrelevant auditory perception. The relative interfering effects of these perceptions were reversed in a verbal learning task involving highly abstract materials. Such results implicate the existence of a true visual component in imaginal mediation. A theoretical model is presented in which a visual system and a verbal-auditory system are distinguished. The visual system controls visual perception and visual imagination. The verbal-auditory system controls auditory perception, auditory imagination, internal verbal representation, and speech. Attention can be more easily divided between the two systems than within either one taken by itself. Furthermore, the visual and verbal-auditory systems are functionally linked by information recoding operations. The application of mnemonic imagery appears to involve a recoding of initially verbal information into visual form, and then the encoding of a primarily visual schema into memory. During recall, the schema is decoded as a visual image, and then recoded once again into the verbal-auditory system. Evidence for such transformations is provided not only by the interference data, but also by an analysis of recall errors made by subjects using mnemonic imagery.

13.
14.
The interaction between perceptual and imaginal processes was investigated with the use of the repetition-priming paradigm. The idea is that the overlap between processes employed in imagery and processes employed in perception will be reflected in the amount of transfer between a first encounter with an item that engages perception or imagery and a second encounter that engages perception or imagery. The greater the overlap between perception and imagery, the greater the transfer between them should be. The results showed that perceptual and imaginal processes transferred maximally to themselves; that is, maximum transfer occurred when an item was processed in the same way on both encounters. Further, prior use of perceptual processes transferred to the use of imaginal processes, but not vice versa. These results are discussed as they relate to the interactive view of imagery, which holds that imagery relies on many of the same mental structures and processes as perception. Some of these data were presented at the Second Workshop on Imagery and Cognition, Padua, Italy, in September 1988.

15.
Research indicates that guided imagery experiences can be mistaken for actual experiences under some circumstances. One explanation for such effects is that memory representations of guided imagery and actual events contain similar phenomenal characteristics such as sensory and contextual details, making the source of the events less distinguishable. This study examined this prediction, comparing memory characteristic ratings for guided imagery experiences with those for memories of perceived and natural imagery events (e.g., fantasies). Results replicated previous findings for the difference between perceived and natural imagery memories. Guided imagery ratings were also lower than those for perceived memories for most sensory details (sound, smell, and taste) and temporal details. However, guided imagery ratings for reflective details were lower than both perceived and natural imagery memory ratings. Thus, guided imagery was similar to natural imagery with respect to sensory details, but similar to perceived memories with respect to reflective details.

16.
Previous work with adults provides evidence that the ‘intention’ used in processing simulated actions is similar to that used in planning and processing overt movements. The present study compared young adults and children on their ability to estimate distance reachability using a NOGO/GO paradigm in conditions of imagery only (IO) and imagery with actual execution (IE). Our initial thoughts were that, because intention is associated with motivation and commitment to act, age-related differences could impact planning. Results indicated no difference in overall accuracy by condition within groups, and, as expected, adults were more accurate. These findings support an increasing body of evidence suggesting that the neurocognitive processes (in this case, intention) driving motor imagery and overt actions are similar and, as evidenced here, functional by age 7.

17.
Odor enrichment enhances rats’ ability to discriminate between chemically similar odorants. We show here that this modulation of olfactory perception is accompanied by increases in the density of local inhibitory interneurons expressing Zif268 in response to olfactory stimuli. These changes depend on the overlap of the olfactory bulb activation patterns induced by the enrichment odorants with those induced by the testing odorants, in a manner similar to the changes in perception. Moreover, we show that enrichment leads to an alteration of the pattern of Zif268 expression that depends on the odors used for the enrichment, indicating a restructuring of odor representation in the olfactory bulb.

18.
The effect of imagery on featural and configural face processing was investigated using blurred and scrambled faces. Blurring reduces featural information; scrambling a face into its constituent parts removes configural information. Twenty-four participants learned ten faces, each paired with the sound of a name. In subsequent matching-to-sample tasks, participants had to decide whether an auditorily presented name belonged to a visually presented scrambled or blurred face in two experimental conditions. In the imagery condition, the name was presented prior to the visual stimulus and participants were required to imagine the corresponding face as clearly and vividly as possible. In the perception condition, name and test face were presented simultaneously, so no facilitation via mental imagery was possible. Analyses of the hit rates showed that in the imagery condition scrambled faces were recognized significantly better than blurred faces, whereas there was no such effect in the perception condition. The results suggest that mental imagery activates featural representations more than configural representations.

19.
We examined the question of whether the intention to complete a simulated motor action is the same as the intention used in processing overt actions. Participants used motor imagery to estimate distance reachability in two conditions: Imagery-Only (IO) and Imagery-Execution (IE). With IO (red target), only a verbal estimate using imagery was given. With IE (green target), participants knew that they would actually reach after giving a verbal estimate and would be judged on accuracy. After measuring actual maximum reach, used for the comparison, imagery targets were randomly presented across peripersonal (within reach) and extrapersonal (beyond reach) space. Results indicated no difference in overall accuracy by condition; however, there was a significant distinction by space: participants were more accurate in peripersonal space. Although more research is needed, these findings support an increasing body of evidence suggesting that the neurocognitive processes (in this case, intention) driving motor imagery and overt actions are similar.

20.
What is the relationship between visual perception and visual mental imagery of emotional faces? We investigated this question using a within-emotion perceptual adaptation paradigm in which adaptation to a strong version of an expression was paired with a test face displaying a weak version of the same emotion category. We predicted that within-emotion adaptation to perception and imagery of expressions would generate similar aftereffects, biasing perception of weak emotional test faces toward a more neutral value. Our findings confirmed this prediction. Adaptation to mental images yielded aftereffects that inhibited emotion recognition of test expressions, as participants were less accurate at recognising these stimuli compared to baseline. While the same inhibitory effect was observed when expressions were visually perceived, the size of the aftereffects was greater for perception than imagery. These findings suggest the existence of expression-selective neural mechanisms that subserve both visual perception and visual mental imagery of emotional faces.
