Similar Documents
20 similar documents found.
1.
The crossmodal congruency effect (CCE) is augmented when viewing an image of a hand compared to an object. It is unclear whether this contextual effect extends to a non-spatial CCE. Here, participants discriminated the number of tactile vibrations delivered to the hand whilst ignoring visual distractors on images of their own or another's hand or an object. The CCE was not modulated by stimulus context. Viewing one's hand from a third-person perspective increased errors relative to viewing an object (Experiment 1). Errors were reduced when viewing hands, from first- or third-person perspectives, with additional identity markers (Experiments 2 and 3). Our results suggest no effect of context on the non-spatial CCE and that differences in task performance between hand and object images depend on their visual properties. These findings are discussed in light of the relationship between body representation and perception of body-centred stimuli in the temporal domain.

2.
The speed and accuracy of perceptual recognition of a briefly presented picture of an object is facilitated by its prior presentation. Picture priming tasks were used to assess whether the facilitation is a function of the repetition of: (a) the object's image features (viz., vertices and edges), (b) the object model (e.g., that it is a grand piano), or (c) a representation intermediate between (a) and (b) consisting of convex or singly concave components of the object, roughly corresponding to the object's parts. Subjects viewed pictures with half their contour removed by deleting either (a) every other image feature from each part, or (b) half the components. On a second (primed) block of trials, subjects saw: (a) the identical image that they viewed on the first block, (b) the complement which had the missing contours, or (c) a same name-different exemplar of the object class (e.g., a grand piano when an upright piano had been shown on the first block). With deletion of features, speed and accuracy of naming identical and complementary images were equivalent, indicating that none of the priming could be attributed to the features actually present in the image. Performance with both types of image enjoyed an advantage over that with the different exemplars, establishing that the priming was visual rather than verbal or conceptual. With deletion of the components, performance with identical images was much better than that with their complements. The latter were equivalent to the different exemplars, indicating that all the visual priming of an image of an object is through the activation of a representation of its components in specified relations. 
In terms of a recent neural net implementation of object recognition (Hummel & Biederman, in press), the results suggest that the locus of object priming may lie in changes in the weight matrix for a geon assembly layer, where units have self-organized to represent combinations of convex or singly concave components (or geons) and their attributes (e.g., aspect ratio, orientation, and relations with other geons such as TOP-OF). The results of these experiments provide evidence for the psychological reality of intermediate representations in real-time visual object recognition.

3.
Do the mental images of 3-dimensional objects recreate the depth characteristics of the original objects? This investigation of the characteristics of mental images utilized a novel boundary-detection task that required participants to relate a pair of crosses to the boundary of an image mentally projected onto a computer screen. 48 female participants with body attitudes within the expected normal range were asked to image their own body and a familiar object from the front and the side. When the visual mental image was derived purely from long-term memory, accuracy was better than chance for the front (64%) and side (63%) of the body and also for the front (55%) and side (68%) of the familiar nonbody object. This suggests that mental images containing depth and spatial information may be generated from information held in long-term memory. Pictorial exposure to views of the front or side of the objects was used to investigate the representations from which this 3-dimensional shape and size information is derived. The results are discussed in terms of three possible representational formats and suggest that a front-view 2 1/2-dimensional representation mediates the transfer of information from long-term memory when depth information about the body is required.

4.
Spatial representation is essential for orientation and mobility in blind people. Previous research has focused on how the absence of early visual experience affects blind people's spatial representation, paying less attention to process factors in its construction, namely the wayfinding strategies used. Using a field experiment, this study examined the strategies blind participants adopted to construct spatial representations of an unfamiliar environment and the effects of those strategies. The results showed that the absence of visual experience impaired blind participants' ability to construct spatial representations of unfamiliar environments; however, effective strategies could compensate for this loss, and participants who used spatial-relation strategies constructed more accurate spatial representations and navigated more efficiently.

5.
PERCEIVED CONTINUITY OF OCCLUDED VISUAL OBJECTS
Abstract— The human visual system does not rigidly preserve the properties of the retinal image as neural signals are transmitted to higher areas of the brain. Instead, it generates a representation that captures stable surface properties despite a retinal image that is often fragmented in space and time because of occlusion caused by object and observer motion. The recovery of this coherent representation depends at least in part on input from an abstract representation of three-dimensional (3-D) surface layout. In the two experiments reported, a stereoscopic apparent motion display was used to investigate the perceived continuity of a briefly interrupted visual object. When a surface appeared in front of the object's location during the interruption, the object was more likely to be perceived as persisting through the interruption (behind an occluder) than when the surface appeared behind the object's location under otherwise identical stimulus conditions. The results reveal the influence of 3-D surface-based representations even in very simple visual tasks.

6.
In 3 experiments the author investigated the relationship between the online visual representation of natural scenes and long-term visual memory. In a change detection task, a target object either changed or remained the same from an initial image of a natural scene to a test image. Two types of changes were possible: rotation in depth, or replacement by another object from the same basic-level category. Change detection during online scene viewing was compared with change detection after a delay of 1 trial (Experiments 2A and 2B), a delay until the end of the study session (Experiment 1), or a delay of 24 hr (Experiment 3). There was little or no decline in change detection performance from online viewing to a delay of 1 trial or a delay until the end of the session, and change detection remained well above chance after 24 hr. These results demonstrate that long-term memory for visual detail in a scene is robust.

7.
We tested whether dogs have a cross-modal representation of human individuals. We presented domestic dogs with a photo of either the owner's or a stranger's face on an LCD monitor after playing back the voice of one of those persons. The voice and face matched in half of the trials (Congruent condition) and mismatched in the other half (Incongruent condition). If our subjects activate visual images from the voice, their expectation would be contradicted in the Incongruent condition, resulting in longer looking times in the Incongruent condition than in the Congruent condition. Our subject dogs looked longer at the visual stimulus in the Incongruent condition than in the Congruent condition. This suggests that dogs actively generate an internal representation of the owner's face when they hear the owner calling them. This is the first demonstration that nonhuman animals do not merely associate auditory and visual stimuli but also actively generate a visual image from auditory information. Furthermore, our subjects also looked longer at the visual stimulus in the Incongruent condition, in which the owner's face followed an unfamiliar person's voice, than in the Congruent condition, in which the owner's face followed the owner's voice. Generating a particular visual image in response to an unfamiliar voice should be difficult, and any image expected from such a voice ought to be more obscure or less well defined than that of the owner. Nevertheless, our subjects looked longer at the owner's face in the Incongruent condition than in the Congruent condition. This may indicate that dogs predicted that the person should not be the owner when they heard the unfamiliar person's voice.

8.
Recent research has found that visual object memory can be stored as part of a larger scene representation rather than independently of scene context. The present study examined how spatial and nonspatial contextual information modulate visual object memory. Two experiments tested participants' visual memory by using a change detection task in which a target object's orientation was either the same as it appeared during initial viewing or changed. In addition, we examined the effect of spatial and nonspatial contextual manipulations on change detection performance. The results revealed that visual object representations can be maintained reliably after viewing arrays of objects. Moreover, change detection performance was significantly higher when either spatial or nonspatial contextual information remained the same in the test image. We concluded that while processing complex visual stimuli such as object arrays, visual object memory can be stored as part of a comprehensive scene representation, and both spatial and nonspatial contextual changes modulate visual memory retrieval and comparison.

9.
It is often intuitively assumed that disconnected image fragments result in a representation of separate objects. When objects are partly occluded, disconnected image fragments can still result in a representation of a single object, based on visual completion. In a simultaneous matching task, displays showing one object, partly occluded objects, or two objects were compared with each other. When only a translation was required to match pairs of displays, one-object displays were matched faster than both occluded-object and two-object displays, which did not differ significantly from each other. When mental rotation and translation were required, the one-object displays were again matched the fastest. In addition, an advantage for occluded-object displays compared with two-object displays was found. We conclude that when the generation of a mental representation is likely, object-based connectedness determines object matching. Mental rotation then seems to depend on the number of objects rather than on the number of image fragments.

10.
We argue that task requirements can be the determinant in generating different results in studies on visual object recognition. We investigated priming for novel visual objects in three implicit memory tasks. A study-test design was employed in which participants first viewed line drawings of unfamiliar objects and later made different decisions about structural aspects of the objects. Priming for both symmetric and asymmetric possible objects was observed in a task requiring a judgment of structural possibility. However, when the task was changed to one requiring a judgment of structural symmetry, only symmetric possible objects showed priming. Finally, in a matching task in which participants made a same-different judgment, only symmetric possible objects exhibited priming. These results suggest that an understanding of object representation will be most fruitful if it is based on careful analyses of both the task demands and their interaction(s) with encoding and retrieval processes.

11.
Since 1960, more than 37 long-term experiments have been performed in Japan concerning up-down and left-right reversal and inversion, yet these contributions to a field of study pioneered by Stratton 100 years ago remain unfamiliar outside Japan, as most articles have not appeared in English. Japanese researchers have focused on several basic elements of perceptual systems, such as intersensory relationships, sensory-motor coordination, spatial frames of reference, and position constancy. Although subjects have not adapted perfectly to a transposed world in the course of these two- to three-week experiments, important data, particularly concerning left-right reversed vision, have been generated. I myself have conducted all three types of visual transposition experiments a number of times and propose a model comprising three subsystems: top-down, bottom-up, and sensory-motor systems. Central to my theory is the notion that the essential change occurs in the subject's own body image, and that the spatial representation of the body image is visual in nature, not non-visual proprioception (as deduced from Harris' (1965) hypothesis).

12.
The human visual system possesses a remarkable ability to reconstruct the shape of an object that is partly occluded by an interposed surface. Behavioral results suggest that, under some circumstances, this perceptual process (termed amodal completion) progresses from an initial representation of local image features to a completed representation of a shape that may include features that are not explicitly present in the retinal image. Recent functional magnetic resonance imaging (fMRI) studies have shown that the completed surface is represented in early visual cortical areas. We used fMRI adaptation, combined with brief, masked exposures, to track the amodal completion process as it unfolds in early visual cortical regions. We report evidence for an evolution of the neural representation from the image-based feature representation to the completed representation. Our method offers the possibility of measuring changes in cortical activity using fMRI over a time scale of a few hundred milliseconds.

13.
One important task for the visual system is to group image elements that belong to an object and to segregate them from other objects and the background. We here present an incremental grouping theory (IGT) that addresses the role of object-based attention in perceptual grouping at a psychological level and, at the same time, outlines the mechanisms for grouping at the neurophysiological level. The IGT proposes that there are two processes for perceptual grouping. The first process is base grouping and relies on neurons that are tuned to feature conjunctions. Base grouping is fast and occurs in parallel across the visual scene, but not all possible feature conjunctions can be coded as base groupings. If there are no neurons tuned to the relevant feature conjunctions, a second process called incremental grouping comes into play. Incremental grouping is a time-consuming and capacity-limited process that requires the gradual spread of enhanced neuronal activity across the representation of an object in the visual cortex. The spread of enhanced neuronal activity corresponds to the labeling of image elements with object-based attention.

14.
Two experiments used visual-, verbal-, and haptic-interference tasks during encoding (Experiment 1) and retrieval (Experiment 2) to examine mental representation of familiar and unfamiliar objects in visual/haptic crossmodal memory. Three competing theories are discussed, which variously suggest that these representations are: (a) visual; (b) dual-code—visual for unfamiliar objects but visual and verbal for familiar objects; or (c) amodal. The results suggest that representations of unfamiliar objects are primarily visual but that crossmodal memory for familiar objects may rely on a network of different representations. The pattern of verbal-interference effects suggests that verbal strategies facilitate encoding of unfamiliar objects regardless of modality, but only haptic recognition regardless of familiarity. The results raise further research questions about all three theoretical approaches.

16.
Six Ss discriminated seven-letter nonsense words from comparison words. Target and comparison words differed on randomly selected trials by one randomly chosen letter. Target words were displayed for 50, 55, 60, 70, 90, or 200 msec, and were preceded and followed by a masking field. In one condition the Ss were familiarized with the comparison words, and in another they were not. Discrimination was better for familiar words at all display durations. There was an interaction between familiarity and the letter position effect. For unfamiliar words the typical bow-shaped position effect occurred. For familiar words no marked position effect occurred. An identification condition using unfamiliar words found no interaction between letter position and display duration. The results are interpreted as evidence that familiarity removes a letter position effect that depends upon serial transfer from a nonmaskable mediating visual representation that is constructed from a maskable representation by nonserial processes.

17.
The relative efficacy with which appearance of a new object orients visual attention was investigated. At issue is whether the visual system treats onset as being of particular importance or only 1 of a number of stimulus events equally likely to summon attention. Using the 1-shot change detection paradigm, the authors compared detectability of new objects with changes occurring at already present objects--luminance change, color change, and object offset. Results showed that appearance of a new object was less susceptible to change blindness than changes that old objects could undergo. The authors also investigated whether it is onset per se that leads to enhanced detectability or onset of an object representation. Results showed that the onset advantage was eliminated for onsets that did not correspond with the appearance of a new object. These findings suggest that the visual system is particularly sensitive to the onset of a new object.

18.
Nine experiments examined the means by which visual memory for individual objects is structured into a larger representation of a scene. Participants viewed images of natural scenes or object arrays in a change detection task requiring memory for the visual form of a single target object. In the test image, 2 properties of the stimulus were independently manipulated: the position of the target object and the spatial properties of the larger scene or array context. Memory performance was higher when the target object position remained the same from study to test. This same-position advantage was reduced or eliminated following contextual changes that disrupted the relative spatial relationships among contextual objects (context deletion, scrambling, and binding change) but was preserved following contextual change that did not disrupt relative spatial relationships (translation). Thus, episodic scene representations are formed through the binding of objects to scene locations, and object position is defined relative to a larger spatial representation coding the relative locations of contextual objects.

19.
Over the past decade, a growing body of research has shown that recognition of manipulable objects depends not only on an object's visual information but also on information and knowledge about the actions used to manipulate it, that is, its manipulability. Behavioral, neuroimaging, and brain-damaged patient studies all indicate that an object's manipulability is activated during object recognition and plays an important role in it. Research on manipulability not only offers a reinterpretation of the dissociations between living and nonliving things and between nouns and verbs, but also has important theoretical implications for the study of object representation and of the neural pathways of visual object recognition.

20.
Viewpoint-dependent recognition of familiar faces
Troje NF, Kersten D. Perception, 1999, 28(4): 483-487
The question whether object representations in the human brain are object-centered or viewer-centered has motivated a variety of experiments with divergent results. A key issue concerns the visual recognition of objects seen from novel views. If recognition performance depends on whether a particular view has been seen before, it can be interpreted as evidence for a viewer-centered representation. Earlier experiments used unfamiliar objects to provide the experimenter with complete control over the observer's previous experience with the object. In this study, we tested whether human recognition shows viewpoint dependence for the highly familiar faces of well-known colleagues and for the observer's own face. We found that observers are poorer at recognizing their own profile, whereas there is no difference in response time between frontal and profile views of other faces. This result shows that extensive experience and familiarity with one's own face is not sufficient to produce viewpoint invariance. Our result provides strong evidence for viewer-centered representations in human visual recognition even for highly familiar objects.


Copyright©北京勤云科技发展有限公司  京ICP备09084417号