Similar articles
Found 20 similar articles (search time: 0 ms)
1.
Dynamic tasks often require fast adaptations to new viewpoints. It has been shown that automatic spatial updating is triggered by proprioceptive motion cues. Here, we demonstrate that purely visual cues are sufficient to trigger automatic updating. In five experiments, we examined spatial updating in a dynamic attention task in which participants had to track three objects across scene rotations that occurred while the objects were temporarily invisible. The objects moved on a floor plane acting as a reference frame and unpredictably either were relocated when reference frame rotations occurred or remained in place. Although participants were aware of this dissociation, they were unable to ignore continuous visual cues about scene rotations (Experiments 1a and 1b). This held even when common rotations of the floor plane and objects were less likely than a dissociated rotation (Experiments 2a and 2b). However, identifying only the spatial reference direction was not sufficient to trigger updating (Experiment 3). Thus, we conclude that automatic spatial target updating occurs with purely visual information.

2.
Research on dynamic attention has shown that visual tracking is possible even if the observer’s viewpoint on the scene holding the moving objects changes. In contrast to smooth viewpoint changes, abrupt changes typically impair tracking performance. The lack of continuous information about scene motion, resulting from abrupt changes, seems to be the critical variable. However, hard onsets of objects after abrupt scene motion could explain the impairment as well. We report three experiments employing object invisibility during smooth and abrupt viewpoint changes to examine the influence of scene information on visual tracking, while equalizing hard onsets of moving objects after the viewpoint change. Smooth viewpoint changes provided continuous information about scene motion, which supported the tracking of temporarily invisible objects. However, abrupt and, therefore, discontinuous viewpoint changes strongly impaired tracking performance. Object locations retained with respect to a reference frame can account for the attentional tracking that follows invisible objects through continuous scene motion.

3.
In four experiments, we examined whether watching a scene from the perspective of a camera rotating across it allowed participants to recognize or identify the scene's spatial layout. Completing a representational momentum (RM) task, participants viewed a smoothly animated display and then indicated whether test probes were in the same position as they were in the final view of the animation. We found RM anticipations for the camera's movement across the scene, with larger distortions resulting from camera rotations that brought objects into the viewing frame compared with camera rotations that took objects out of the viewing frame. However, the RM task alone did not lead to successful recognition of the scene's map or identification of spatial relations between objects. Watching a scene from a rotating camera's perspective and making position judgments is not sufficient for learning spatial layout.

4.
Four experiments required participants to keep track of the locations of (i.e., update) 1, 2, 3, 4, 6, 8, 10, or 15 target objects after rotating. Across all conditions, updating was unaffected by set size. Although some traditional set size effects (i.e., a linear increase of latency with memory load) were observed under some conditions, these effects were independent of the updating process. Patterns of data and participant strategies were inconsistent with the common view of spatial updating as an online process. Instead, the authors concluded that participants formed enduring, long-term memory representations of the layouts at learning that were used to reconstruct spatial information about the layouts as needed (i.e., offline updating). These results support M. Amorim, S. Glasauer, K. Corpinot, and A. Berthoz's (1997) 2-system model of spatial updating that includes both online and offline updating.

5.
When one moves, the spatial relationship between oneself and the entire world changes. Spatial updating refers to the cognitive process that computes these relationships as one moves. In two experiments, we tested whether spatial updating occurs automatically for multiple environments simultaneously. Participants turned relative to either a room or the surrounding campus buildings and then pointed to targets in both the environment in which they turned (updated environment) and the other environment (nonupdated environment). The participants automatically updated the room targets when they moved relative to the campus, but they did not update the campus targets when they moved relative to the room. Thus, automatic spatial updating depends on the nature of the environment. Implications for theories of spatial learning and the structure of human spatial representations are discussed.

6.
In 3 experiments, the question of viewpoint dependency in mental representations of dynamic scenes was addressed. Participants viewed film clips of soccer episodes from 1 or 2 viewpoints; they were then required to discriminate between video stills of the original episode and distractors. Recognition performance was measured in terms of accuracy and speed. The degree of viewpoint deviation between the initial presentation and the test stimuli was varied, as was both the point of time presented by the video stills and participants' soccer expertise. Findings suggest that viewers develop a viewpoint-dependent mental representation similar to the spatial characteristics of the original episode presentation, even if the presentation was spatially inhomogeneous.

7.
From face recognition studies, it is known that instructions can change the processing orientation of stimuli, leading to an impairment of recognition performance. The present study examined instructional influences on the visual recognition of dynamic scenes. A global processing orientation without any instruction was assumed to lead to the highest recognition performance, whereas instructions focusing participants' attention on certain characteristics of the event should lead to a local processing orientation, with an impairment of visual recognition performance as a direct consequence. The pattern of results supported this hypothesis; theoretical implications are discussed.

8.
We investigated the role of visual experience in the spatial representation and updating of haptic scenes by comparing recognition performance across sighted, congenitally blind, and late blind participants. We first established that spatial updating of haptic scenes of novel objects occurs in sighted individuals. All participants were required to recognise a previously learned haptic scene of novel objects presented at either the same or a different orientation as at learning, whilst they either remained in the same position or moved to a new position relative to the scene. Scene rotation incurred a cost in recognition performance in all groups. However, overall haptic scene recognition performance was worse in the congenitally blind group. Moreover, unlike the late blind or sighted groups, the congenitally blind group was unable to compensate for the cost of scene rotation with observer motion. Our results suggest that vision plays an important role in representing and updating spatial information encoded through touch, and they have important implications for the role of vision in the development of neuronal areas involved in spatial cognition.

9.
Viewers remember seeing information from outside the boundaries of a scene (boundary extension; BE). To determine whether view boundaries have a special status in scene perception, we sought to determine whether object boundaries would yield the same effect. In Experiment 1, eight “bird's-eye view” photographs containing single object clusters (a smaller object on top of a larger one) were presented. After the presentation, participants reconstructed four scenes by selecting among five different-sized cutouts of each object. BE occurred between the view boundaries and the object cluster, but not between the smaller object and the larger object's boundaries. There was no consistent effect of the larger object's boundaries. Experiment 2 replicated these results using a drawing task. BE does not occur whenever a border surrounds an object; it occurs when the border signifies the edge of the view. We propose that BE reflects anticipatory representation of scene structure that supports scene comprehension and view integration.

10.
Humans are remarkably efficient in detecting highly familiar object categories in natural scenes, with evidence suggesting that such object detection can be performed in the (near) absence of attention. Here we systematically explored the influences of both spatial attention and category-based attention on the accuracy of object detection in natural scenes. Manipulating both types of attention additionally allowed for addressing how these factors interact: whether the requirement for spatial attention depends on the extent to which observers are prepared to detect a specific object category—that is, on category-based attention. The results showed that the detection of targets from one category (animals or vehicles) was better than the detection of targets from two categories (animals and vehicles), demonstrating the beneficial effect of category-based attention. This effect did not depend on the semantic congruency of the target object and the background scene, indicating that observers attended to visual features diagnostic of the foreground target objects from the cued category. Importantly, in three experiments the detection of objects in scenes presented in the periphery was significantly impaired when observers simultaneously performed an attentionally demanding task at fixation, showing that spatial attention affects natural scene perception. In all experiments, the effects of category-based attention and spatial attention on object detection performance were additive rather than interactive. Finally, neither spatial nor category-based attention influenced metacognitive ability for object detection performance. These findings demonstrate that efficient object detection in natural scenes is independently facilitated by spatial and category-based attention.

11.
12.
Experimental data from the visual cognitive sciences suggest that visual analysis starts with a parallel extraction of different visual attributes at different scales/frequencies. Neuropsychological and functional imaging data have suggested that each hemisphere (at the level of the temporo-parietal junction, TPJ) could play a key role in spatial frequency processing: the right TPJ should predominantly be involved in low spatial frequency (LSF) analysis and the left TPJ in high spatial frequency (HSF) analysis. Nevertheless, this functional hypothesis had been inferred from data obtained with the hierarchical form paradigm, without any explicit spatial frequency manipulation per se. The aims of this research were (i) to investigate, in healthy subjects, the hemispheric asymmetry hypothesis with an explicit manipulation of the spatial frequencies of natural scenes, and (ii) to examine whether the 'precedence effect' (the relative rapidity of LSF and HSF processing) depends on the visual field of scene presentation. For this purpose, participants had to identify either non-filtered or LSF- and HSF-filtered target scenes displayed in the left, central, or right visual field. Results showed a hemispheric specialization for spatial frequency processing and different 'precedence effects' depending on the visual field of presentation.
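The abstract does not describe how the filtered stimuli were produced; in studies of this kind, LSF and HSF versions of a scene are typically generated by low-pass and high-pass spatial filtering. A minimal sketch, assuming a Gaussian low-pass filter with a hypothetical cutoff parameter `sigma` (the actual filter and cutoff used in the study may differ):

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def split_spatial_frequencies(image, sigma=4.0):
    """Split a grayscale image into low- and high-spatial-frequency
    components using a Gaussian filter (cutoff set by sigma, in pixels)."""
    low = gaussian_filter(image, sigma=sigma)   # LSF: blurred coarse structure
    high = image - low                          # HSF: residual fine detail
    return low, high

# Toy example: a random "scene"; LSF + HSF reconstructs the original exactly.
rng = np.random.default_rng(0)
scene = rng.random((64, 64))
lsf, hsf = split_spatial_frequencies(scene)
```

Because the HSF component is defined as the residual, the two components sum back to the original image, which makes this decomposition convenient for building frequency-specific stimuli.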

13.
Two experiments tested the hypothesis that indirect false-belief tests allow participants to track a protagonist’s perspective uninterruptedly, whereas direct false-belief tests disrupt the process of perspective tracking in various ways. For this purpose, adults’ performance was compared on indirect and direct false-belief tests by means of continuous eye-tracking. Experiment 1 confirmed that the false-belief question used in direct tests disrupts perspective tracking relative to what is observed in an indirect test. Experiment 2 confirmed that perspective tracking is a continuous process that can be easily disrupted in adults by a subtle visual manipulation in both indirect and direct tests. These results call for a closer analysis of the demands of the false-belief tasks that have been used in developmental research.

14.
The current study investigated the reference frame used in spatial updating when idiothetic cues to self-motion were minimized (desktop virtual reality). In Experiment 1, participants learned a layout of eight objects from a single perspective (learning heading) in a virtual environment. After learning, they were placed in the same virtual environment and used a keyboard to navigate to two of the learned objects (visible) before pointing to a third object (invisible). We manipulated participants’ starting orientation (initial heading) and final orientation (final heading) before pointing, to examine the reference frame used in this task. We found that participants used the initial heading and the learning heading to establish reference directions. In Experiment 2, the procedure was almost the same as in Experiment 1 except that participants pointed to objects relative to an imagined heading that differed from their final heading in the virtual environment. In this case, pointing performance was only affected by alignment with the learning heading. We concluded that the initial heading played an important role in spatial updating without idiothetic cues, but the representation established at this heading was transient and affected by the interruption of spatial updating; the learning heading, on the other hand, corresponded to an enduring representation which was used consistently.

15.
Féry YA, Magnac R, Israël I. Cognition, 2004, 91(2): B1-B10.
In conditions of slow passive transport without vision, even tenuous inertial signals from the semicircular canals and the haptic-kinaesthetic system should provide information about changes relative to the environment, provided that it is possible to command the direction of the body's movements voluntarily. Without such control, spatial updating should be impaired because incoming signals cannot be compared to the expected sensory consequences provided by voluntary command. Participants were seated in a rotative robot (Robuter) and learnt the positions of five objects in their surroundings. They were then blindfolded and assigned either to the active group (n=7) or to the passive group (n=7). Members of the active group used a joystick to control the direction of rotation of the robot. The acceleration (25°/s²) and plateau velocity (9°/s) were kept constant. The participants of the passive group experienced the same stimuli passively. After the rotations, the participants had to point to the objects whilst blindfolded. Participants in the active group significantly outperformed the participants in the passive group. Thus, even tenuous inertial cues are useful for spatial updating in the absence of vision, provided that such signals are integrated as feedback associated with intended motor command.

16.
We report two experiments designed to investigate the potential use of vibrotactile warning signals to present spatial information to car drivers. Participants performed an attention-demanding rapid serial visual presentation (RSVP) monitoring task. Meanwhile, whenever they felt a vibrotactile stimulus presented on either their front or back, they had to check the front and the rearview mirror for the rapid approach of a car, and brake or accelerate accordingly. We investigated whether speeded responses to potential emergency driving situations could be facilitated by the presentation of spatially-predictive (80% valid; Experiment 1) or spatially-nonpredictive (50% valid; Experiment 2) vibrotactile cues. Participants responded significantly more rapidly following both spatially-predictive and spatially-nonpredictive vibrotactile cues from the same rather than the opposite direction as the critical driving events. These results highlight the potential utility of vibrotactile warning signals in automobile interface design for directing a driver’s visual attention to time-critical events or information.

17.
This research examined whether visual and haptic map learning yield functionally equivalent spatial images in working memory, as evidenced by similar encoding bias and updating performance. In 3 experiments, participants learned 4-point routes either by seeing or feeling the maps. At test, blindfolded participants made spatial judgments about the maps from imagined perspectives that were either aligned or misaligned with the maps as represented in working memory. Results from Experiments 1 and 2 revealed a highly similar pattern of latencies and errors between visual and haptic conditions. These findings extend the well-known alignment biases for visual map learning to haptic map learning, provide further evidence of haptic updating, and most important, show that learning from the 2 modalities yields very similar performance across all conditions. Experiment 3 found the same encoding biases and updating performance with blind individuals, demonstrating that functional equivalence cannot be due to visual recoding and is consistent with an amodal hypothesis of spatial images.

18.
Laurent Itti. Visual Cognition, 2013, 21(6): 1093-1123.
We investigated the contribution of low-level saliency to human eye movements in complex dynamic scenes. Eye movements were recorded while naive observers viewed a heterogeneous collection of 50 video clips (46,489 frames; 4-6 subjects per clip), yielding 11,916 saccades of amplitude ≥2°. A model of bottom-up visual attention computed instantaneous saliency at the instant each saccade started and at its future endpoint location. Median model-predicted saliency was 45% of the maximum saliency, a significant factor of 2.03 greater than expected by chance. Motion and temporal change were stronger predictors of human saccades than colour, intensity, or orientation features, with the best predictor being the sum of all features. There was no significant correlation between model-predicted saliency and duration of fixation. A majority of saccades were directed to a minority of locations reliably marked as salient by the model, suggesting that bottom-up saliency may provide a set of candidate saccade target locations, with the final choice of which location to fixate being more strongly determined top-down.
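The model referenced here combines center-surround contrasts across several feature channels (colour, intensity, orientation, motion) into a saliency map. A minimal single-channel sketch of the center-surround idea only, not the authors' actual implementation; the two scale parameters are hypothetical:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def center_surround_saliency(image, c_sigma=1.0, s_sigma=8.0):
    """Minimal center-surround saliency for one feature channel:
    absolute difference between a fine-scale (center) and a coarse-scale
    (surround) Gaussian blur, normalized to [0, 1]."""
    center = gaussian_filter(image, c_sigma)
    surround = gaussian_filter(image, s_sigma)
    sal = np.abs(center - surround)
    peak = sal.max()
    return sal / peak if peak > 0 else sal

# A small bright patch on a dark background should dominate the map.
img = np.zeros((64, 64))
img[30:34, 30:34] = 1.0
sal = center_surround_saliency(img)
peak_loc = np.unravel_index(np.argmax(sal), sal.shape)
```

In the full model, maps like this are computed at multiple scales per channel and summed; the abstract's finding that "the sum of all features" predicts saccades best corresponds to that combined map.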

19.
Three studies examined effects of different response measures on spatial updating during self-rotation. In Experiment 1, participants located objects in an array with a pointer after physical self-rotation, imagined self-rotation, and a rotation condition in which they ignored superfluous sensorimotor signals. In line with previous research, updating performance was found to be superior in the physical self-rotation condition compared with the other 2. In Experiment 2, participants performed in identical rotation movement conditions but located objects by verbal labeling rather than pointing. Within the verbal modality, an advantage for updating during imagined self-rotation was found. In Experiment 3, participants performed physical and imagined self-rotations only and used a pointing response offset from their physical reference frames. Performance was again superior during imagined self-rotations. The results suggest that it is not language processing per se that improves updating performance but rather a general reduction of the conflict between physical and projected egocentric reference frames.

20.
Previous research suggests that understanding the gist of a scene relies on global structural cues that enable rapid scene categorization. This study used a repetition blindness (RB) paradigm to interrogate the nature of the scene representations used in such rapid categorization. When stimuli are repeated in a rapid serial visual presentation (RSVP) sequence (~10 items/sec), the second occurrence of the repeated item frequently goes unnoticed, a phenomenon that is attributed to a failure to consolidate two conscious episodes (tokens) for a repeatedly activated type. We tested whether RB occurs for different exemplars of the same scene category, which share conceptual and broad structural properties, as well as for identical and mirror-reflected repetitions of the same scene, which additionally share the same local visual details. Across 2 experiments, identical and mirror-image scenes consistently produced a repetition facilitation, rather than RB. There was no convincing evidence of either RB or repetition facilitation for different members of a scene category. These findings indicate that in the first 100–150 ms of processing, scenes are represented in terms of local visual features rather than more abstract category-general features, and that, unlike other kinds of stimuli (words or objects), scenes are not susceptible to token individuation failure.
