Similar Articles
20 similar articles found (search time: 0 ms)
1.
In four experiments, we examined whether watching a scene from the perspective of a camera rotating across it allowed participants to recognize or identify the scene's spatial layout. Completing a representational momentum (RM) task, participants viewed a smoothly animated display and then indicated whether test probes were in the same position as they were in the final view of the animation. We found RM anticipations for the camera's movement across the scene, with larger distortions resulting from camera rotations that brought objects into the viewing frame compared with camera rotations that took objects out of the viewing frame. However, the RM task alone did not lead to successful recognition of the scene's map or identification of spatial relations between objects. Watching a scene from a rotating camera's perspective and making position judgments is not sufficient for learning spatial layout.

2.
Experimental data from the visual cognitive sciences suggest that visual analysis starts with a parallel extraction of different visual attributes at different scales/frequencies. Neuropsychological and functional imaging data have suggested that each hemisphere (at the level of the temporo-parietal junction, TPJ) could play a key role in spatial frequency processing: the right TPJ would predominantly be involved in low spatial frequency (LSF) analysis and the left TPJ in high spatial frequency (HSF) analysis. Nevertheless, this functional hypothesis had been inferred from data obtained with the hierarchical form paradigm, without any explicit spatial frequency manipulation per se. The aims of this research were (i) to investigate, in healthy subjects, the hemispheric asymmetry hypothesis with an explicit manipulation of the spatial frequencies of natural scenes and (ii) to examine whether the 'precedence effect' (the relative speed of LSF and HSF processing) depends on the visual field of scene presentation. For this purpose, participants had to identify non-filtered, LSF-filtered, or HSF-filtered target scenes displayed in the left, central, or right visual field. Results showed a hemispheric specialization for spatial frequency processing and different 'precedence effects' depending on the visual field of presentation.
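The LSF/HSF split that such studies rely on can be sketched with a Gaussian filter in the Fourier domain. This is a generic illustration, not the authors' stimulus pipeline; the cutoff value is arbitrary.

```python
import numpy as np

def split_spatial_frequencies(img, cutoff):
    """Split a grayscale image into low- and high-spatial-frequency
    components using a Gaussian low-pass filter in the Fourier domain.
    `cutoff` is the filter's standard deviation in cycles per image."""
    h, w = img.shape
    fy = np.fft.fftfreq(h) * h   # vertical frequencies, cycles per image
    fx = np.fft.fftfreq(w) * w   # horizontal frequencies, cycles per image
    radius = np.sqrt(fy[:, None] ** 2 + fx[None, :] ** 2)
    lowpass = np.exp(-(radius ** 2) / (2 * cutoff ** 2))
    spectrum = np.fft.fft2(img)
    lsf = np.real(np.fft.ifft2(spectrum * lowpass))
    hsf = img - lsf              # residual carries the high frequencies
    return lsf, hsf
```

Because the HSF component is defined as the residual, the two components sum back to the original image exactly.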

3.
Spatial language influences memory for spatial scenes
Does language influence recognition of spatial scenes? In Experiments 1 and 2, participants viewed ambiguous pictures, with or without spatial sentences. In a yes-no recognition task, only the spatial sentences group made more false alarms toward the center of the spatial category than in the other direction; three other comparison groups showed no such tendency. This shift toward the core of the semantic category suggests that spatial language interacted with perceptual information during encoding. In Experiment 3, we varied the materials to test the interactive encoding account against a separate encoding account in which separately stored sentences are accessed during picture recognition. The results support the interactive encoding account, in which spatial language influences the encoding and memory of spatial relations.

4.
Previous studies performed on visual processing of emotional stimuli have revealed preference for a specific type of visual spatial frequencies (high spatial frequency, HSF; low spatial frequency, LSF) according to task demands. The majority of studies used a face and focused on the appraisal of the emotional state of others. The present behavioral study investigates the relative role of spatial frequencies on processing emotional natural scenes during two explicit cognitive appraisal tasks, one emotional, based on the self-emotional experience and one motivational, based on the tendency to action. Our results suggest that HSF information was the most relevant to rapidly identify the self-emotional experience (unpleasant, pleasant, and neutral) while LSF was required to rapidly identify the tendency to action (avoidance, approach, and no action). The tendency to action based on LSF analysis showed a priority for unpleasant stimuli whereas the identification of emotional experience based on HSF analysis showed a priority for pleasant stimuli. The present study confirms the interest of considering both emotional and motivational characteristics of visual stimuli.

5.
The effect of varying information for overall depth in a simulated 3-D scene on the perceived layout of objects in the scene was investigated in two experiments. Subjects were presented with displays simulating textured surfaces receding in depth. Pairs of markers were positioned at equal intervals within the scenes. The subject's task was to judge the depth between the intervals. Overall scene depth was varied by viewing through either a collimating lens or a glass disk. Judged depth for equal depth intervals decreased with increasing distance of the interval from the front of the scene. Judged depth was greater for collimated than for non-collimated viewing. Interestingly, collimated viewing resulted in a uniform rescaling of the perceived depth intervals.

6.
Parafoveal semantic processing of emotional visual scenes
The authors investigated whether emotional pictorial stimuli are especially likely to be processed in parafoveal vision. Pairs of emotional and neutral visual scenes were presented parafoveally (2.1 degrees or 2.5 degrees of visual angle from a central fixation point) for 150-3,000 ms, followed by an immediate recognition test (500-ms delay). Results indicated that (a) the first fixation was more likely to be placed onto the emotional than the neutral scene; (b) recognition sensitivity (A') was generally higher for the emotional than for the neutral scene when the scenes were paired, but there were no differences when presented individually; and (c) the superior sensitivity for emotional scenes survived changes in size, color, and spatial orientation, but not in meaning. The data suggest that semantic analysis of emotional scenes can begin in parafoveal vision in advance of foveal fixation.
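The recognition sensitivity index A' used above is a standard nonparametric measure computed from hit and false-alarm rates; a minimal sketch of the formula commonly attributed to Pollack and Norman (1964), not taken from this study's analysis code:

```python
def a_prime(hit_rate, fa_rate):
    """Nonparametric sensitivity index A'.
    0.5 indicates chance discrimination, 1.0 perfect discrimination."""
    h, f = hit_rate, fa_rate
    if h >= f:
        return 0.5 + ((h - f) * (1 + h - f)) / (4 * h * (1 - f))
    # symmetric form for below-chance performance
    return 0.5 - ((f - h) * (1 + f - h)) / (4 * f * (1 - h))
```

For example, a hit rate of .8 against a false-alarm rate of .2 yields A' = .875.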

7.
Can the visual system extrapolate the spatial layout of a scene to new viewpoints after a single view? In the present study, we examined this question by investigating the priming of spatial layout across depth rotations of the same scene (Sanocki & Epstein, 1997). Participants had to indicate which of two dots superimposed on objects in the target scene appeared closer to them in space. There was as much priming from a prime with a viewpoint that was 10° different from the test image as from a prime that was identical to the target; however, there was no reliable priming from larger differences in viewpoint. These results suggest that a scene's spatial layout can be extrapolated, but only to a limited extent.

8.
From face recognition studies, it is known that instructions can change the processing orientation of stimuli, leading to an impairment of recognition performance. The present study examined instructional influences on the visual recognition of dynamic scenes. A global processing orientation without any instruction was assumed to lead to the highest recognition performance, whereas instructions focusing participants' attention on certain characteristics of the event should lead to a local processing orientation, with impaired visual recognition performance as a direct consequence. The pattern of results supported this hypothesis, and its theoretical implications are discussed.

9.
Many researchers have used concepts such as "context" or "typicality" to explain the influence of knowledge structures on the processing of visual stimuli. In this article we tried to establish the influence of typicality on the processing of objects in environmental scenes, but the results did not support our hypothesis on typicality effects. The results are discussed in the context of the theory of visual processing of environmental scenes, especially interactions between conditions in the experimental procedure. This research was conducted as part of the author's doctoral thesis requirements at the University of Barcelona.

10.
Boundary extension (BE) refers to the tendency to remember a previously perceived scene with a greater spatial expanse. This phenomenon is described as resulting from different sources of information: external (i.e., visual) and internally driven (i.e., amodal, conceptual, and contextual) information. Although the literature has emphasized the role of top-down expectations to account for layout extrapolation, their effect has rarely been tested experimentally. In this research, we attempted to determine how visual context affects BE, as a function of scene exposure duration (long, short). To induce knowledge about visual context, the memorization phase of the camera distance paradigm was preceded by a preexposure phase, during which each of the to-be-memorized scenes was presented in a larger spatial framework. In an initial experiment, we examined the effect of contextual knowledge with presentation duration, allowing for in-depth processing of visual information during encoding (i.e., 15 s). The results indicated that participants exposed to the preexposure showed decreased BE, and displayed no directional memory error in some conditions. Because the effect of context is known to occur at an early stage of scene perception, in a second experiment we sought to determine whether the effect of a preview occurs during the first fixation on a visual scene. The results indicated that BE seems not to be modulated by this factor at very brief presentation durations. These results are discussed in light of current visual scene representation theories.

11.
This functional MRI study examined how people mentally rotate a 3-dimensional object (an alarm clock) that is retrieved from memory and rotated according to a sequence of auditory instructions. We manipulated the geometric properties of the rotation, such as having successive rotation steps around a single axis versus alternating between 2 axes. The latter condition produced much more activation in several areas. Also, the activation in several areas increased with the number of rotation steps. During successive rotations around a single axis, the activation was similar for rotations in the picture plane and rotations in depth. The parietal (but not extrastriate) activation was similar to mental rotation of a visually presented object. The findings indicate that a large-scale cortical network computes different types of spatial information by dynamically drawing on each of its components to a differential, situation-specific degree.

12.
Everyone has the feeling that perception is usually accurate: we apprehend the layout of the world without significant error, and therefore we can interact with it effectively. Several lines of experimentation, however, show that perceived layout is seldom accurate enough to account for the success of visually guided behaviour. A visual world that has more texture on one side, for example, induces a shift of the body's straight ahead to that side and a mislocalization of a small target to the opposite side. Motor interaction with the target remains accurate, however, as measured by a jab with the finger. Slopes of hills are overestimated, even while matching the slopes of the same hills with the forearm is more accurate. The discrepancy shrinks as the estimated range is reduced, until the two estimates are hardly discrepant for a segment of a slope within arm's reach. From an evolutionary standpoint, the function of perception is not to provide an accurate physical layout of the world, but to inform the planning of future behaviour. Illusions, that is, inaccuracies in perception, are perceived as such only when they can be verified by objective means, such as measuring the slope of a hill, the range of a landmark, or the location of a target. Normally such illusions are not checked and are accepted as reality without contradiction.

13.
The semantic relationship between a prime and a target word has been shown to affect the speed at which the target word is processed. This series of experiments investigated how the semantic priming effect is influenced by the nature of the task performed on the prime word. Subjects were asked to perform either a naming or a letter-search task on the prime word and either a lexical-decision or color-naming task on the target word. When the primes were named, response times for the target words were facilitated in the lexical-decision task and inhibited in the color-naming task. However, these effects were eliminated or reduced to an insignificant level when the primes were searched for letters. We suggest that in order to produce the usual priming effect, the primes have to be processed for meaning rather than probed for constituents.

14.
Adventitiously blinded, congenitally blind, and sighted adults made relative distance judgments in a familiar environment under three sets of instructions: neutral with respect to the metric of comparison, Euclidean (straight-line distance between landmarks), and functional (walking distance between landmarks). Analysis of error scores and multidimensional scaling procedures indicated that, although there were no significant differences among groups under functional instructions, all three groups differed from one another under Euclidean instructions. Specifically, the sighted group performed best and the congenitally blind group worst, with the adventitiously blind group in between. The results are discussed in the context of the role of visual experience in spatial representation and the application of these methods for evaluating orientation and mobility training for the blind.
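The multidimensional scaling procedure mentioned above recovers a spatial configuration from a matrix of judged pairwise distances. A generic sketch of classical (Torgerson) MDS, not the authors' analysis code:

```python
import numpy as np

def classical_mds(dist, n_dims=2):
    """Recover an n_dims-dimensional point configuration (up to rotation
    and reflection) from a symmetric pairwise distance matrix."""
    n = dist.shape[0]
    j = np.eye(n) - np.ones((n, n)) / n          # centering matrix
    b = -0.5 * j @ (dist ** 2) @ j               # double-centered Gram matrix
    vals, vecs = np.linalg.eigh(b)               # eigenvalues in ascending order
    order = np.argsort(vals)[::-1][:n_dims]      # keep the largest eigenvalues
    return vecs[:, order] * np.sqrt(np.maximum(vals[order], 0))
```

For an exact Euclidean distance matrix, the recovered configuration reproduces the input distances; for noisy judged distances it gives the best low-dimensional approximation in the least-squares sense of the Gram matrix.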

15.
Three experiments that adopt an interference paradigm to investigate characteristics of a type of movement causing interference with spatial processing are reported. Experiment 1 illustrates the importance of distinguishing between movement and attention to movement when investigating the movement characteristics of spatial processing. The technique of passive movement is used to minimize attention in the subsequent experiments. Experiment 2 confirms earlier experiments showing that passive movement causes interference in spatial processing. However, it extends the previous findings by demonstrating that passive movement is detrimental to spatial processing only when the movement is to a sequence of locations known in advance by the subjects. Experiment 3 demonstrates that the movement interference cannot be interpreted as a general interference effect but that it is selective for spatial processing. The results of these experiments permit a more precise delineation of the disruptive effects of movement in spatial processing and allow an explicit definition of spatial processing to be put forward.

16.
Completing a representational momentum (RM) task, participants viewed one of three camera motions through a scene and indicated whether test probes were in the same position as they were in the final view of the animation. All camera motions led to RM anticipations in the direction of motion, with larger distortions resulting from rotations than a compound motion of a rotation and a translation. A surprise test of spatial layout, using an aerial map, revealed that the correct map was identified only following aerial views during the RM task. When the RM task displayed field views, including repeated views of multiple object groups, participants were unable to identify the overall spatial layout of the scene. These results suggest that the object–location binding thought to support certain change detection and visual search tasks might be viewpoint dependent.

17.
Participants' fingers were guided to 2 locations on a table for 3 s, then back to the start. They reported distances and angles between the locations by (a) replacing 1 or 2 fingers, (b) translating the contacted configuration, or (c) estimating distance or angle alone. Distance error increased across these conditions. Angular error increased when the angular reference axis was rotated before the response. Replacing 1 finger was impaired by a change in posture from exposure to test. The results suggest a kinesthetic representation is used to replace the fingers, but to estimate distance and angle at new locations, a configural representation is computed. This representation is oriented within an extrinsic reference frame and maintains shape more accurately than scale.

18.
Dynamic tasks often require fast adaptation to new viewpoints. It has been shown that automatic spatial updating is triggered by proprioceptive motion cues. Here, we demonstrate that purely visual cues are sufficient to trigger automatic updating. In five experiments, we examined spatial updating in a dynamic attention task in which participants had to track three objects across scene rotations that occurred while the objects were temporarily invisible. The objects moved on a floor plane acting as a reference frame and, when reference frame rotations occurred, unpredictably either were relocated or remained in place. Although participants were aware of this dissociation, they were unable to ignore continuous visual cues about scene rotations (Experiments 1a and 1b). This held even when common rotations of the floor plane and objects were less likely than a dissociated rotation (Experiments 2a and 2b). However, identifying only the spatial reference direction was not sufficient to trigger updating (Experiment 3). We thus conclude that automatic spatial target updating occurs with purely visual information.

19.
We report two experiments designed to investigate the potential use of vibrotactile warning signals to present spatial information to car drivers. Participants performed an attention-demanding rapid serial visual presentation (RSVP) monitoring task. Meanwhile, whenever they felt a vibrotactile stimulus presented on either their front or back, they had to check the front and the rearview mirror for the rapid approach of a car, and brake or accelerate accordingly. We investigated whether speeded responses to potential emergency driving situations could be facilitated by the presentation of spatially-predictive (80% valid; Experiment 1) or spatially-nonpredictive (50% valid; Experiment 2) vibrotactile cues. Participants responded significantly more rapidly following both spatially-predictive and spatially-nonpredictive vibrotactile cues from the same rather than the opposite direction as the critical driving events. These results highlight the potential utility of vibrotactile warning signals in automobile interface design for directing a driver's visual attention to time-critical events or information.

20.
Four experiments investigated the representation and integration in memory of spatial and nonspatial relations. Subjects learned two-dimensional spatial arrays in which critical pairs of object names were semantically related (Experiment 1), semantically and episodically related (Experiment 2), or just episodically related (Experiments 3a and 3b). Episodic relatedness was established in a paired-associate learning task that preceded array learning. After learning an array, subjects participated in two tasks: item recognition, in which the measure of interest was priming; and distance estimation. Priming in item recognition was sensitive to the Euclidean distance between object names and, for neighbouring locations, to nonspatial relations. Errors in distance estimations varied as a function of distance but were unaffected by nonspatial relations. These and other results indicated that nonspatial relations influenced the probability of encoding spatial relations between locations but did not lead to distorted spatial memories.

