Similar articles
Found 20 similar articles (search time: 0 ms)
1.
The visual world exists all around us, yet this information must be gleaned through a succession of eye fixations in which high visual acuity is limited to the small foveal region of each retina. In spite of these physiological constraints, we experience a richly detailed and continuous visual world. Research on transsaccadic memory, perception, picture memory and imagination of scenes will be reviewed. Converging evidence suggests that the representation of visual scenes is much more schematic and abstract than our immediate experience would indicate. The visual system may have evolved to maximize comprehension of discrete views at the expense of representing unnecessary detail, but through the action of attention it allows the viewer to access detail when the need arises. This capability helps to maintain the 'illusion' of seeing a rich and detailed visual world at every glance.

2.
Prime pictures of emotional scenes appeared in parafoveal vision, followed by probe pictures either congruent or incongruent in affective valence. Participants responded whether the probe was pleasant or unpleasant (or whether it portrayed people or animals). Shorter latencies for congruent than for incongruent prime-probe pairs revealed affective priming. This occurred even when visual attention was focused on a concurrent verbal task and when foveal gaze-contingent masking prevented overt attention to the primes, but only if these had been preexposed and appeared in the left visual field. The preexposure and laterality patterns were different for affective priming and semantic category priming. Affective priming was independent of the nature of the task (i.e., affective or category judgment), whereas semantic priming was not. The authors conclude that affective processing occurs without overt attention (although it is dependent on resources available for covert attention), that prior experience of the stimulus is required, and that right-hemisphere dominance is involved.

3.
Three experiments were conducted to investigate the existence of incidentally acquired, long-term, detailed visual memory for objects embedded in previously viewed scenes. Participants performed intentional memorization and incidental visual search learning tasks while viewing photographs of real-world scenes. A visual memory test for previously viewed objects from these scenes then followed. Participants were not aware that they would be tested on the scenes following incidental learning in the visual search task. In two types of memory tests for visually specific object information (token discrimination and mirror-image discrimination), performance following both the memorization and visual search conditions was reliably above chance. These results indicate that recent demonstrations of good visual memory during scene viewing are not due to intentional scene memorization. Instead, long-term visual representations are incidentally generated as a natural product of scene perception.

4.
Parafoveal semantic processing of emotional visual scenes (total citations: 2; self-citations: 0; citations by others: 2)
The authors investigated whether emotional pictorial stimuli are especially likely to be processed in parafoveal vision. Pairs of emotional and neutral visual scenes were presented parafoveally (2.1 degrees or 2.5 degrees of visual angle from a central fixation point) for 150-3,000 ms, followed by an immediate recognition test (500-ms delay). Results indicated that (a) the first fixation was more likely to be placed onto the emotional than the neutral scene; (b) recognition sensitivity (A') was generally higher for the emotional than for the neutral scene when the scenes were paired, but there were no differences when presented individually; and (c) the superior sensitivity for emotional scenes survived changes in size, color, and spatial orientation, but not in meaning. The data suggest that semantic analysis of emotional scenes can begin in parafoveal vision in advance of foveal fixation.
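The recognition sensitivity measure A' used in this abstract is the standard nonparametric index computed from hit and false-alarm rates. As a quick reference, here is a minimal Python sketch of the usual Pollack and Norman (1964) formula; the example rates are illustrative values, not data from the study:

```python
def a_prime(hit_rate: float, fa_rate: float) -> float:
    """Nonparametric sensitivity index A'.

    Ranges from 0.5 (chance) to 1.0 (perfect discrimination).
    Assumes 0 < hit_rate and fa_rate < 1 (the formula is undefined
    at the extremes without a correction).
    """
    h, f = hit_rate, fa_rate
    if h >= f:
        return 0.5 + ((h - f) * (1 + h - f)) / (4 * h * (1 - f))
    # Symmetric form for below-chance performance.
    return 0.5 - ((f - h) * (1 + f - h)) / (4 * f * (1 - h))

# Illustrative values: 90% hits, 20% false alarms.
sensitivity = a_prime(0.9, 0.2)  # ~0.91
```

A' is often preferred over d' in recognition-memory work because it makes no normality assumption about the underlying distributions.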

5.
The current study investigated from how large a region around their current point of gaze viewers can take in information when searching for objects in real-world scenes. Visual span size was estimated using the gaze-contingent moving window paradigm. Experiment 1 featured window radii measuring 1, 3, 4, 4.7, 5.4, and 6.1°. Experiment 2 featured six window radii measuring between 5 and 10°. Each scene occupied a 24.8 × 18.6° field of view. Inside the moving window, the scene was presented in high resolution. Outside the window, the scene image was low-pass filtered to impede the parsing of the scene into constituent objects. Visual span was defined as the window size at which object search times became indistinguishable from search times in the no-window control condition; this occurred with windows measuring 8° and larger. Notably, as long as central vision was fully available (window radii ≥ 5°), the distance traversed by the eyes through the scene to the search target was comparable to baseline performance. However, to move their eyes to the target, viewers made shorter saccades, requiring more fixations to cover the same image space, and thus more time. Moreover, a gaze-data based decomposition of search time revealed disruptions in specific subprocesses of search. In addition, nonlinear mixed models analyses demonstrated reliable individual differences in visual span size and parameters of the search time function.
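The moving-window composite described above (sharp scene inside a gaze-centered window, low-pass filtered outside) can be sketched in a few lines of numpy. This is an illustrative sketch, not the authors' stimulus code: the window radius and frequency cutoff are given in pixels and cycles per pixel rather than degrees of visual angle, and the values are hypothetical:

```python
import numpy as np

def moving_window(scene: np.ndarray, gaze_rc: tuple, radius: float,
                  cutoff: float = 0.05) -> np.ndarray:
    """Gaze-contingent moving-window stimulus (sketch).

    Assumes a 2-D grayscale scene. Inside `radius` pixels of the gaze
    point (row, col), the scene is unchanged; outside, it is low-pass
    filtered in the Fourier domain, as in the paradigm described above,
    to impede parsing the periphery into objects.
    """
    h, w = scene.shape
    # Low-pass the whole image: keep frequencies below `cutoff`
    # (cycles/pixel), zero the rest.
    fy = np.fft.fftfreq(h)[:, None]
    fx = np.fft.fftfreq(w)[None, :]
    keep = (fy**2 + fx**2) <= cutoff**2
    blurred = np.real(np.fft.ifft2(np.fft.fft2(scene) * keep))
    # Composite: sharp inside the circular window, blurred outside.
    yy, xx = np.mgrid[0:h, 0:w]
    inside = (yy - gaze_rc[0])**2 + (xx - gaze_rc[1])**2 <= radius**2
    return np.where(inside, scene, blurred)

rng = np.random.default_rng(0)
scene = rng.random((64, 64))                  # stand-in for a photograph
stim = moving_window(scene, gaze_rc=(32, 32), radius=10)
```

In an actual gaze-contingent experiment this composite would be recomputed on every display refresh from the live eye-tracker sample; a hard-edged ideal low-pass filter rings visibly, so a smoother (e.g. Gaussian) falloff is the more common practical choice.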

6.
Despite the complexity and diversity of natural scenes, humans are very fast and accurate at identifying basic-level scene categories. In this paper we develop a new technique (based on Bubbles, Gosselin & Schyns, 2001a; Schyns, Bonnar, & Gosselin, 2002) to determine some of the information requirements of basic-level scene categorizations. Using 2400 scenes from an established scene database (Oliva & Torralba, 2001), the algorithm randomly samples the Fourier coefficients of the phase spectrum. Sampled Fourier coefficients retain their original phase while the phase of nonsampled coefficients is replaced with that of white noise. Observers categorized the stimuli into 8 basic-level categories. The location of the sampled Fourier coefficients leading to correct categorizations was recorded per trial. Statistical analyses revealed the major scales and orientations of the phase spectrum that observers used to distinguish scene categories.
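The stimulus-generation step described in this abstract — sampled Fourier coefficients keep their original phase, nonsampled ones take the phase of white noise, amplitudes untouched — can be sketched as follows. This is an illustrative numpy sketch under stated assumptions (a real-valued grayscale image, independent per-coefficient sampling), not the authors' code, which sampled coefficients in structured "bubbles" rather than independently:

```python
import numpy as np

def sample_phase(image: np.ndarray, p_keep: float,
                 rng: np.random.Generator) -> np.ndarray:
    """Phase-spectrum sampling in the spirit of the Bubbles variant above.

    Each Fourier coefficient keeps its original phase with probability
    p_keep; otherwise its phase is replaced by the phase of white noise.
    Amplitudes are left untouched.
    """
    F = np.fft.fft2(image)
    # The phase of real white noise is Hermitian-symmetric, so a fully
    # scrambled image stays (numerically) real with its amplitude
    # spectrum exactly preserved.
    noise_phase = np.angle(np.fft.fft2(rng.standard_normal(image.shape)))
    keep = rng.random(image.shape) < p_keep
    phase = np.where(keep, np.angle(F), noise_phase)
    return np.real(np.fft.ifft2(np.abs(F) * np.exp(1j * phase)))

rng = np.random.default_rng(0)
img = rng.random((32, 32))                     # stand-in for a scene
stim = sample_phase(img, p_keep=0.3, rng=rng)  # partially phase-scrambled
```

Note that with a partial, non-symmetric keep mask the spectrum loses exact Hermitian symmetry, so taking the real part slightly perturbs the amplitudes; for a sketch this is usually acceptable.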

7.
The present study examined the saliency of size, movement, and human content variables in visual selective attention. Ss named stimuli present in motion pictures of real world scenes or in animated cartoon controls during a 15-sec. exposure period. Regardless of the type of presentation that they saw, Ss tended to name large and/or moving stimuli more often than small and/or nonmoving stimuli. Also, small human stimuli were named more frequently than small nonhuman stimuli, while there were no differences between the frequencies with which large human and nonhuman stimuli were named. The order in which Ss named stimuli was not related to either the size, movement, or human content variables. Results are discussed in terms of the generalizability of the results of previous studies to conditions simulating the real world.

8.
How does visual long-term memory store representations of different entities (e.g., objects, actions, and scenes) that are present in the same visual event? Are the different entities stored as an integrated representation in memory, or are they stored separately? To address this question, we asked observers to view a large number of events; in each event, an action was performed within a scene. Afterward, the participants were shown pairs of action–scene sets and indicated which of the two they had seen. When the task required recognizing the individual actions and scenes, performance was high (80 %). Conversely, when the task required remembering which actions had occurred within which scenes, performance was significantly lower (59 %). We observed this dissociation between memory for individual entities and memory for entity bindings across multiple testing conditions and presentation durations. These experiments indicate that visual long-term memory stores information about actions and information about scenes separately from one another, even when an action and scene were observed together in the same visual event. These findings also highlight an important limitation of human memory: Situations that require remembering actions and scenes as integrated events (e.g., eyewitness testimony) may be particularly vulnerable to memory errors.

9.
We offer a framework for understanding how color operates to improve visual memory for images of the natural environment, and we present an extensive data set that quantifies the contribution of color in the encoding and recognition phases. Using a continuous recognition task with colored and monochrome gray-scale images of natural scenes at short exposure durations, we found that color enhances recognition memory by conferring an advantage during encoding and by strengthening the encoding-specificity effect. Furthermore, because the pattern of performance was similar at all exposure durations, and because form and color are processed in different areas of cortex, the results imply that color must be bound as an integral part of the representation at the earliest stages of processing.

10.
A "follow-the-dot" method was used to investigate the visual memory systems supporting accumulation of object information in natural scenes. Participants fixated a series of objects in each scene, following a dot cue from object to object. Memory for the visual form of a target object was then tested. Object memory was consistently superior for the two most recently fixated objects, a recency advantage indicating a visual short-term memory component to scene representation. In addition, objects examined earlier were remembered at rates well above chance, with no evidence of further forgetting when 10 objects intervened between target examination and test and only modest forgetting with 402 intervening objects. This robust prerecency performance indicates a visual long-term memory component to scene representation.

11.
It is well known that the distribution of spatial content with respect to spatial scale in real-world scenes falls in accordance with a 1/f^α relation. Equally well known is the tendency for an orientation bias in scene content with the predominant bias in content at the horizontal and vertical orientations. This has led to the suggestion of a relationship in which the mechanisms of the human visual system are optimized for processing such regularities. Here we review current literature concerning the measurement of these regularities (via Fourier analysis) of natural scenes in the context of other work that has psychophysically assessed the extent to which visual perception exploits such regularities of spatial content. In addition, 2 psychophysical experiments are presented that extend this literature and argue for the importance of these regularities in perceiving orientation in real-world visual stimuli.
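The 1/f^α regularity referenced in this abstract is conventionally measured by rotationally averaging an image's Fourier amplitude spectrum and fitting a line in log-log coordinates; the negated slope is α. Here is a minimal numpy sketch of that standard analysis, applied to a synthetic image constructed to have α = 1 (an illustration, not the authors' stimuli or code):

```python
import numpy as np

def spectral_slope(image: np.ndarray) -> float:
    """Estimate alpha in amplitude ~ 1/f^alpha (sketch).

    Assumes a square, even-sized grayscale image. Fits a line to
    log(rotationally averaged amplitude) vs. log(spatial frequency).
    """
    n = image.shape[0]
    amp = np.abs(np.fft.fftshift(np.fft.fft2(image - image.mean())))
    fy, fx = np.mgrid[-n // 2:n // 2, -n // 2:n // 2]
    r = np.sqrt(fy**2 + fx**2).astype(int)
    counts = np.bincount(r.ravel())
    sums = np.bincount(r.ravel(), weights=amp.ravel())
    radial = sums / np.maximum(counts, 1)  # mean amplitude per ring
    f = np.arange(1, n // 2)               # skip DC and corner frequencies
    slope, _ = np.polyfit(np.log(f), np.log(radial[1:n // 2]), 1)
    return -slope

# Synthesize an image with an exactly 1/f amplitude spectrum (alpha = 1)
# and random phases, then recover the slope.
rng = np.random.default_rng(2)
n = 128
fy, fx = np.mgrid[-n // 2:n // 2, -n // 2:n // 2]
f = np.sqrt(fy**2 + fx**2)
f[n // 2, n // 2] = 1.0                    # avoid division by zero at DC
spec = np.fft.ifftshift((1.0 / f) * np.exp(2j * np.pi * rng.random((n, n))))
img = np.real(np.fft.ifft2(spec))
alpha = spectral_slope(img)                # close to 1
```

The integer-ring binning and the symmetrization introduced by taking the real part both bias the fit slightly, so the recovered α is approximate; photographs of natural scenes typically yield α in the neighborhood of 1.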

12.
Object identification in context: the visual processing of natural scenes. (total citations: 1; self-citations: 0; citations by others: 1)
When we view a natural visual scene, we seem able to determine effortlessly the scene's semantic category, constituent objects, and spatial relations. How do we accomplish this visual-cognitive feat? The commonly held explanation is known as the schema hypothesis, according to which a visual scene is rapidly identified as a member of a semantic category, and predictions generated from the scene category are then used to aid subsequent object identification. In this paper I will first outline and offer a critique of the evidence that has been taken to support the schema hypothesis. I will then offer an alternative framework for understanding scene processing, which I will call the local-processing hypothesis. This hypothesis assumes a modular, informationally-encapsulated architecture, and explicitly includes the role of covert visual attention in scene processing.

13.
14.
15.
The authors investigated how human adults encode and remember parts of multielement scenes composed of recursively embedded visual shape combinations. The authors found that shape combinations that are parts of larger configurations are less well remembered than shape combinations of the same kind that are not embedded. Combined with basic mechanisms of statistical learning, this embeddedness constraint enables the development of complex new features for acquiring internal representations efficiently without being computationally intractable. The resulting representations also encode parts and wholes by chunking the visual input into components according to the statistical coherence of their constituents. These results suggest that a bootstrapping approach of constrained statistical learning offers a unified framework for investigating the formation of different internal representations in pattern and scene perception.

16.
Infants respond preferentially to faces and face‐like stimuli from birth, but past research has typically presented faces in isolation or amongst an artificial array of competing objects. In the current study infants aged 3 to 12 months viewed a series of complex visual scenes; half of the scenes contained a person, the other half did not. Infants rapidly detected and oriented to faces in scenes even when they were not visually salient. Although a clear developmental improvement was observed in face detection and interest, all infants displayed sensitivity to the presence of a person in a scene, by displaying eye movements that differed quantifiably across a range of measures when viewing scenes that either did or did not contain a person. We argue that infants' face detection capabilities are ostensibly "better" with naturalistic stimuli, and that the artificial array presentations used in previous studies have underestimated performance.

17.
Single items such as objects, letters or words are often presented in the right or left visual field to examine hemispheric differences in cognitive processing. However, in everyday life, such items appear within a visual context or scene that affects how they are represented and selected for attention. Here we examine processing asymmetries for a visual target within a frame of other elements (scene). We are especially interested in whether the allocation of visual attention affects the asymmetries, and in whether attention-related asymmetries occur in scenes oriented out of alignment with the viewer. In Experiment 1, visual field asymmetries were affected by the validity of a spatial precue in an upright frame. In Experiment 2, the same pattern of asymmetries occurred within frames rotated 90 degrees on the screen. In Experiment 3, additional sources of the spatial asymmetries were explored. We conclude that several left/right processing asymmetries, including some associated with the deployment of spatial attention, can be organized within scenes, in the absence of differential direct access to the two hemispheres.

18.
The initial categorization of complex visual scenes is a very rapid process. Here we find no differences in performance for upright and inverted images, arguing for a neural mechanism that can function without involving high-level, orientation-dependent identification processes. Using an adaptation paradigm, we are able to demonstrate that artificial images composed to mimic the orientation distribution of either natural or man-made scenes systematically shift the judgement of human observers. This suggests a highly efficient feedforward system that makes use of "low-level" image features yet supports the rapid extraction of essential information for the categorization of complex visual scenes.

19.
Can observers determine the gist of a natural scene in a purely feedforward manner, or does this process require deliberation and feedback? Observers can recognise images that are presented for very brief periods of time before being masked. It is unclear whether this recognition process occurs in a purely feedforward manner or whether feedback from higher cortical areas to lower cortical areas is necessary. The current study revealed that the minimum presentation time required to identify or to determine the gist of a natural scene was no different from that required to determine the orientation or colour of an isolated line. Conversely, a visual task that would be expected to necessitate feedback (determining whether an image contained exactly six lines) required a significantly greater minimum presentation time. Assuming that the orientation or colour of an isolated line can be determined in a purely feedforward manner, these results indicate that the identification and the determination of the gist of a natural scene can also be performed in a purely feedforward manner. These results challenge a number of theories of visual recognition that require feedback.

20.
Dynamic tasks often require fast adaptations to new viewpoints. It has been shown that automatic spatial updating is triggered by proprioceptive motion cues. Here, we demonstrate that purely visual cues are sufficient to trigger automatic updating. In five experiments, we examined spatial updating in a dynamic attention task in which participants had to track three objects across scene rotations that occurred while the objects were temporarily invisible. The objects moved on a floor plane acting as a reference frame and unpredictably either were relocated when reference frame rotations occurred or remained in place. Although participants were aware of this dissociation, they were unable to ignore continuous visual cues about scene rotations (Experiments 1a and 1b). This even held when common rotations of floor plane and objects were less likely than a dissociated rotation (Experiments 2a and 2b). However, identifying only the spatial reference direction was not sufficient to trigger updating (Experiment 3). Thus, we conclude that automatic spatial target updating occurs with purely visual information.


Copyright © Beijing Qinyun Technology Development Co., Ltd.  京ICP备09084417号