Similar Documents
1.
Three experiments were conducted to investigate the existence of incidentally acquired, long-term, detailed visual memory for objects embedded in previously viewed scenes. Participants performed intentional memorization and incidental visual search learning tasks while viewing photographs of real-world scenes. A visual memory test for previously viewed objects from these scenes then followed. Participants were not aware that they would be tested on the scenes following incidental learning in the visual search task. In two types of memory tests for visually specific object information (token discrimination and mirror-image discrimination), performance following both the memorization and visual search conditions was reliably above chance. These results indicate that recent demonstrations of good visual memory during scene viewing are not due to intentional scene memorization. Instead, long-term visual representations are incidentally generated as a natural product of scene perception.

2.
Our research has previously shown that scene categories can be predicted from observers’ eye movements when they view photographs of real-world scenes. The time course of category predictions reveals the differential influences of bottom-up and top-down information. Here we used these known differences to determine to what extent image features at different representational levels contribute toward guiding gaze in a category-specific manner. Participants viewed grayscale photographs and line drawings of real-world scenes while their gaze was tracked. Scene categories could be predicted from fixation density at all times over a 2-s time course in both photographs and line drawings. We replicated the shape of the prediction curve found previously, with an initial steep decrease in prediction accuracy from 300 to 500 ms, representing the contribution of bottom-up information, followed by a steady increase, representing top-down knowledge of category-specific information. We then computed the low-level features (luminance contrasts and orientation statistics), mid-level features (local symmetry and contour junctions), and Deep Gaze II output from the images, and used that information as a reference in our category predictions in order to assess their respective contributions to category-specific guidance of gaze. We observed that, as expected, low-level salience contributes mostly to the initial bottom-up peak of gaze guidance. Conversely, the mid-level features that describe scene structure (i.e., local symmetry and junctions) split their contributions between bottom-up and top-down attentional guidance, with symmetry contributing to both bottom-up and top-down guidance, while junctions play a more prominent role in the top-down guidance of gaze.
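The core prediction step described above, classifying a scene's category from where observers fixate, can be sketched as follows. This is an illustrative toy version, not the authors' pipeline (which tracked prediction accuracy over a 2-s time course and used Deep Gaze II and other feature maps as references); the function names, grid size, and correlation-based template matching are all assumptions made for the sketch.

```python
import numpy as np

def density_map(fixations, shape=(32, 32), sigma=2.0):
    """Smooth (y, x) fixation points into a normalized fixation-density map."""
    yy, xx = np.mgrid[0:shape[0], 0:shape[1]]
    grid = np.zeros(shape)
    for y, x in fixations:
        grid += np.exp(-((yy - y) ** 2 + (xx - x) ** 2) / (2 * sigma ** 2))
    return grid / grid.sum()

def predict_category(test_map, templates):
    """Pick the category whose average density map correlates best with test_map."""
    def corr(a, b):
        return np.corrcoef(a.ravel(), b.ravel())[0, 1]
    return max(templates, key=lambda c: corr(test_map, templates[c]))
```

In a real analysis, the templates would be category-average fixation maps computed with the tested scene held out, and accuracy would be evaluated separately for each time bin of the viewing period.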

3.
Emotional faces and scenes carry a wealth of overlapping and distinct perceptual information. Despite widespread use in the investigation of emotional perception, expressive face and evocative scene stimuli are rarely assessed in the same experiment. Here, we evaluated self-reports of arousal and pleasantness, as well as early and late event-related potentials (e.g., N170, early posterior negativity [EPN], late positive potential [LPP]) as subjects viewed neutral and emotional faces and scenes, including contents representing anger, fear, and joy. Results demonstrate that emotional scenes were rated as more evocative than emotional faces, as only scenes produced elevated self-reports of arousal. In addition, viewing scenes resulted in more extreme ratings of pleasantness (and unpleasantness) than did faces. EEG results indicate that both expressive faces and emotional scenes evoke enhanced negativity in the N170 component, while the EPN and LPP components show significantly enhanced modulation only by scene, relative to face stimuli. These data suggest that viewing emotional scenes results in a more pronounced emotional experience that is associated with reliable modulation of visual event-related potentials that are implicated in emotional circuits in the brain.

4.
Research using change detection paradigms has demonstrated that only limited scene information remains available for conscious report following initial inspection of a scene. Previous researchers have found higher change identification rates for deletions of parts of objects in line drawings of scenes than additions. Other researchers, however, have found an asymmetry in the opposite direction for addition/deletion of whole objects in line drawings of scenes. Experiment 1 investigated subjects' accuracy in detecting and identifying changes made to successive views of high quality photographs of naturalistic scenes that involved the addition and deletion of objects, colour changes to objects, and changes to the spatial location of objects. Identification accuracy for deletions from scenes was highest, with lower identification rates for object additions and colour changes, and the lowest rates for identification of location changes. Data further suggested that change identification rates for the presence/absence of objects were a function of the number of identical items present in the scene. Experiment 2 examined this possibility further, and also investigated whether the higher identification rates for deletions found in Experiment 1 were found for changes involving whole objects or parts of objects. Results showed higher identification rates for deletions, but only where a unique object was deleted from a scene. The presence of an identical object in the scene abolished this deletion identification advantage. Results further showed that the deletion/addition asymmetry occurs both when the objects are parts of a larger object and when they are entire objects in the scene.

5.
The change blindness paradigm, in which participants often fail to notice substantial changes in a scene, is a popular tool for studying scene perception, visual memory, and the link between awareness and attention. Some of the most striking and popular examples of change blindness have been demonstrated with digital photographs of natural scenes; in most studies, however, much simpler displays, such as abstract stimuli or “free-floating” objects, are typically used. Although simple displays have undeniable advantages, natural scenes remain a very useful and attractive stimulus for change blindness research. To assist researchers interested in using natural-scene stimuli in change blindness experiments, we provide here a step-by-step tutorial on how to produce changes in natural-scene images with a freely available image-processing tool (GIMP). We explain how changes in a scene can be made by deleting objects or relocating them within the scene or by changing the color of an object, in just a few simple steps. We also explain how the physical properties of such changes can be analyzed using GIMP and MATLAB (a high-level scientific programming tool). Finally, we present an experiment confirming that scenes manipulated according to our guidelines are effective in inducing change blindness and demonstrating the relationship between change blindness and the physical properties of the change and inter-individual differences in performance measures. We expect that this tutorial will be useful for researchers interested in studying the mechanisms of change blindness, attention, or visual memory using natural scenes.
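The last analysis step the tutorial mentions, quantifying the physical properties of a change, can be sketched without GIMP or MATLAB. Below is a minimal NumPy stand-in (the tutorial itself uses GIMP and MATLAB); the two measures and the threshold value are assumptions chosen for illustration.

```python
import numpy as np

def change_properties(original, modified, threshold=10):
    """Two simple physical measures of a scene change:
    RMS luminance difference and the fraction of pixels that changed."""
    diff = np.abs(original.astype(float) - modified.astype(float))
    rms = float(np.sqrt((diff ** 2).mean()))
    changed_fraction = float((diff > threshold).mean())
    return rms, changed_fraction
```

Measures like these let change magnitude be entered as a predictor of detection latency across scenes and observers.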

6.
7.
What is the nature of the representation formed during the viewing of natural scenes? We tested two competing hypotheses regarding the accumulation of visual information during scene viewing. The first holds that coherent visual representations disintegrate as soon as attention is withdrawn from an object and thus that the visual representation of a scene is exceedingly impoverished. The second holds that visual representations do not necessarily decay upon the withdrawal of attention, but instead can be accumulated in memory from previously attended regions. Target objects in line drawings of natural scenes were changed during a saccadic eye movement away from those objects. Three findings support the second hypothesis. First, changes to the visual form of target objects (token substitution) were successfully detected, as indicated by both explicit and implicit measures, even though the target object was not attended when the change occurred. Second, these detections were often delayed until well after the change. Third, changes to semantically inconsistent target objects were detected better than changes to semantically consistent objects.

8.
Scene Perception and Its Research Paradigms
Scene perception concerns how people perceive and process information from complex, real-world environments. A scene comprises two key components, objects and background, and scene stimuli can be divided into three types according to their degree of complexity and realism. Existing research has explained the extraction and processing of scene information mainly in terms of top-down and bottom-up processing, and some studies have attempted to explain it through the interaction of the two. In addition, depending on their experimental goals and techniques, researchers have adopted several paradigms to investigate scene perception: eye tracking, contextual cueing, object detection, change detection, and dot-cue following. Four issues in scene perception research call for further exploration: the definition of a scene, integration across paradigms, the internal validity of the research, and the modes of processing at different stages.

9.
What controls how long the eyes remain fixated during scene perception? We investigated whether fixation durations are under the immediate control of the quality of the current scene image. Subjects freely viewed photographs of scenes in preparation for a later memory test while their eye movements were recorded. Using the saccade-contingent display change method, scenes were degraded (Experiment 1) or enhanced (Experiment 2) via blurring (low-pass filtering) during predefined saccades. Results showed that fixation durations immediately after a display change were influenced by the degree of blur, with a monotonic relationship between degree of blur and fixation duration. The results also demonstrated that fixation durations can be both increased and decreased by changes in the degree of blur. The results suggest that fixation durations in scene viewing are influenced by the ease of processing of the image currently in view. The results are consistent with models of saccade generation in scenes in which moment-to-moment difficulty in visual and cognitive processing modulates fixation durations.
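The blur manipulation at the heart of these experiments is low-pass filtering, which can be sketched as a separable Gaussian filter. Here is a minimal NumPy version assuming a grayscale image and reflect padding; the original experiments applied such filtering gaze-contingently during saccades, which this sketch does not attempt, and the 3-sigma kernel radius is an illustrative choice.

```python
import numpy as np

def gaussian_blur(image, sigma):
    """Separable Gaussian low-pass filter; larger sigma removes more detail."""
    radius = int(3 * sigma)
    x = np.arange(-radius, radius + 1)
    kernel = np.exp(-x ** 2 / (2 * sigma ** 2))
    kernel /= kernel.sum()
    padded = np.pad(image.astype(float), radius, mode='reflect')
    # Filter rows, then columns (separability of the Gaussian).
    out = np.apply_along_axis(lambda r: np.convolve(r, kernel, mode='valid'), 1, padded)
    out = np.apply_along_axis(lambda c: np.convolve(c, kernel, mode='valid'), 0, out)
    return out
```

Increasing sigma yields the graded degradation the study needs: fine detail (local contrast) falls off monotonically with filter strength.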

10.
Participants' eye movements were monitored in two scene viewing experiments that manipulated the task-relevance of scene stimuli and their availability for extrafoveal processing. In both experiments, participants viewed arrays containing eight scenes drawn from two categories. The arrays of scenes were either viewed freely (Free Viewing) or in a gaze-contingent viewing mode where extrafoveal preview of the scenes was restricted (No Preview). In Experiment 1a, participants memorized the scenes from one category that was designated as relevant, and in Experiment 1b, participants chose their preferred scene from within the relevant category. We examined first fixations on scenes from the relevant category compared to the irrelevant category (Experiments 1a and 1b), and those on the chosen scene compared to other scenes not chosen within the relevant category (Experiment 1b). A survival analysis was used to estimate the first discernible influence of the task-relevance on the distribution of first-fixation durations. In the free viewing condition in Experiment 1a, the influence of task relevance occurred as early as 81 ms from the start of fixation. In contrast, the corresponding value in the no preview condition was 254 ms, demonstrating the crucial role of extrafoveal processing in enabling direct control of fixation durations in scene viewing. First fixation durations were also influenced by whether or not the scene was eventually chosen (Experiment 1b), but this effect occurred later and affected fewer fixations than the effect of scene category, indicating that the time course of scene processing is an important variable mediating direct control of fixation durations.
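The survival-analysis logic can be sketched as follows: compute, for each condition, the proportion of first fixations still ongoing at each millisecond, then find the earliest time at which the two curves separate. This toy version uses a fixed separation criterion; the published divergence-point approach relies on bootstrap resampling and confidence intervals, so every name and parameter here is an assumption.

```python
import numpy as np

def survival_curve(durations, t_max=600):
    """S(t): proportion of first-fixation durations still ongoing at time t (ms)."""
    d = np.asarray(durations, dtype=float)
    t = np.arange(t_max)
    return (d[None, :] > t[:, None]).mean(axis=1)

def divergence_point(cond_a, cond_b, criterion=0.05, t_max=600):
    """Earliest t at which the two survival curves differ by more than
    `criterion` (simplified; the published approach bootstraps this)."""
    apart = np.abs(survival_curve(cond_a, t_max) -
                   survival_curve(cond_b, t_max)) > criterion
    return int(np.argmax(apart)) if apart.any() else None
```

An early divergence point (such as the 81 ms reported above) indicates that the manipulation already affects the fastest-terminating fixations, which is the signature of direct control.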

11.
Current models of visual perception suggest that, during scene categorization, low spatial frequencies (LSF) are rapidly processed and activate plausible interpretations of visual input. This coarse analysis would be used to guide subsequent processing of high spatial frequencies (HSF). The present study aimed to further examine how information from LSF and HSF interact and influence each other during scene categorization. In a first experimental session, participants had to categorize LSF and HSF filtered scenes belonging to two different semantic categories (artificial vs. natural). In a second experimental session, we used hybrid scenes as stimuli made by combining LSF and HSF from two different scenes which were semantically similar or dissimilar. Half of the participants categorized LSF scenes in hybrids, and the other half categorized HSF scenes in hybrids. Stimuli were presented for 30 or 100 ms. Session 1 results showed better performance for LSF than HSF scene categorization. Session 2 scene categorization was faster when participants attended to and categorized the LSF rather than the HSF scene in hybrids. The semantic interference of a semantically dissimilar HSF scene on LSF scene categorization was greater than the semantic interference of a semantically dissimilar LSF scene on HSF scene categorization, irrespective of exposure duration. These results suggest an LSF advantage for scene categorization, and highlight the prominent role of HSF information when there is uncertainty about the visual stimulus, in order to disambiguate alternative interpretations.
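The hybrid-scene construction described above, combining the low spatial frequencies of one scene with the high spatial frequencies of another, can be sketched with any low-pass filter. Below is a minimal NumPy version using a box blur as the LSF filter; the study used proper spatial-frequency filtering, and the kernel size here is an arbitrary assumption.

```python
import numpy as np

def box_blur(img, k=9):
    """Crude low-pass filter: k-by-k box average over an edge-padded image."""
    pad = k // 2
    padded = np.pad(img.astype(float), pad, mode='edge')
    out = np.zeros(img.shape, dtype=float)
    for dy in range(k):
        for dx in range(k):
            out += padded[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out / (k * k)

def hybrid_scene(lsf_source, hsf_source, k=9):
    """LSF of one scene plus the HSF residual (image minus its LSF) of another."""
    return box_blur(lsf_source, k) + (hsf_source.astype(float) - box_blur(hsf_source, k))
```

A sanity check on the decomposition: recombining a scene's own LSF and HSF components must reconstruct the original scene.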

12.
Changes in perception during space missions are usually attributed to microgravity. However, additional factors, such as spatial confinement, may contribute to changes in perception. We tested changes in scene perception using a boundary extension (BE) paradigm during a 105-day Earth-based space-simulation study. In addition to the close-up/wide-angle views used in BE, we presented two types of scenes based on the distance from the observer (proximal/distant scenes). In crew members (n = 6), we found that BE partly increased over time, but the size of BE error did not change in the control group (n = 22). We propose that this effect is caused by an increasing BE effect in stimuli that depict distant scenes and is related to spatial confinement. The results might be important for other situations of spatial confinement with restricted visual depth (e.g., submarine crew, patients confined to a bed). Generally, we found a larger BE effect in proximal scenes compared with the distant scenes. We also demonstrated that with no feedback, subjects preserve the level of the BE effect during repeated measurements.

13.
In two experiments we examined whether the allocation of attention in natural scene viewing is influenced by the gaze cues (head and eye direction) of an individual appearing in the scene. Each experiment employed a variant of the flicker paradigm in which alternating versions of a scene and a modified version of that scene were separated by a brief blank field. In Experiment 1, participants were able to detect the change made to the scene sooner when an individual appearing in the scene was gazing at the changing object than when the individual was absent, gazing straight ahead, or gazing at a nonchanging object. In addition, participants' ability to detect change deteriorated linearly as the changing object was located progressively further from the line of regard of the gazer. Experiment 2 replicated this change detection advantage of gaze-cued objects in a modified procedure using more critical scenes, a forced-choice change/no-change decision, and accuracy as the dependent variable. These findings establish that in the perception of static natural scenes and in a change detection task, attention is preferentially allocated to objects that are the target of another's social attention.

14.
Is boundary extension (false memory beyond the edges of the view) determined solely by the schematic structure of the view or does the quality of the pictorial information impact this error? To examine this, colour photographs or line-drawings of 12 multi-object scenes (Experiment 1: N=64) and 16 single-object scenes (Experiment 2: N=64) were presented for 14 s each. At test, the same pictures were each rated as being the “same”, “closer-up”, or “farther away” (five-point scale). Although the layout, the scope of the view, the distance of the main objects to the edges, the background space and the gist of the scenes were held constant, line drawings yielded greater boundary extension than did their photographic counterparts for multi-object (Experiment 1) and single-object (Experiment 2) scenes. Results are discussed in the context of the multisource model and its implications for the study of scene perception and memory.
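Scoring the boundary rating task above can be sketched by mapping the five response options onto a -2..+2 scale and averaging per condition, so that a negative mean for identical test pictures indicates boundary extension. The response labels, coding, and condition names below are illustrative assumptions, not the authors' exact scheme.

```python
from statistics import mean

# Hypothetical coding of the five-point boundary rating scale.
SCALE = {"much closer-up": -2, "closer-up": -1, "same": 0,
         "farther away": 1, "much farther away": 2}

def be_scores(responses_by_condition):
    """Mean boundary rating per condition; a negative mean indicates boundary
    extension (the identical test picture is judged too close relative to memory)."""
    return {cond: mean(SCALE[r] for r in resp)
            for cond, resp in responses_by_condition.items()}
```

Under this coding, the result reported above would appear as a more negative mean for line drawings than for photographs.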

15.

16.
It is well established that scenes and objects elicit a highly selective response in specific brain regions in the ventral visual cortex. An inherent difference between these categories that has not been explored yet is their perceived distance from the observer (i.e. scenes are distal whereas objects are proximal). The current study aimed to test the extent to which scene and object selective areas are sensitive to perceived distance information independently from their category-selectivity and retinotopic location. We conducted two studies that used a distance illusion (i.e., the Ponzo lines) and showed that scene regions (the parahippocampal place area, PPA, and transverse occipital sulcus, TOS) are biased toward perceived distal stimuli, whereas the lateral occipital (LO) object region is biased toward perceived proximal stimuli. These results suggest that the ventral visual cortex plays a role in representing distance information, extending recent findings on the sensitivity of these regions to location information. More broadly, our findings imply that distance information is inherent to object recognition.

17.
Using naturalistic scenes, we recently demonstrated that confidence-accuracy relations differ depending on whether recognition responses are based on memory for a specific feature or instead on general familiarity: When confidence is controlled for, accuracy is higher for familiarity-based than for feature-based responses. In the present experiment, we show that these results generalize to face recognition. Subjects studied photographs of scenes and faces presented for varying brief durations and received a recognition test on which they (1) indicated whether each picture was old or new, (2) rated their confidence in their response, and (3) indicated whether their response was based on memory for a feature or on general familiarity. For both stimulus types, subjects were more accurate and more confident for their feature-based than for their familiarity-based responses. However, when confidence was held constant, accuracy was higher for familiarity-based than for feature-based responses. These results demonstrate an important similarity between face and scene recognition and show that for both types of stimuli, confidence and accuracy are based on different information.

18.
Knowing where people look on a face provides an objective insight into the information entering the visual system and into cognitive processes involved in face perception. In the present study, we recorded eye movements of human participants while they compared two faces presented simultaneously. Observers’ viewing behavior and performance was examined in two tasks of parametrically varying difficulty, using two types of face stimuli (sex morphs and identity morphs). The frequency, duration, and temporal sequence of fixations on previously defined areas of interest in the faces were analyzed. As was expected, viewing behavior and performance varied with difficulty. Interestingly, observers compared predominantly the inner halves of the face stimuli—a result inconsistent with the general left-hemiface bias reported for single faces. Furthermore, fixation patterns and performance differed between tasks, independently of stimulus type. Moreover, we found differences in male and female participants’ viewing behaviors, but only when the sex of the face stimuli was task relevant.

19.
Papathomas TV, Bono LM. Perception, 2004, 33(9): 1129-1138.
Earlier psychophysical and physiological studies, obtained mostly with two-dimensional (2-D) stimuli, provided evidence for the hypothesis that the processing of faces differs from that of scenes. We report on our experiments, employing realistic three-dimensional (3-D) stimuli of a hollow mask and a scene, that offer further evidence for this hypothesis. The stimuli used for both faces and scenes were bistable, namely they could elicit either the veridical or an illusory volumetric percept. Our results indicate that the illusion is weakened when the stimuli are inverted, suggesting the involvement of top-down processes. This inversion effect is statistically significant for the facial stimulus, but the trend did not reach statistical significance for the scene stimulus. These results support the hypothesis that configural processing is stronger for the 3-D perception of faces than it is for scenes, and extend the conclusions of earlier studies on 2-D stimuli.

20.
The effect of varying information for overall depth in a simulated 3-D scene on the perceived layout of objects in the scene was investigated in two experiments. Subjects were presented with displays simulating textured surfaces receding in depth. Pairs of markers were positioned at equal intervals within the scenes. The subject's task was to judge the depth between the intervals. Overall scene depth was varied by viewing through either a collimating lens or a glass disk. Judged depth for equal depth intervals decreased with increasing distance of the interval from the front of the scene. Judged depth was greater for collimated than for non-collimated viewing. Interestingly, collimated viewing resulted in a uniform rescaling of the perceived depth intervals.
