Similar Articles
20 similar articles found (search time: 46 ms)
1.
Research on contextual cueing has demonstrated that with simple arrays of letters and shapes, search for a target increases in efficiency as associations between a search target and its surrounding visual context are learned. We investigated whether the visual context afforded by repeated exposure to real-world scenes can also guide attention when the relationship between the scene and a target position is arbitrary. Observers searched for and identified a target letter embedded in photographs of real-world scenes. Although search time within novel scenes was consistent across trials, search time within repeated scenes decreased across repetitions. Unlike previous demonstrations of contextual cueing, however, memory for scene-target covariation was explicit. In subsequent memory tests, observers recognized repeated contexts more often than those that were presented once and displayed superior recall of target position within the repeated scenes. In addition, repetition of inverted scenes, which made the scenes more difficult to identify, produced a markedly reduced rate of learning, suggesting that semantic information concerning object and scene identity is used to guide attention.

2.
Humans are remarkably efficient in detecting highly familiar object categories in natural scenes, with evidence suggesting that such object detection can be performed in the (near) absence of attention. Here we systematically explored the influences of both spatial attention and category-based attention on the accuracy of object detection in natural scenes. Manipulating both types of attention additionally allowed for addressing how these factors interact: whether the requirement for spatial attention depends on the extent to which observers are prepared to detect a specific object category—that is, on category-based attention. The results showed that the detection of targets from one category (animals or vehicles) was better than the detection of targets from two categories (animals and vehicles), demonstrating the beneficial effect of category-based attention. This effect did not depend on the semantic congruency of the target object and the background scene, indicating that observers attended to visual features diagnostic of the foreground target objects from the cued category. Importantly, in three experiments the detection of objects in scenes presented in the periphery was significantly impaired when observers simultaneously performed an attentionally demanding task at fixation, showing that spatial attention affects natural scene perception. In all experiments, the effects of category-based attention and spatial attention on object detection performance were additive rather than interactive. Finally, neither spatial nor category-based attention influenced metacognitive ability for object detection performance. These findings demonstrate that efficient object detection in natural scenes is independently facilitated by spatial and category-based attention.

3.
Searching for items in one’s environment often includes considerable reliance on semantic knowledge. The present study examines the importance of semantic information in visual and memory search, especially with respect to whether the items reside in long-term or working memory. In Experiment 1, participants engaged in hybrid visual memory search for items that were either highly familiar or novel. Importantly, the relatively large number of targets in this hybrid search task necessitated that targets be stored in some form of long-term memory. We found that search for familiar objects was more efficient than search for novel objects. In Experiment 2, we investigated search for familiar versus novel objects when the number of targets was low enough to be stored in working memory. We also manipulated how often participants in Experiment 2 were required to update their target (every trial vs. every block) in order to control for target templates that were stored in long-term memory as a result of repeated exposure over trials. We found no differences in search efficiency for familiar versus novel objects when templates were stored in working memory. Our results suggest that while semantic information may provide additional individuating features that are useful for object recognition in hybrid search, this information could be irrelevant or even distracting when searching for targets stored in working memory.

4.
Three experiments were conducted to investigate the existence of incidentally acquired, long-term, detailed visual memory for objects embedded in previously viewed scenes. Participants performed intentional memorization and incidental visual search learning tasks while viewing photographs of real-world scenes. A visual memory test for previously viewed objects from these scenes then followed. Participants were not aware that they would be tested on the scenes following incidental learning in the visual search task. In two types of memory tests for visually specific object information (token discrimination and mirror-image discrimination), performance following both the memorization and visual search conditions was reliably above chance. These results indicate that recent demonstrations of good visual memory during scene viewing are not due to intentional scene memorization. Instead, long-term visual representations are incidentally generated as a natural product of scene perception.

5.
In visual search, observers try to find known target objects among distractors in visual scenes where the location of the targets is uncertain. This review article discusses the attentional processes that are active during search and their neural basis. Four successive phases of visual search are described. During the initial preparatory phase, a representation of the current search goal is activated. Once visual input has arrived, information about the presence of target-matching features is accumulated in parallel across the visual field (guidance). This information is then used to allocate spatial attention to particular objects (selection), before representations of selected objects are activated in visual working memory (recognition). These four phases of attentional control in visual search are characterized both at the cognitive level and at the neural implementation level. It will become clear that search is a continuous process that unfolds in real time. Selective attention in visual search is described as the gradual emergence of spatially specific and temporally sustained biases for representations of task-relevant visual objects in cortical maps.

6.
If priming effects serve an adaptive function, they have to be both robust and flexible. In four experiments, we demonstrated regular evaluative-priming effects for relatively long stimulus-onset asynchronies, which can, however, be eliminated or reversed strategically. When participants responded to both primes and targets, rather than only to targets, the standard congruity effect disappeared. In Experiments 1a–1c, this result was regularly obtained, independently of the prime response (valence or gender classification) and the response mode (pronunciation or keystroke). In Experiment 2, we showed that once the default congruity effect was eliminated, strategic-priming effects reflected the statistical contingency between prime valence and target valence. Positive contingencies produced congruity, whereas negative contingencies produced equally strong incongruity effects. Altogether, these findings are consistent with an adaptive-cognitive perspective, which highlights the role of flexible strategic processes in working memory as opposed to fixed structures in semantic long-term memory or in the sensorimotor system.

7.
We investigated whether the deployment of attention in scenes is better explained by visual salience or by cognitive relevance. In two experiments, participants searched for target objects in scene photographs. The objects appeared in semantically appropriate locations but were not visually salient within their scenes. Search was fast and efficient, with participants much more likely to look to the targets than to the salient regions. This difference was apparent from the first fixation and held regardless of whether participants were familiar with the visual form of the search targets. In the majority of trials, salient regions were not fixated. The critical effects were observed for all 24 participants across the two experiments. We outline a cognitive relevance framework to account for the control of attention and fixation in scenes.

8.
Research has demonstrated that objects in natural scenes are categorized without the deployment of attention. However, in these types of studies, participants were required to directly respond to peripherally presented scenes, which might lead some participants to move their attention. If this is the case, the above conclusion concerning natural scenes may not be valid. We investigated this issue by using a negative priming (NP) paradigm in which participants did not directly respond to peripheral stimuli. Our results showed an NP effect from ignored stimuli in natural scene categorization, but not in letter discrimination (Experiment 1) or in line-drawing categorization (Experiment 2). In addition, NP effects were observed even when probe stimuli were words (Experiments 3A and 3B). These findings suggest that people can categorize objects in natural scenes with minimal attention, that this process is specific to natural scenes, and that it is based on the semantic information of the images.

9.
Visual search (e.g., finding a specific object in an array of other objects) is performed most effectively when people are able to ignore distracting nontargets. In repeated search, however, incidental learning of object identities may facilitate performance. In three experiments, with over 1,100 participants, we examined the extent to which search could be facilitated by object memory and by memory for spatial layouts. Participants searched for new targets (real-world, nameable objects) embedded among repeated distractors. To make the task more challenging, some participants performed search for multiple targets, increasing demands on visual working memory (WM). Following search, memory for search distractors was assessed using a surprise two-alternative forced choice recognition memory test with semantically matched foils. Search performance was facilitated by distractor object learning and by spatial memory; it was most robust when object identity was consistently tied to spatial locations and weakest (or absent) when object identities were inconsistent across trials. Incidental memory for distractors was better among participants who searched under high WM load, relative to low WM load. These results were observed when visual search included exhaustive-search trials (Experiment 1) or when all trials were self-terminating (Experiment 2). In Experiment 3, stimulus exposure was equated across WM load groups by presenting objects in a single-object stream; recognition accuracy was similar to that in Experiments 1 and 2. Together, the results suggest that people incidentally generate memory for nontarget objects encountered during search and that such memory can facilitate search performance.

10.
An area of research that has experienced recent growth is the study of memory during perception of simple and complex auditory scenes. These studies have provided important information about how well auditory objects are encoded in memory and how well listeners can notice changes in auditory scenes. These are significant developments because they present an opportunity to better understand how we hear in realistic situations, how higher-level aspects of hearing such as semantics and prior exposure affect perception, and the similarities and differences between auditory perception and perception in other modalities, such as vision and touch. The research also poses exciting challenges for behavioral and neural models of how auditory perception and memory work.

11.
How does visual long-term memory store representations of different entities (e.g., objects, actions, and scenes) that are present in the same visual event? Are the different entities stored as an integrated representation in memory, or are they stored separately? To address this question, we asked observers to view a large number of events; in each event, an action was performed within a scene. Afterward, the participants were shown pairs of action–scene sets and indicated which of the two they had seen. When the task required recognizing the individual actions and scenes, performance was high (80 %). Conversely, when the task required remembering which actions had occurred within which scenes, performance was significantly lower (59 %). We observed this dissociation between memory for individual entities and memory for entity bindings across multiple testing conditions and presentation durations. These experiments indicate that visual long-term memory stores information about actions and information about scenes separately from one another, even when an action and scene were observed together in the same visual event. These findings also highlight an important limitation of human memory: Situations that require remembering actions and scenes as integrated events (e.g., eyewitness testimony) may be particularly vulnerable to memory errors.

12.
The ability to create temporary binding representations of information from different sources in working memory has recently been found to relate to the development of monolingual word recognition in children. The current study explored this possible relationship in an adult word-learning context. We assessed whether the relationship between cross-modal working memory binding and lexical development would be observed in the learning of associations between unfamiliar spoken words and their semantic referents, and whether it would vary across experimental conditions in first- and second-language word learning. A group of English monolinguals were recruited to learn 24 spoken disyllabic Mandarin Chinese words in association with either familiar or novel objects as semantic referents. They also completed a working memory task in which their ability to temporarily bind auditory-verbal and visual information was measured. Participants’ performance on this task was uniquely linked to their learning and retention of words both for novel objects and for familiar objects. This suggests that, at least for spoken language, cross-modal working memory binding might play a similar role in second-language-like situations (i.e., learning new words for familiar objects) and in more native-like situations (i.e., learning new words for novel objects). Our findings provide new evidence for the role of cross-modal working memory binding in L1 word learning and further indicate that early stages of picture-based word learning in L2 might rely on cognitive processes similar to those in L1.

13.
The change blindness paradigm, in which participants often fail to notice substantial changes in a scene, is a popular tool for studying scene perception, visual memory, and the link between awareness and attention. Some of the most striking and popular examples of change blindness have been demonstrated with digital photographs of natural scenes; in most studies, however, much simpler displays, such as abstract stimuli or “free-floating” objects, are typically used. Although simple displays have undeniable advantages, natural scenes remain a very useful and attractive stimulus for change blindness research. To assist researchers interested in using natural-scene stimuli in change blindness experiments, we provide here a step-by-step tutorial on how to produce changes in natural-scene images with a freely available image-processing tool (GIMP). We explain how changes in a scene can be made by deleting objects or relocating them within the scene or by changing the color of an object, in just a few simple steps. We also explain how the physical properties of such changes can be analyzed using GIMP and MATLAB (a high-level scientific programming tool). Finally, we present an experiment confirming that scenes manipulated according to our guidelines are effective in inducing change blindness and demonstrating the relationship between change blindness and the physical properties of the change and inter-individual differences in performance measures. We expect that this tutorial will be useful for researchers interested in studying the mechanisms of change blindness, attention, or visual memory using natural scenes.
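The tutorial above analyzes the physical properties of a change with GIMP and MATLAB; those scripts are not reproduced in the abstract. As an illustration only, here is a minimal Python sketch (the function name and the synthetic demo images are hypothetical, not the paper's materials) of one such property analysis: differencing the pre- and post-change versions of a scene to measure how large and how intense the change is.

```python
import numpy as np

def change_metrics(original, modified, threshold=30):
    """Quantify the physical change between two versions of a scene.

    original, modified: H x W x 3 uint8 arrays (e.g., the two scene
    versions exported from GIMP). Returns the fraction of pixels that
    changed and the mean absolute difference within the changed region.
    """
    # Per-pixel difference, taking the largest deviation across RGB channels
    diff = np.abs(original.astype(int) - modified.astype(int)).max(axis=2)
    changed = diff > threshold          # binary mask of the changed region
    area = changed.mean()               # proportion of the scene that changed
    magnitude = diff[changed].mean() if changed.any() else 0.0
    return area, magnitude

# Demo with a synthetic 100x100 gray scene in which a 20x20 "object"
# has been recolored (one of the manipulations described above)
scene = np.full((100, 100, 3), 128, dtype=np.uint8)
altered = scene.copy()
altered[40:60, 40:60] = 200
area, magnitude = change_metrics(scene, altered)
print(round(float(area), 2), float(magnitude))  # → 0.04 72.0
```

A change's detectability can then be related to such metrics across observers, in the spirit of the inter-individual analysis the abstract reports.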

14.
Statistical learning has been widely proposed as a mechanism by which observers learn to decompose complex sensory scenes. To determine how robust statistical learning is, we investigated the impact of attention and perceptual grouping on statistical learning of visual shapes. Observers were presented with stimuli containing two shapes that were either connected by a bar or unconnected. When observers were required to attend to both locations at which shapes were presented, the degree of statistical learning was unaffected by whether the shapes were connected or not. However, when observers were required to attend to just one of the shapes' locations, statistical learning was observed only when the shapes were connected. These results demonstrate that visual statistical learning is not just a passive process. It can be modulated by both attention and connectedness, and in natural scenes these factors may constrain the role of stimulus statistics in learning.

15.
Conceptual representations of everyday scenes are built in interaction with the visual environment, and these representations guide our visual attention. Perceptual features and object-scene semantic consistency have been found to attract our attention during scene exploration. The present study examined how visual attention in 24-month-old toddlers is attracted by semantic violations and how perceptual features (i.e., saliency, centre distance, clutter, and object size) and linguistic properties (i.e., object-label frequency and label length) affect gaze distribution. We compared eye movements of 24-month-old toddlers and adults while they explored everyday scenes containing either an inconsistent (e.g., soap on a breakfast table) or a consistent (e.g., soap in a bathroom) object. Perceptual features such as saliency, centre distance, and clutter of the scene affected looking times in the toddler group during the whole viewing time, whereas looking times in adults were affected only by centre distance during the early viewing time. Adults looked longer at inconsistent than at consistent objects whether the objects had high or low saliency. In contrast, toddlers showed a semantic-consistency effect only when objects were highly salient. Additionally, toddlers with lower vocabulary skills looked longer at inconsistent objects, while toddlers with higher vocabulary skills looked equally long at consistent and inconsistent objects. Our results indicate that 24-month-old children use scene context to guide visual attention when exploring the visual environment. However, perceptual features have a stronger influence on eye-movement guidance in toddlers than in adults. Our results also indicate that language skills influence cognitive but not perceptual guidance of eye movements during scene perception in toddlers.

16.
Viewpoint-dependence is a well-known phenomenon in which participants' spatial memory is better for previously experienced points of view than for novel ones. In the current study, partial-scene-recognition was used to examine the effect of coincident orientation of all the objects on viewpoint-dependence in spatial memory. When objects in scenes had no clear orientations (e.g., balls), participants' recognition of experienced directions was better than that of novel ones, indicating that there was viewpoint-dependence. However, when the objects in scenes were toy bears with clear orientations, the coincident orientation of objects (315 degrees), which was not experienced, shared the advantage of the experienced direction (0 degrees), and participants were equally likely to choose either direction when reconstructing the spatial representation in memory. These findings suggest that coincident orientation of objects may affect egocentric representations in spatial memory.

17.
When people hold several objects (such as digits or words) in working memory and select one for processing, switching to a new object takes longer than selecting the same object as that on the preceding processing step. Similarly, selecting a new task incurs task-switching costs. This work investigates the selection of objects and of tasks in working memory using a combination of object-switching and task-switching paradigms. Participants used spatial cues to select one digit held in working memory and colour cues to select one task (addition or subtraction) to apply to it. Across four experiments the mapping between objects and their cues and the mapping between tasks and their cues were varied orthogonally. When mappings varied from trial to trial for both objects and tasks, switch costs for objects and tasks were additive, as predicted by sequential selection or resource sharing. When at least one mapping was constant across trials, allowing learning of long-term associations, switch costs were underadditive, as predicted by partially parallel selection. The number of objects in working memory affected object-switch costs but not task-switch costs, counter to the notion of a general resource of executive attention.

18.
康廷虎  白学军 《心理科学》2013,36(3):558-569
Using an eye-movement paradigm, two experiments examined the effects of target-stimulus change and the attributes of scene information on scene recognition. Experiment 1 showed that changing the target stimulus significantly affected both scene recognition and gaze duration within the interest area containing the target, indicating that during scene recognition observers deliberately search for the target stimulus, which thus has a diagnostic effect. Experiment 2 used two types of scene materials, in which perceptual and semantic information either coincided or were separated. The results showed that observers made significantly more first fixations on semantic information than on perceptual information, and that first-fixation durations on semantic information were significantly longer in the separated condition than in the coincident condition. These results suggest that semantic information has attentional priority during scene recognition, but that this priority can be disrupted by the priming of perceptual information.

19.
In two experiments, the nature of the relation between attention available at learning and subsequent automatic and controlled influences of memory was explored. Participants studied word lists in full and divided encoding conditions. Memory for the word lists was then tested with a perceptually driven task (stem completion) in Experiment 1 and with a conceptually driven task (category association) in Experiment 2. For recall cued with word stems, automatic influences of memory derived using the process-dissociation procedure remained invariant across a manipulation of attention that substantially reduced conscious recollection for the learning episode. In contrast, for recall cued with category names, dividing attention at learning significantly reduced the parameter estimates representing both controlled and automatic memory processes. These findings were similar to those obtained using indirect test instructions. The results suggest that, in contrast to perceptual priming, conceptual priming may be enhanced by semantic processing, and this effect is not an artifact of contamination from conscious retrieval processes.
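The process-dissociation estimates mentioned above are not spelled out in the abstract; in the standard formulation (Jacoby, 1991), controlled recollection R and automatic influence A are solved from inclusion and exclusion test performance:

```latex
P(\text{inclusion}) = R + A(1 - R), \qquad
P(\text{exclusion}) = A(1 - R)
```

so that

```latex
R = P(\text{inclusion}) - P(\text{exclusion}), \qquad
A = \frac{P(\text{exclusion})}{1 - R}
```

It is these R and A parameters that the divided-attention manipulation selectively affected for stem-cued versus category-cued recall.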

20.
When viewing real-world scenes composed of a myriad of objects, detecting changes can be quite difficult, especially when transients are masked. In general, changes are noticed more quickly and accurately if they occur at the currently (or a recently) attended location. Here, we examine the effects of explicit and implicit semantic cues on the guidance of attention in a change detection task. Participants first attempted to read aloud a briefly presented prime word, then looked for a difference between two alternating versions of a real-world scene. Helpful primes named the object that changed, while misdirecting primes named another (unchanging) object in the picture. Robust effects were found for both explicit and implicit priming conditions, with helpful primes yielding faster change detection times than misdirecting or neutral primes. This demonstrates that observers are able to use higher order semantic information as a cue to guide attention within a natural scene, even when the semantic information is presented outside of explicit awareness.

