Similar Articles
20 similar articles found (search time: 15 ms)
1.
When two identical visual items are presented in rapid succession, people often fail to report the second instance when trying to recall both (e.g., Kanwisher, 1987). We investigated whether this temporal processing deficit is modulated by the spatial separation between the repeated stimuli within both audition and vision. In Experiment 1, lists of one to three digits were rapidly presented from loudspeaker cones arranged in a semicircle around the participant. Recall accuracy was lower when repeated digits were presented from different positions rather than from the same position, as compared to unrepeated control pairs, demonstrating that auditory repetition deafness (RD) is modulated by the spatial displacement between repeated items. A similar spatial modulation of visual repetition blindness (RB) was reported when pairs of masked letters were presented visually from either the same or different positions arranged on a semicircle around fixation (Experiment 2). These results cannot easily be accounted for by the token individuation hypothesis of RB (Kanwisher, 1987; Park & Kanwisher, 1994) and instead support a recognition failure account (Hochhaus & Johnston, 1996; Luo & Caramazza, 1995, 1996).

2.
The present study examined the extent to which learning mechanisms are deployed on semantic-categorical regularities during visual search within real-world scenes. The contextual cueing paradigm was used with photographs of indoor scenes in which the semantic category did or did not predict the target position on the screen. No facilitation effect was observed in the predictive condition compared to the nonpredictive condition when participants were merely instructed to search for a target T or L (Experiment 1). However, a rapid contextual cueing effect occurred when each display containing the search target was preceded by a preview of the scene on which participants had to make a decision regarding the scene's category (Experiment 2). A follow-up explicit memory task indicated that this benefit resulted from implicit learning. Similar implicit contextual cueing effects were also obtained when the scene to categorize was different from the subsequent search scene (Experiment 3) and when a mere preview of the search scene preceded the visual search (Experiment 4). These results suggest that, although enhanced processing of the scene was required with the present material, implicit semantic learning can nevertheless take place when the category is task irrelevant.

3.
Because the importance of color in visual tasks such as object identification and scene memory has been debated, we sought to determine whether color is used to guide visual search in contextual cuing with real-world scenes. In Experiment 1, participants searched for targets in repeated scenes that were shown in one of three conditions: natural colors, unnatural colors that remained consistent across repetitions, and unnatural colors that changed on every repetition. We found that the pattern of learning was the same in all three conditions. In Experiment 2, we did a transfer test in which the repeating scenes were shown in consistent colors that suddenly changed on the last block of the experiment. The color change had no effect on search times, relative to a condition in which the colors did not change. In Experiments 3 and 4, we replicated Experiments 1 and 2, using scenes from a color-diagnostic category of scenes, and obtained similar results. We conclude that color is not used to guide visual search in real-world contextual cuing, a finding that constrains the role of color in scene identification and recognition processes.

4.
Research on contextual cueing has demonstrated that with simple arrays of letters and shapes, search for a target increases in efficiency as associations between a search target and its surrounding visual context are learned. We investigated whether the visual context afforded by repeated exposure to real-world scenes can also guide attention when the relationship between the scene and a target position is arbitrary. Observers searched for and identified a target letter embedded in photographs of real-world scenes. Although search time within novel scenes was consistent across trials, search time within repeated scenes decreased across repetitions. Unlike previous demonstrations of contextual cueing, however, memory for scene-target covariation was explicit. In subsequent memory tests, observers recognized repeated contexts more often than those that were presented once and displayed superior recall of target position within the repeated scenes. In addition, repetition of inverted scenes, which made the scenes more difficult to identify, produced a markedly reduced rate of learning, suggesting that semantic information concerning object and scene identity is used to guide attention.

5.
Wu Yue, Wang Aiping. Psychological Science, 2015, (2), 296-302
This study used Chinese character materials with differing component (radical) characteristics to examine the repetition blindness (RB) effect at the level of character components. Experiment 1 examined, within a stimulus stream, the effect on RB when the two critical stimuli R1 and R2 shared a component that appeared in different positions. The results showed significant differences in RB across conditions, but no significant difference between the condition in which the shared component occupied the same position in both critical characters and the condition in which it occupied different positions, indicating that component position does not significantly affect RB. Experiment 2 examined RB when R1 and R2 stood in a component-containment relation. The results showed that RB occurred when one critical stimulus was a single-component character that also served as a component of the other critical stimulus; the magnitude of the effect depended on the order in which the two critical characters (the compound character and the single-component character) appeared in the stimulus stream, with significantly stronger RB when the second critical stimulus was the single-component character than when it was the compound character. We conclude that the component characteristics of Chinese characters influence the RB effect, suggesting that repetition blindness most likely arises during online perceptual processing.

6.
We used fMRI to directly compare activation in two cortical regions previously identified as relevant to real-world scene processing: retrosplenial cortex and a region of posterior parahippocampal cortex functionally defined as the parahippocampal place area (PPA). We compared activation in these regions to full views of scenes from a global perspective, close-up views of sub-regions from the same scene category, and single objects highly diagnostic of that scene category. Faces were included as a control condition. Activation in the parahippocampal place area was greatest for full scene views that explicitly included the 3D spatial structure of the environment, with progressively less activation for close-up views of local scene regions containing diagnostic objects but less explicitly depicting 3D scene geometry, followed by single scene-diagnostic objects. Faces did not activate the parahippocampal place area. In contrast, activation in retrosplenial cortex was greatest for full scene views, and did not differ among close-up views, diagnostic objects, and faces. The results showed that the parahippocampal place area responds in a graded fashion as images become more completely scene-like and include more explicit 3D structure, whereas retrosplenial cortex responds in a step-wise manner to the presence of a complete scene. These results suggest that scene-processing areas are particularly sensitive to the 3D geometric structure that distinguishes scenes from other types of complex and meaningful visual stimuli.

7.
Background: Neuroanatomical evidence suggests that the human brain has dedicated pathways to rapidly process threatening stimuli. This processing bias for threat was examined using the repetition blindness (RB) paradigm. RB (i.e., failure to report the second instance of an identical stimulus rapidly following the first) has been established for words, objects and faces but not, to date, facial expressions. Methods: 78 (Study 1) and 62 (Study 2) participants identified repeated and different, threatening and non-threatening emotional facial expressions in rapid serial visual presentation (RSVP) streams. Results: In Study 1, repeated facial expressions produced more RB than different expressions. RB was attenuated for threatening expressions. In Study 2, attenuation of RB for threatening expressions was replicated. Additionally, semantically related but non-identical threatening expressions reduced RB relative to non-threatening stimuli. Conclusions: These findings suggest that the threat bias is apparent in the temporal processing of facial expressions, and expands the RB paradigm by demonstrating that identical facial expressions are susceptible to the effect.

9.
Subjects reported either the colors or shapes of two simultaneous masked letters. Our first study found that they were less accurate when the reported features were identical ("repetition blindness," or RB), while repetition along the unreported dimension had no effect. Three follow-up studies confirmed that when the same dimension was judged (overtly or covertly) for both stimuli, performance was only affected by repetition along that dimension. However, when different dimensions were judged for the two stimuli, performance was affected by repetition on both dimensions. These findings support new conclusions about both RB and visual attention. First, RB depends critically on visual attention, rather than simply on the stimulus presented or the overt response required. Second, while attention can be restricted to a single visual dimension, this is efficient only when the same dimension is selected for both objects. Selecting the color of one object and the shape of another simultaneous object results in both dimensions being accessed for both objects.

10.
Human observers are able to rapidly and accurately categorize natural scenes, but the representation mediating this feat is still unknown. Here we propose a framework of rapid scene categorization that does not segment a scene into objects and instead uses a vocabulary of global, ecological properties that describe spatial and functional aspects of scene space (such as navigability or mean depth). In Experiment 1, we obtained ground truth rankings on global properties for use in Experiments 2-4. To what extent do human observers use global property information when rapidly categorizing natural scenes? In Experiment 2, we found that global property resemblance was a strong predictor of both false alarm rates and reaction times in a rapid scene categorization experiment. To what extent is global property information alone a sufficient predictor of rapid natural scene categorization? In Experiment 3, we found that the performance of a classifier representing only these properties is indistinguishable from human performance in a rapid scene categorization task in terms of both accuracy and false alarms. To what extent is this high predictability unique to a global property representation? In Experiment 4, we compared two models that represent scene object information to human categorization performance and found that these models had lower fidelity at representing the patterns of performance than the global property model. These results provide support for the hypothesis that rapid categorization of natural scenes may not be mediated primarily through objects and parts, but rather through global properties of structure and affordance.
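The abstract above describes a classifier that predicts scene category from global properties alone. As a minimal illustration of the idea (not the authors' model), one can assign a scene to the category whose mean global-property vector is nearest; the property names and all numeric values below are hypothetical:

```python
import math

# Hypothetical mean global-property vectors per category:
# [openness, mean depth, navigability]; numbers are illustrative only.
centroids = {
    "beach":  [0.9, 0.8, 0.7],
    "forest": [0.2, 0.5, 0.4],
}

def classify(props):
    # Nearest-centroid rule: pick the category whose centroid vector
    # has the smallest Euclidean distance to the scene's property vector.
    return min(centroids, key=lambda c: math.dist(centroids[c], props))

print(classify([0.85, 0.75, 0.65]))  # -> beach
print(classify([0.10, 0.40, 0.50]))  # -> forest
```

The point of the sketch is only that no object segmentation occurs anywhere: the decision uses the holistic property vector directly.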

11.
Repetition blindness (Kanwisher, 1986, 1987) has been defined as the failure to detect or recall repetitions of words presented in rapid serial visual presentation (RSVP). The experiments presented here suggest that repetition blindness (RB) is a more general visual phenomenon, and examine its relationship to feature integration theory (Treisman & Gelade, 1980). Experiment 1 shows RB for letters distributed through space, time, or both. Experiment 2 demonstrates RB for repeated colors in RSVP lists. In Experiments 3 and 4, RB was found for repeated letters and colors in spatial arrays. Experiment 5 provides evidence that the mental representations of discrete objects (called "visual tokens" here) that are necessary to detect visual repetitions (Kanwisher, 1987) are the same as the "object files" (Kahneman & Treisman, 1984) in which visual features are conjoined. In Experiment 6, repetition blindness for the second occurrence of a repeated letter resulted only when the first occurrence was attended to. The overall results suggest that a general dissociation between types and tokens in visual information processing can account for both repetition blindness and illusory conjunctions.

12.
The repetition blindness effect (RB) occurs when individuals are unable to recall a repeated word relative to a nonrepeated word in a sentence or string of words presented in a rapid serial visual presentation task. This effect was explored across languages (English and Spanish) in an attempt to provide evidence for RB at a conceptual level using noncognate translation equivalents (e.g., nephew-sobrino). In the first experiment, RB was found when a word was repeated in an English sentence but not when the two repetitions were in different languages. In the second experiment, RB was found for identical repetitions in Spanish and in English using word lists. However, the cross-language condition produced significant facilitation in recall, suggesting that although conceptual processing had taken place, semantic overlap was not sufficient to produce RB. The results confirm Kanwisher's (1987) token individuation hypothesis in the case of translation equivalents.

13.
Current models of visual perception suggest that, during scene categorization, low spatial frequencies (LSF) are rapidly processed and activate plausible interpretations of the visual input. This coarse analysis would then guide subsequent processing of high spatial frequencies (HSF). The present study aimed to further examine how information from LSF and HSF interacts during scene categorization. In a first experimental session, participants had to categorize LSF- and HSF-filtered scenes belonging to two different semantic categories (artificial vs. natural). In a second experimental session, we used hybrid scenes as stimuli, made by combining the LSF and HSF of two different scenes that were semantically similar or dissimilar. Half of the participants categorized the LSF scene in hybrids, and the other half categorized the HSF scene. Stimuli were presented for 30 or 100 ms. Session 1 results showed better performance for LSF than for HSF scene categorization. In Session 2, scene categorization was faster when participants attended to and categorized the LSF rather than the HSF scene in hybrids. The semantic interference of a semantically dissimilar HSF scene on LSF scene categorization was greater than the semantic interference of a semantically dissimilar LSF scene on HSF scene categorization, irrespective of exposure duration. These results suggest an LSF advantage for scene categorization, and highlight the prominent role of HSF information in disentangling alternative interpretations when there is uncertainty about the visual stimulus.

14.
The initial categorization of complex visual scenes is a very rapid process. Here we find no differences in performance for upright and inverted images, arguing for a neural mechanism that can function without involving high-level, orientation-dependent identification processes. Using an adaptation paradigm, we are able to demonstrate that artificial images composed to mimic the orientation distribution of either natural or man-made scenes systematically shift the judgement of human observers. This suggests a highly efficient feedforward system that makes use of "low-level" image features yet supports the rapid extraction of essential information for the categorization of complex visual scenes.

15.
Perceiving illumination inconsistencies in scenes
Ostrovsky Y, Cavanagh P, Sinha P. Perception, 2005, 34(11), 1301-1314
The human visual system is adept at detecting and encoding statistical regularities in its spatiotemporal environment. Here, we report an unexpected failure of this ability in the context of perceiving inconsistencies in illumination distributions across a scene. Prior work with arrays of objects all having uniform reflectance has shown that one inconsistently illuminated target can 'pop out' among a field of consistently illuminated objects (e.g., Enns and Rensink, 1990, Science, 247, 721-723; Sun and Perona, 1997, Perception, 26, 519-529). In these studies, the luminance pattern of the odd target could be interpreted as arising from either inconsistent illumination or inconsistent pigmentation of the target. Either cue might explain the rapid detection. In contrast, we find that once the geometrical regularity of the previous displays is removed, the visual system is remarkably insensitive to illumination inconsistencies, both in experimental stimuli and in altered images of real scenes. Whether the target is interpreted as oddly illuminated or oddly pigmented, it is very difficult to find if the only cue is deviation from the regularity of illumination or reflectance. Our results allow us to draw inferences about how the visual system encodes illumination distributions across scenes. Specifically, they suggest that the visual system does not verify the global consistency of locally derived estimates of illumination direction.

16.
Many experiments have shown that knowing a target's visual features improves search performance over knowing the target name. Other experiments have shown that scene context can facilitate object search in natural scenes. In this study, we investigated how scene context and target features affect search performance. We examined two possible sources of information from scene context—the scene's gist and the visual details of the scene—and how they potentially interact with target-feature information. Prior to commencing search, participants were shown a scene and a target cue depicting either a picture or the category name (or a no-information control). Using eye movement measures, we investigated how the target features and scene context influenced two components of search: early attentional guidance processes and later verification processes involved in the identification of the target. We found that both scene context and target features improved guidance, but that target features also improved speed of target recognition. Furthermore, we found that a scene's visual details played an important role in improving guidance, much more so than did the scene's gist alone.

17.
How efficient is visual search in real scenes? In searches for targets among arrays of randomly placed distractors, efficiency is often indexed by the slope of the reaction time (RT) × Set Size function. However, it may be impossible to define set size for real scenes. As an approximation, we hand-labeled 100 indoor scenes and used the number of labeled regions as a surrogate for set size. In Experiment 1, observers searched for named objects (a chair, bowl, etc.). With set size defined as the number of labeled regions, search was very efficient (~5 ms/item). When we controlled for a possible guessing strategy in Experiment 2, slopes increased somewhat (~15 ms/item), but they were much shallower than search for a random object among other distinctive objects outside of a scene setting (Exp. 3: ~40 ms/item). In Experiments 4–6, observers searched repeatedly through the same scene for different objects. Increased familiarity with scenes had modest effects on RTs, while repetition of target items had large effects (>500 ms). We propose that visual search in scenes is efficient because scene-specific forms of attentional guidance can eliminate most regions from the "functional set size" of items that could possibly be the target.

18.
Parafoveal semantic processing of emotional visual scenes   总被引:2,自引:0,他引:2  
The authors investigated whether emotional pictorial stimuli are especially likely to be processed in parafoveal vision. Pairs of emotional and neutral visual scenes were presented parafoveally (2.1 degrees or 2.5 degrees of visual angle from a central fixation point) for 150-3,000 ms, followed by an immediate recognition test (500-ms delay). Results indicated that (a) the first fixation was more likely to be placed onto the emotional than the neutral scene; (b) recognition sensitivity (A') was generally higher for the emotional than for the neutral scene when the scenes were paired, but there were no differences when presented individually; and (c) the superior sensitivity for emotional scenes survived changes in size, color, and spatial orientation, but not in meaning. The data suggest that semantic analysis of emotional scenes can begin in parafoveal vision in advance of foveal fixation.

19.
Harris IM, Dux PE. Cognition, 2005, 95(1), 73-93
The question of whether object recognition is orientation-invariant or orientation-dependent was investigated using a repetition blindness (RB) paradigm. In RB, the second occurrence of a repeated stimulus is less likely to be reported, compared to the occurrence of a different stimulus, if it occurs within a short time of the first presentation. This failure is usually interpreted as a difficulty in assigning two separate episodic tokens to the same visual type. Thus, RB can provide useful information about which representations are treated as the same by the visual system. Two experiments tested whether RB occurs for repeated objects that were either in identical orientations or differed by 30, 60, 90, or 180 degrees. Significant RB was found for all orientation differences, consistent with the existence of orientation-invariant object representations. However, under some circumstances, RB was reduced or even eliminated when the repeated object was rotated by 180 degrees, suggesting easier individuation of the repeated objects in this case. A third experiment confirmed that the upside-down orientation is processed more easily than other rotated orientations. The results indicate that, although object identity can be determined independently of orientation, orientation plays an important role in establishing distinct episodic representations of a repeated object, thus enabling one to report them as separate events.

20.
The attentional blink (AB) and repetition blindness (RB) phenomena refer to subjects' impaired ability to detect the second of two different (AB) or identical (RB) target stimuli in a rapid serial visual presentation stream if they appear within 500 msec of one another. Despite the fact that the AB reveals a failure of conscious visual perception, it is at least partly due to limitations at central stages of information processing. Do all attentional limits to conscious perception have their locus at this central bottleneck? To address this question, here we investigated whether RB is affected by online response selection, a cognitive operation that requires central processing. The results indicate that, unlike the AB, RB does not result from central resource limitations. Evidently, temporal attentional limits to conscious perception can occur at multiple stages of information processing.


Copyright © Beijing Qinyun Technology Development Co., Ltd.  京ICP备09084417号