Similar Documents
Found 20 similar documents (search time: 31 ms)
1.
Time course of perceptual grouping by color
Does perceptual grouping operate early or late in visual processing? One position is that the elements in perceptual layouts are grouped early in vision, by properties of the retinal image, before perceptual constancies have been determined. A second position is that perceptual grouping operates on a postconstancy representation, one that is available only after stereoscopic depth perception, lightness constancy, and amodal completion have occurred. The present experiments indicate that grouping can operate on both a preconstancy representation and a postconstancy representation. Perceptual grouping was based on retinal color similarity at short exposure durations and based on surface color similarity at long durations. These results permit an integration of the preconstancy and postconstancy positions with regard to grouping by color.

2.
Recent research on perceptual grouping is described with particular emphasis on identifying the level(s) at which grouping factors operate. Contrary to the classical view of grouping as an early, two-dimensional, image-based process, recent experimental results show that it is strongly influenced by phenomena related to perceptual constancy, such as binocular depth perception, lightness constancy, amodal completion, and illusory contours. These findings imply that at least some grouping processes operate at the level of phenomenal perception rather than at the level of the retinal image. Preliminary evidence is reported showing that grouping can affect perceptual constancy, suggesting that grouping processes must also operate at an early, preconstancy level. If so, grouping may be a ubiquitous, ongoing aspect of visual organization that occurs for each level of representation rather than as a single stage that can be definitively localized relative to other perceptual processes.

3.
Mitsudo H. Perception, 2003, 32(1): 53-66.
Phenomenal transparency reflects a process which makes it possible to recover the structure and lightness of overlapping objects from a fragmented image. This process was investigated by the visual-search paradigm. In three experiments, observers searched for a target that consisted of gray patches among a variable number of distractors and the search efficiency was assessed. Experiments 1 and 2 showed that the search efficiency was greatly improved when the target was distinctive with regard to structure, based on transparency. Experiment 3 showed that the search efficiency was impaired when a target was not distinctive with regard to lightness (ie perceived reflectance), based on transparency. These results suggest that the shape and reflectance of overlapping objects when accompanied by transparency can be calculated in parallel across the visual field, and can be used as a guide for visual attention.

4.
孙琪, 任衍具. 《心理科学》 (Journal of Psychological Science), 2014, 37(2): 265-271.
Using object search in images of real-world scenes as the experimental task, the authors manipulated scene context and target template and used eye tracking to divide the search process into an initiation phase, a scanning phase, and a verification phase, in order to examine how scene context and target template influence visual search. The results showed that scene context and target template operate in different ways and at different points in time: the two factors interacted to affect search accuracy and response time; only scene context affected the duration of the initiation phase, after which the two factors interacted to affect the durations and the main eye movement measures of the scanning and verification phases. On this basis, the authors propose a model of the interaction between scene context and target template in visual search.

5.
任衍具, 孙琪. 《心理学报》 (Acta Psychologica Sinica), 2014, 46(11): 1613-1627.
Using a dual-task paradigm that combined visuospatial working memory tasks with search in real-world scenes, and using eye tracking to divide the search process into an initiation phase, a scanning phase, and a verification phase, the authors investigated how visuospatial working memory load affects search performance in real-world scenes, and examined the moderating roles of whether the search target changed across trials, the specificity of the target template, and the visual clutter of the search scene. The results showed that visuospatial working memory load reduced search performance in real-world scenes: during search, both visual and spatial load lengthened the scanning phase and increased the number of fixations, and spatial load additionally lengthened the verification phase; the effect of load on the search process depended on the specificity of the target template. Spatial load, but not object load, reduced search efficiency in real-world scenes, and this effect depended on the visual clutter of the scene. Thus, visual and spatial working memory loads affect real-world scene search differently: spatial load influences the search process for longer than object load, both are moderated by the specificity of the target template, and only spatial load reduces search efficiency, an effect that is in turn moderated by the visual clutter of the scene.

6.
Recent research on perceptual grouping is described with particular emphasis on the level at which grouping factors operate. Contrary to the standard view of grouping as an early, two-dimensional, image-based process, experimental results show that it is strongly influenced by binocular depth perception, lightness constancy, amodal completion, and illusory figures. Such findings imply that at least some grouping processes operate at the level of conscious perception rather than the retinal image. Whether classical grouping processes also operate at an early, preconstancy level is an important, but currently unanswered question.

7.
In the present study, we investigated the influence of object-scene relationships on eye movement control during scene viewing. We specifically tested whether an object that is inconsistent with its scene context is able to capture gaze from the visual periphery. In four experiments, we presented rendered images of naturalistic scenes and compared baseline consistent objects with semantically, syntactically, or both semantically and syntactically inconsistent objects within those scenes. To disentangle the effects of extrafoveal and foveal object-scene processing on eye movement control, we used the flash-preview moving-window paradigm: A short scene preview was followed by an object search or free viewing of the scene, during which visual input was available only via a small gaze-contingent window. This method maximized extrafoveal processing during the preview but limited scene analysis to near-foveal regions during later stages of scene viewing. Across all experiments, there was no indication of an attraction of gaze toward object-scene inconsistencies. Rather than capturing gaze, the semantic inconsistency of an object weakened contextual guidance, resulting in impeded search performance and inefficient eye movement control. We conclude that inconsistent objects do not capture gaze from an initial glimpse of a scene.

8.
Many models of color constancy assume that the visual system estimates the scene illuminant and uses this estimate to determine an object's color appearance. A version of this illumination-estimation hypothesis, in which the illuminant estimate is associated with the explicitly perceived illuminant, was tested. Observers made appearance matches between two experimental chambers. Observers adjusted the illumination in one chamber to match that in the other and then adjusted a test patch in one chamber to match the surface lightness of a patch in the other. The illumination-estimation hypothesis, as formulated here, predicted that after both matches the luminances of the light reflected from the test patches would be identical. The data contradict this prediction. A second experiment showed that manipulating the immediate surround of a test patch can affect perceived lightness without affecting perceived illumination. This finding also falsifies the illumination-estimation hypothesis.

9.
Eye movements are now widely used to investigate cognitive processes during reading, scene perception, and visual search. In this article, research on the following topics is reviewed with respect to reading: (a) the perceptual span (or span of effective vision), (b) preview benefit, (c) eye movement control, and (d) models of eye movements. Related issues with respect to eye movements during scene perception and visual search are also reviewed. It is argued that research on eye movements during reading has been somewhat advanced over research on eye movements in scene perception and visual search and that some of the paradigms developed to study reading should be more widely adopted in the study of scene perception and visual search. Research dealing with “real-world” tasks and research utilizing the visual-world paradigm are also briefly discussed.

10.
Three experiments were conducted to investigate the existence of incidentally acquired, long-term, detailed visual memory for objects embedded in previously viewed scenes. Participants performed intentional memorization and incidental visual search learning tasks while viewing photographs of real-world scenes. A visual memory test for previously viewed objects from these scenes then followed. Participants were not aware that they would be tested on the scenes following incidental learning in the visual search task. In two types of memory tests for visually specific object information (token discrimination and mirror-image discrimination), performance following both the memorization and visual search conditions was reliably above chance. These results indicate that recent demonstrations of good visual memory during scene viewing are not due to intentional scene memorization. Instead, long-term visual representations are incidentally generated as a natural product of scene perception.

11.
Action game playing has been associated with several improvements in visual attention tasks. However, it is not clear how such changes might influence the way we overtly select information from our visual world (i.e. eye movements). We examined whether action-video-game training changed eye movement behaviour in a series of visual search tasks including conjunctive search (relatively abstracted from natural behaviour), game-related search, and more naturalistic scene search. Forty nongamers were trained in either an action first-person shooter game or a card game (control) for 10 hours. As a further control, we recorded eye movements of 20 experienced action gamers on the same tasks. The results did not show any change in duration of fixations or saccade amplitude either from before to after the training or between all nongamers (pretraining) and experienced action gamers. However, we observed a change in search strategy, reflected by a reduction in the vertical distribution of fixations for the game-related search task in the action-game-trained group. This might suggest learning the likely distribution of targets. In other words, game training taught participants only to search game images for targets important to the game, with no indication of transfer to the more natural scene search. Taken together, these results suggest no modification in overt allocation of attention. Either the skills that can be trained with action gaming are not powerful enough to influence information selection through eye movements, or action-game-learned skills are not used when deciding where to move the eyes.

12.
A modified visual search task demonstrates that humans are very good at resuming a search after it has been momentarily interrupted. This is shown by exceptionally rapid response time to a display that reappears after a brief interruption, even when an entirely different visual display is seen during the interruption and two different visual searches are performed simultaneously. This rapid resumption depends on the stability of the visual scene and is not due to display or response anticipations. These results are consistent with the existence of an iterative hypothesis-testing mechanism that compares information stored in short-term memory (the perceptual hypothesis) with information about the display (the sensory pattern). In this view, rapid resumption occurs because a hypothesis based on a previous glance of the scene can be tested very rapidly in a subsequent glance, given that the initial hypothesis-generation step has already been performed.

13.
Contrary to the implication of the term "lightness constancy", asymmetric lightness matching has never been found to be perfect unless the scene is highly articulated (i.e., contains a number of different reflectances). Also, lightness constancy has been found to vary for different observers, and an effect of instruction (lightness vs. brightness) has been reported. The elusiveness of lightness constancy presents a great challenge to visual science; we revisit these issues in the following experiment, which involved 44 observers in total. The stimuli consisted of a large sheet of black paper with a rectangular spotlight projected onto the lower half and 40 squares of various shades of grey printed on the upper half. The luminance ratio at the edge of the spotlight was 25, while that of the squares varied from 2 to 16. Three different instructions were given to observers: They were asked to find a square in the upper half that (i) looked as if it was made of the same paper as that on which the spotlight fell (lightness match), (ii) had the same luminance contrast as the spotlight edge (contrast match), or (iii) had the same brightness as the spotlight (brightness match). Observers made 10 matches of each of the three types. Great interindividual variability was found for all three types of matches. In particular, the individual Brunswik ratios were found to vary over a broad range (from .47 to .85). That is, lightness matches were found to be far from veridical. Contrast matches were also found to be inaccurate, being on average, underestimated by a factor of 3.4. Articulation was found to essentially affect not only lightness, but contrast and brightness matches as well. No difference was found between the lightness and luminance contrast matches. While the brightness matches significantly differed from the other matches, the difference was small. Furthermore, the brightness matches were found to be subject to the same interindividual variability and the same effect of articulation. This leads to the conclusion that inexperienced observers are unable to estimate both the brightness and the luminance contrast of the light reflected from real objects lit by real lights. None of our observers perceived illumination edges purely as illumination edges: A partial Gelb effect ("partial illumination discounting") always took place. The lightness inconstancy in our experiment resulted from this partial illumination discounting. We propose an account of our results based on the two-dimensionality of achromatic colour. We argue that large interindividual variations and the effect of articulation are caused by the large ambiguity of luminance ratios in the stimulus displays used in laboratory conditions.

14.
When confronted with a previously encountered scene, what information is used to guide search to a known target? We contrasted the role of a scene's basic-level category membership with its specific arrangement of visual properties. Observers were repeatedly shown photographs of scenes that contained consistently but arbitrarily located targets, allowing target positions to be associated with scene content. Learned scenes were then unexpectedly mirror reversed, spatially translating visual features as well as the target across the display while preserving the scene's identity and concept. Mirror reversals produced a cost as the eyes initially moved toward the position in the display in which the target had previously appeared. The cost was not complete, however; when initial search failed, the eyes were quickly directed to the target's new position. These results suggest that in real-world scenes, shifts of attention are initially based on scene identity, and subsequent shifts are guided by more detailed information regarding scene and object layout.

16.
When novel scenes are encoded, the representations of scene layout are generally viewpoint specific. Past studies of scene recognition have typically required subjects to explicitly study and encode novel scenes, but in everyday visual experience, it is possible that much scene learning occurs incidentally. Here, we examine whether implicitly encoded scene layouts are also viewpoint dependent. We used the contextual cuing paradigm, in which search for a target is facilitated by implicitly learned associations between target locations and novel spatial contexts (Chun & Jiang, 1998). This task was extended to naturalistic search arrays with apparent depth. To test viewpoint dependence, the viewpoint of the scenes was varied from training to testing. Contextual cuing and, hence, scene context learning decreased as the angular rotation from training viewpoint increased. This finding suggests that implicitly acquired representations of scene layout are viewpoint dependent.

17.
Features that we have recently attended to strongly influence how we allocate visual attention across a subsequently viewed visual scene. Here, we investigate the characteristics of any such repetition effects during visual search for Gabor patch targets drifting in the odd direction relative to a set of distractors. The results indicate that repetition of motion direction has a strong effect upon subsequent allocation of attention. This was the case for judgments of a target’s presence or absence, of a target’s location, and of the color of a target drifting in the odd direction. Furthermore, distractor repetition on its own can facilitate search performance on subsequent trials, indicating that the benefits of repetition of motion direction are not confined to repetition of target features. We also show that motion direction need not be the target-defining dimension throughout a trial block for motion priming to occur, but that priming can build up with only one presentation of a given target direction, even within blocks of trials where the target may, unpredictably, be defined by a different feature (color, in this case), showing that dimensional-weighting accounts cannot, on their own, account for motion direction priming patterns. Finally, we show by randomizing the set size between trials that priming of motion direction can decrease search rates in visual search.

18.
How do humans use target-predictive contextual information to facilitate visual search? How are consistently paired scenic objects and positions learned and used to more efficiently guide search in familiar scenes? For example, humans can learn that a certain combination of objects may define a context for a kitchen and trigger a more efficient search for a typical object, such as a sink, in that context. The ARTSCENE Search model is developed to illustrate the neural mechanisms of such memory-based context learning and guidance and to explain challenging behavioral data on positive-negative, spatial-object, and local-distant cueing effects during visual search, as well as related neuroanatomical, neurophysiological, and neuroimaging data. The model proposes how global scene layout at a first glance rapidly forms a hypothesis about the target location. This hypothesis is then incrementally refined as a scene is scanned with saccadic eye movements. The model simulates the interactive dynamics of object and spatial contextual cueing and attention in the cortical What and Where streams starting from early visual areas through medial temporal lobe to prefrontal cortex. After learning, model dorsolateral prefrontal cortex (area 46) primes possible target locations in posterior parietal cortex based on goal-modulated percepts of spatial scene gist that are represented in parahippocampal cortex. Model ventral prefrontal cortex (area 47/12) primes possible target identities in inferior temporal cortex based on the history of viewed objects represented in perirhinal cortex.

19.
The specific gray shades in a visual scene can be derived from relative luminance values only when an anchoring rule is followed. The double-anchoring theory I propose in this article, as a development of the anchoring theory of Gilchrist et al. (1999), assumes that any given region (a) belongs to one or more frameworks, created by Gestalt grouping principles, and (b) is independently anchored, within each framework, to both the highest luminance and the surround luminance. The region's final lightness is a weighted average of the values computed, relative to both anchors, in all frameworks. The new model accounts not only for all lightness illusions that are qualitatively explained by the anchoring theory but also for a number of additional effects, and it does so quantitatively, with the support of mathematical simulations.
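The weighted-average rule described in this abstract can be illustrated with a minimal sketch. The 90% reflectance assigned to the anchors and the equal anchor and framework weights below are illustrative assumptions, not values taken from the paper:

```python
# Hypothetical sketch of the double-anchoring computation summarized above.
# Assumptions (not from the paper): both anchors map to 90% reflectance
# ("white"), and anchor/framework weights default to equal values.

WHITE = 90.0  # reflectance (%) assigned to an anchor


def anchored_lightness(region_lum, highest_lum, surround_lum,
                       w_highest=0.5, w_surround=0.5):
    """Lightness of a region within one framework: a weighted average of
    the value anchored to the highest luminance and the value anchored
    to the surround luminance."""
    v_highest = WHITE * region_lum / highest_lum    # highest-luminance anchor
    v_surround = WHITE * region_lum / surround_lum  # surround-luminance anchor
    return w_highest * v_highest + w_surround * v_surround


def final_lightness(framework_values, framework_weights):
    """Final lightness: a weighted average of the per-framework values
    over all frameworks that the region belongs to."""
    total = sum(framework_weights)
    return sum(v * w for v, w in zip(framework_values, framework_weights)) / total
```

For example, a region at half the highest luminance whose surround equals its own luminance would get (45 + 90) / 2 = 67.5 under these assumed parameters, between the two anchored values.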

20.
Werner A. Perception, 2006, 35(9): 1171-1184.
In real scenes, surfaces in different depth planes often differ in the luminance and chromatic content of their illumination. Scene segmentation is therefore an important issue when considering the compensation of illumination changes in our visual perception (lightness and colour constancy). Chromatic adaptation is an important sensory component of colour constancy and has been shown to be linked to the two-dimensional spatial structure of a scene (Werner, 2003 Vision Research 43 1611 - 1623). Here, the question is posed whether this cooperation also extends to the organisation of a scene in depth. The influence of depth on colour constancy was tested by introducing stereo disparity, whereby the test patch and background were perceived in either the same or one of five different depth planes (1.9-57 min of arc). There were no additional cues to depth such as shadows or specular highlights. For consistent illumination changes, colour constancy was reduced when the test patch and background were separated in depth, indicating a reduction of contextual influences. An interaction was found between the influences of stereo depth and spatial frequency on colour constancy. In the case of an inconsistent illumination change, colour constancy was reduced if the test patch and background were in the same depth plane (2-D condition), but not if they were separated in depth (3-D condition). Furthermore, colour constancy was slightly better in the 3-D inconsistent condition than in the 2-D inconsistent condition. It is concluded that depth segmentation supports colour constancy in scenes with inconsistent illumination changes. Processes of depth segmentation are implemented at an early sensory stage of colour constancy, and they define visual regions within which the effects of illuminant changes are discounted for separately. The results support recent models that posit such implementation of scene segmentation in colour constancy.


Copyright © 北京勤云科技发展有限公司 (Beijing Qinyun Technology Development Co., Ltd.)  京ICP备09084417号