Similar Documents
20 similar documents found (search time: 31 ms)
1.
The appearance and disappearance of an object in the visual field is accompanied by changes to multiple visual features at the object's location. When features at a location change asynchronously, the cue of common onset and offset becomes unreliable, with observers tending to report the most recent pairing of features. Here, we use these last feature reports to study the conditions that lead to a new object representation rather than an update to an existing representation. Experiments 1 and 2 establish that last feature reports predominate in asynchronous displays when feature durations are brief. Experiments 3 and 4 demonstrate that these reports are also critically influenced by whether features can be grouped using nontemporal cues such as common shape or location. The results are interpreted within the object-updating framework (Enns, Lleras, & Moore, 2010), which proposes that human vision is biased to represent a rapid image sequence as one or more objects changing over time.

2.
Four eyetracking experiments examined whether semantic and visual-shape representations are routinely retrieved from printed word displays and used during language-mediated visual search. Participants listened to sentences containing target words that were similar semantically or in shape to concepts invoked by concurrently displayed printed words. In Experiment 1, the displays contained semantic and shape competitors of the targets along with two unrelated words. As the targets were heard, eye gaze shifted significantly toward semantic competitors but not toward shape competitors. In Experiments 2–4, semantic competitors were replaced with unrelated words, semantically richer sentences were presented to encourage visual imagery, or participants rated the shape similarity of the stimuli before doing the eyetracking task. In all cases, there were no immediate shifts in eye gaze to shape competitors, even though participants looked to these competitors when they were presented as pictures in response to the Experiment 1 spoken materials (Huettig & McQueen, 2007). There was a late shape-competitor bias (more than 2,500 ms after target onset) in all experiments. These data show that shape information is not used in online search of printed word displays (whereas it is used with picture displays). The nature of the visual environment appears to induce implicit biases toward particular modes of processing during language-mediated visual search.

3.
The visual system represents object shapes in terms of intermediate-level parts. The minima rule proposes that the visual system uses negative minima of curvature to define boundaries between parts. We used visual search to test whether part structures consistent with the minima rule are computed preattentively, or at least rapidly and early in visual processing. The results of Experiments 1 and 2 showed that whereas the search for a non-minima-segmented shape is fast and efficient among minima-segmented shapes, the reverse search is slow and inefficient. This asymmetry is expected if parsing at negative minima occurs obligatorily. The results of Experiments 3 and 4 showed that although both minima- and non-minima-segmented shapes pop out among unsegmented shapes, the search for minima-segmented shapes is significantly slower. Together, these results demonstrate that the visual system segments shapes into parts, using negative minima of curvature, and that it does so rapidly in early stages of visual processing.

4.
Skilled readers of Chinese participated in sorting and visual search experiments. The sorting results showed that under conditions of conflicting information about structure and component, subjective judgments of the visual similarity among characters were based on the characters' overall configurations (i.e., structures) rather than on the common components the characters possessed. In visual search, both structure and component contributed to the visual similarity reflected by the search efficiency. The steepest search slopes (and thus the most similar target-distractor pairs) were found when the target and the distractor characters had the same structure and shared one common component, compared with when they had different structures and/or shared no common components. These results demonstrate that character structure plays a greater role in the visual similarity of Chinese characters than has previously been recognized.

5.
Found, A., & Müller, H. J. (2001). Perception, 30(1), 21-48.
Six visual search experiments were carried out to investigate the processing of size information in early vision. The apparent size of display items was manipulated independently of their retinal size by placing items on a textured surface which altered the perceived distance in depth of the items. Overall, these experiments demonstrate that a target item differing from non-target items in terms of apparent size can be detected efficiently. However, the pattern of results indicates that, rather than deriving apparent-size information, target detection is guided by discontinuities in the 'retinal-size gradient' of items, in particular between items at the same 'depth'. Although the arrangement of items on the texture surface strongly influenced search, this was largely due to the retinal size of items and the retinal separation between items. The implications of these experiments for the nature of the pre-attentive representation of size are discussed.

6.
In four experiments we assessed whether visual working memory (VWM) maintains a record of previously processed visual information, allowing old information to be inhibited and new information to be prioritized. Specifically, we evaluated whether VWM contributes to the inhibition (i.e., visual marking) of previewed distractors in a preview search. We evaluated this proposal by testing three predictions. First, Experiments 1 and 2 demonstrate that preview inhibition is more effective when the number of previewed distractors is below VWM capacity than above, an effect that can be observed only at small preview set sizes (Experiment 2A) and when observers are allowed to move their eyes freely (Experiment 2B). Second, Experiment 3 shows that, when quantified as the number of inhibited distractors, the magnitude of the preview effect is stable across different search difficulties. Third, Experiment 4 demonstrates that individual differences in preview inhibition are correlated with individual differences in VWM capacity. These findings provide converging evidence that VWM supports the inhibition of previewed distractors. More generally, they demonstrate how VWM contributes to the efficiency of human visual information processing: VWM prioritizes new information by inhibiting old information from being reselected for attention.

7.
The extraction of three-dimensional shape from shading is one of the most perceptually compelling, yet poorly understood, aspects of visual perception. In this paper, we report several new experiments on the manner in which the perception of shape from shading interacts with other visual processes such as perceptual grouping, preattentive search ("pop-out"), and motion perception. Our specific findings are as follows: (1) The extraction of shape from shading information incorporates at least two "assumptions" or constraints: first, that there is a single light source illuminating the whole scene, and second, that the light is shining from "above" in relation to retinal coordinates. (2) Tokens defined by shading can serve as a basis for perceptual grouping and segregation. (3) Reaction time for detecting a single convex shape does not increase with the number of items in the display. This "pop-out" effect must be based on shading rather than on differences in luminance polarity, since neither left-right differences nor step changes in luminance resulted in pop-out. (4) When the subjects were experienced, there were no search asymmetries for convex as opposed to concave tokens, but when the subjects were naive, cavities were much easier to detect than convex shapes. (5) The extraction of shape from shading can also provide an input to motion perception. And finally, (6) the assumption of "overhead illumination" that leads to perceptual grouping depends primarily on retinal rather than on "phenomenal" or gravitational coordinates. Taken collectively, these findings imply that the extraction of shape from shading is an "early" visual process that occurs prior to perceptual grouping, motion perception, and vestibular (as well as "cognitive") correction for head tilt. Hence, there may be neural elements very early in visual processing that are specialized for the extraction of shape from shading.

9.
How do observers search through familiar scenes? A novel panoramic search method is used to study the interaction of memory and vision in natural search behavior. In panoramic search, observers see part of an unchanging scene larger than their current field of view. A target object can be visible, present in the display but hidden from view, or absent. Visual search efficiency does not change after hundreds of trials through an unchanging scene (Experiment 1). Memory search, in contrast, begins inefficiently but becomes efficient with practice. Given a choice between vision and memory, observers choose vision (Experiments 2 and 3). However, if forced to use their memory on some trials, they learn to use memory on all trials, even when reliable visual information remains available (Experiment 4). The results suggest that observers make a pragmatic choice between vision and memory, with a strong bias toward visual search even for memorized stimuli.

10.
Attentional priming and visual search in pigeons
Advance information about a target's identity improved visual search efficiency in pigeons. Experiments 1 and 2 compared information supplied by visual cues with information supplied by trial sequences. Reaction times (RTs) were lower when visual cues signaled a single target rather than two. RTs were lower (Experiment 1) or accuracy improved (Experiment 2) when a sequence of trials presented a single target rather than a mixture of two. Experiments 3, 4, and 5 examined the selectivity of visual priming by introducing probe trials that reversed the usual cue-target relationship. RTs were higher following such miscues than following the usual one- or two-target cuing relationships (Experiment 3); the miscuing effect persisted over variations in the target's concealment (Experiments 4 and 5) but did not occur when the target was presented alone (Experiment 4). The findings indicate that priming modifies an attentional mechanism and suggest that this effect accounts for search images.

11.
Basketball jump shooting is controlled online by vision
An experiment was conducted to examine whether basketball jump shooting relies on online visual (i.e., dorsal stream-mediated) control rather than motor preprogramming. Seventeen expert basketball players (eight males and nine females) performed jump shots under normal vision and in three conditions in which movement initiation was delayed by zero, one, or two seconds relative to viewing the basket. Shots were evaluated in terms of both outcome and execution measures. Even though most shots still landed near the basket in the absence of vision, end-point accuracy was significantly better under normal visual conditions than under the delay conditions, where players tended to undershoot the basket. In addition, an overall decrease of inter-joint coordination strength and stability was found as a function of visual condition. Although these results do not exclude a role of motor preprogramming, they demonstrate that visual sensory information plays an important role in the continuous guidance of the basketball jump shot.

12.
Earlier research on visual occlusion showed some flexibility in the formation of visual completions, as long as the structural aspects (e.g., symmetry) of the visible part of the partly occluded shape were preserved in the completion (de Wit & van Lier). In this study, we examined whether the completion process is preserved under changes in the overall size of the occluded shape. In Experiment 1, using the primed-matching paradigm, we found evidence for relative size invariance in the completion process. To investigate whether changes in the structural aspects of shape are generally more salient than those of size, we employed the same stimuli in visual search and change detection paradigms. Experiment 2 demonstrated effects of completion in both paradigms. Experiment 3 showed that the metrical aspects of the shapes used in Experiment 1 are nevertheless detected faster than the structural aspects under search conditions. Because the variation in structural aspects is not more salient than that in metrical aspects, we conclude that for these shapes, visual completion is indeed size-invariant. The relations between performances in the three paradigms are discussed.

13.
Some things look more complex than others. For example, a crenulate and richly organized leaf may seem more complex than a plain stone. What is the nature of this experience—and why do we have it in the first place? Here, we explore how object complexity serves as an efficiently extracted visual signal that the object merits further exploration. We algorithmically generated a library of geometric shapes and determined their complexity by computing the cumulative surprisal of their internal skeletons—essentially quantifying the “amount of information” within each shape—and then used this approach to ask new questions about the perception of complexity. Experiments 1–3 asked what kind of mental process extracts visual complexity: a slow, deliberate, reflective process (as when we decide that an object is expensive or popular) or a fast, effortless, and automatic process (as when we see that an object is big or blue)? We placed simple and complex objects in visual search arrays and discovered that complex objects were easier to find among simple distractors than simple objects are among complex distractors—a classic search asymmetry indicating that complexity is prioritized in visual processing. Next, we explored the function of complexity: Why do we represent object complexity in the first place? Experiments 4–5 asked subjects to study serially presented objects in a self-paced manner (for a later memory test); subjects dwelled longer on complex objects than simple objects—even when object shape was completely task-irrelevant—suggesting a connection between visual complexity and exploratory engagement. Finally, Experiment 6 connected these implicit measures of complexity to explicit judgments. Collectively, these findings suggest that visual complexity is extracted efficiently and automatically, and even arouses a kind of “perceptual curiosity” about objects that encourages subsequent attentional engagement.
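The abstract above describes complexity as the cumulative surprisal of a shape's internal skeleton. As a minimal illustrative sketch (the Gaussian prior over turning angles, the `sigma` parameter, and the function name are assumptions here, not the paper's actual algorithm), cumulative surprisal can be computed as the summed negative log probability of successive turning angles along a skeleton:

```python
import math

def cumulative_surprisal(turning_angles, sigma=0.5):
    """Toy complexity score: sum of -log p(angle) over a skeleton's
    turning angles (radians), under an assumed zero-mean Gaussian prior.
    Straighter skeletons (angles near zero) are less surprising."""
    total = 0.0
    for a in turning_angles:
        # Gaussian density as an assumed prior over turning angles
        p = math.exp(-a * a / (2 * sigma ** 2)) / (sigma * math.sqrt(2 * math.pi))
        total += -math.log(p)  # surprisal of this skeletal turn
    return total

simple = cumulative_surprisal([0.05, -0.02, 0.03])  # nearly straight skeleton
complex_ = cumulative_surprisal([0.9, -1.1, 1.3])   # richly articulated skeleton
assert complex_ > simple
```

Under this toy model, a nearly straight skeleton accumulates little surprisal while a richly articulated one accumulates much more, matching the intuition that a crenulate leaf carries more "information" than a plain stone.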

14.
Recent research suggests that there is an advantage for processing configural information in scenes and objects. The purpose of this study was to investigate the extent to which attention may account for this configural advantage. In Experiment 1, we found that cueing the location of change in single object displays improved detection performance for both configural and shape changes, yet cueing attention away from the location of change was detrimental only for shape change detection. A configural advantage was present for each cueing condition. Experiments 2A and 2B examined whether the configural advantage persisted in conditions where attention was distributed more widely, using a visual search paradigm. Although searches for configural changes were faster than those for shape changes across all set sizes, both types of information appeared to be processed with similar efficiency. Overall, these results suggest that the configural advantage is independent of the location or distribution of visual attention.

15.
Recognizing silhouettes and shaded images across depth rotation
Hayward, W. G., Tarr, M. J., & Corderoy, A. K. (1999). Perception, 28(10), 1197-1215.

16.
Prior knowledge on the illumination position
Visual perception is fundamentally ambiguous because an infinite number of three-dimensional scenes are consistent with our retinal images. To circumvent these ambiguities, the visual system uses prior knowledge, such as the assumption that light comes from above our head. The use of such assumptions is rational when they are related to statistical regularities of our environment. In confirmation of previous visual search experiments, we demonstrate here that the assumed illumination position is in fact biased to the above-left rather than directly above. This leftward bias reaches 26 degrees on average in a more direct shape discrimination task. Both right-handed and left-handed observers show a similar leftward bias. We discuss the possible origins of this singular bias in the assumed illumination position.

17.
Postattentive vision
Much research has examined preattentive vision: visual representation prior to the arrival of attention. Most vision research concerns attended visual stimuli; very little research has considered postattentive vision. What is the visual representation of a previously attended object once attention is deployed elsewhere? The authors argue that perceptual effects of attention vanish once attention is redeployed. Experiments 1-6 were visual search studies. In standard search, participants looked for a target item among distractor items. On each trial, a new search display was presented. These tasks were compared to repeated search tasks in which the search display was not changed. On successive trials, participants searched the same display for new targets. Results showed that if search was inefficient when participants searched a display the first time, it was inefficient when the same, unchanging display was searched the second, fifth, or 350th time. Experiments 7 and 8 made a similar point with a curve tracing paradigm. The results have implications for an understanding of scene perception, change detection, and the relationship of vision to memory.

18.
Our lab recently found evidence that efficient visual search (with a fixed target) is characterized by logarithmic Reaction Time (RT) × Set Size functions whose steepness is modulated by the similarity between target and distractors. To determine whether this pattern of results was based on low-level visual factors uncontrolled by previous experiments, we minimized the possibility of crowding effects in the display, compensated for the cortical magnification factor by magnifying search items based on their eccentricity, and compared search performance on such displays to performance on displays without magnification compensation. In both cases, the RT × Set Size functions were found to be logarithmic, and the modulation of the log slopes by target–distractor similarity was replicated. Consistent with previous results in the literature, cortical magnification compensation eliminated most target eccentricity effects. We conclude that the log functions and their modulation by target–distractor similarity relations reflect a parallel exhaustive processing architecture for early vision.
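The logarithmic RT × Set Size functions described above can be illustrated with a small least-squares fit of RT = a + b·log(n). The data values below are fabricated for illustration and are not the paper's data:

```python
import numpy as np

# Hypothetical mean RTs (ms), constructed to follow RT = a + b*log(n)
set_sizes = np.array([1, 2, 4, 8, 16, 32])
rts = np.array([420, 455, 490, 525, 560, 595], dtype=float)  # +35 ms per doubling

# Fit RT = a + b*log(n) by ordinary least squares
X = np.column_stack([np.ones(len(set_sizes)), np.log(set_sizes)])
(a, b), *_ = np.linalg.lstsq(X, rts, rcond=None)

# b is the "log slope"; in the study above, greater target-distractor
# similarity corresponds to a steeper log slope
print(round(a, 1), round(b, 1))
```

Because the fabricated RTs rise by a constant 35 ms per doubling of set size, the fitted intercept recovers 420 ms and the log slope is 35/ln(2) ≈ 50.5 ms per log unit.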

19.
It is well established that visual search becomes harder when the similarity between target and distractors is increased and the similarity between distractors is decreased. However, in models of visual search, similarity is typically treated as a static, time-invariant property of the relation between objects. Data from other perceptual tasks (e.g., categorization) demonstrate that similarity is dynamic and changes as perceptual information is accumulated (Lamberts, 1998). In three visual search experiments, the time course of target-distractor similarity effects and distractor-distractor similarity effects was examined. A version of the extended generalized context model (EGCM; Lamberts, 1998) provided a good account of the time course of the observed similarity effects, supporting the notion that similarity in search is dynamic. Modeling also indicated that increasing distractor homogeneity influences both perceptual and decision processes by (respectively) increasing the rate at which stimulus features are processed and enabling strategic weighting of stimulus information.
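The idea that similarity accrues over time can be sketched in a GCM-style model: similarity is exp(-c·d), and each feature contributes to the distance d only once it has been sampled, with probability 1 - exp(-q·t) by time t. The expected-value simplification and all parameter values below are assumptions for illustration, not the EGCM's exact formulation:

```python
import math

def dynamic_similarity(feature_distances, q=2.0, c=1.0, t=1.0):
    """GCM-style similarity exp(-c * d(t)), where each feature's distance
    contributes only in proportion to the probability 1 - exp(-q*t) that
    the feature has been sampled by time t (expected-value simplification)."""
    p_included = 1 - math.exp(-q * t)          # chance a feature is sampled by t
    d_t = sum(p_included * d for d in feature_distances)
    return math.exp(-c * d_t)                  # exponential similarity gradient

# Early in processing few features are sampled, so items look alike;
# similarity falls toward its asymptote as perceptual information accumulates
early = dynamic_similarity([0.8, 1.2, 0.5], t=0.05)
late = dynamic_similarity([0.8, 1.2, 0.5], t=2.0)
assert early > late
```

This captures the abstract's central point: target-distractor similarity is not a fixed quantity but one that changes over the time course of a search trial.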

20.
Investigators have proposed that qualitative shapes are the primitive information of spatial vision: They preserve an approximately one-to-one mapping between surfaces, images, and perception. Given their importance, we examined how the visual system recovers these primitives from sparse disparity fields that do not provide sufficient information for their recovery. We hypothesized that the visual system interpolates sparse disparities with planes, resulting in a patchwork approximation of the implicitly defined shapes. We presented observers with stereo displays simulating planar or smooth curved surfaces having different curvatures. The observers' task was to detect whether dots deviated from these surfaces or to discriminate planar from curved or planar from scrambled surfaces. Consistent with our hypothesis, increasing curvature had detrimental effects on observers' performance (Experiments 1-3). Importantly, this patchwork approximation leads to the recovery of the proposed shape primitives, since observers were more accurate at discriminating planar-from-curved than planar-from-scrambled surfaces with matched disparity range (Experiment 4).


Copyright © Beijing Qinyun Technology Development Co., Ltd.  京ICP备09084417号