Similar Articles
20 similar articles found (search took 15 ms)
1.
Models of low-level saliency predict that when we first look at a photograph our first few eye movements should be made towards visually conspicuous objects. Two experiments investigated this prediction by recording eye fixations while viewers inspected pictures of room interiors that contained objects with known saliency characteristics. Highly salient objects did attract fixations earlier than less conspicuous objects, but only in a task requiring general encoding of the whole picture. When participants were required to detect the presence of a small target, then the visual saliency of nontarget objects did not influence fixations. These results support modifications of the model that take the cognitive override of saliency into account by allowing task demands to reduce the saliency weights of task-irrelevant objects.
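The proposed modification can be sketched as a simple reweighting of a bottom-up saliency map. This is an illustrative sketch, not the authors' implementation; the function name, the binary relevance mask, and the 0.2 down-weighting factor are all assumptions:

```python
import numpy as np

def task_modulated_saliency(saliency_map, relevance_mask, irrelevant_weight=0.2):
    """Down-weight saliency at task-irrelevant locations.

    saliency_map      -- 2-D array of bottom-up saliency values
    relevance_mask    -- 2-D boolean array, True where task-relevant
    irrelevant_weight -- multiplier for task-irrelevant locations (assumed value)
    """
    weights = np.where(relevance_mask, 1.0, irrelevant_weight)
    return saliency_map * weights

# Toy case: the most conspicuous location (0.9) is task-irrelevant.
sal = np.array([[0.9, 0.1],
                [0.2, 0.6]])
relevant = np.array([[False, True],
                     [True, True]])
modulated = task_modulated_saliency(sal, relevant)
# After reweighting, the task-relevant 0.6 location outranks the 0.9 peak.
```

Under this scheme a conspicuous but task-irrelevant object no longer wins the priority competition, which is the pattern the search task produced.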

The pictures sometimes contained incongruent objects that were taken from other rooms. These objects were used to test the hypothesis that previous reports of the early fixation of incongruent objects have not been consistent because the effect depends upon the visual conspicuity of the incongruent object. There was an effect of incongruency in both experiments, with earlier fixation of objects that violated the gist of the scene, but the effect was only apparent for inconspicuous objects, which argues against the hypothesis.


3.
We investigated the impact of viewing time and fixations on visual memory for briefly presented natural objects. Participants saw a display of eight natural objects arranged in a circle and used a partial report procedure to assign one object to the position it previously occupied during stimulus presentation. At the longest viewing time of 7,000 ms or 10 fixations, memory performance was significantly higher than at the shorter times. This increase was accompanied by a primacy effect, suggesting a contribution of another memory component—for example, visual long-term memory (VLTM). We found a very limited beneficial effect of fixations on objects; fixated objects were only remembered better at the shortest viewing times. Our results revealed an intriguing difference between the use of a blocked versus an interleaved experimental design. When trial length was predictable, in the blocked design, target fixation durations increased with longer viewing times. When trial length was unpredictable, fixation durations stayed the same for all viewing lengths. Memory performance was not affected by this design manipulation, thus also supporting the idea that the number and duration of fixations are not closely coupled to memory performance.

4.
Greg Davis 《Visual cognition》2013,21(3-5):411-430
Many previous studies have found that we can attend pairs of visual features (e.g., colour, orientation) more efficiently when they belong to the same “object” compared to when they belong to separate, neighbouring objects (e.g., Behrmann, Zemel, & Mozer, 1998; Egly, Rafal, & Driver, 1994). This advantage for attending features from the same object may reflect stronger binding between these features than arises for pairs of features belonging to separate objects. However, recent findings described by Davis, Welch, Holmes, and Shepherd (in press) suggest that under specific conditions this same-object advantage can be reversed, such that attention now spreads more readily between features belonging to separate neighbouring objects than between features of the same object. In such cases it would appear that features belonging to separate visual objects are more strongly bound than features of the same object. Here I review these findings and present the results of a new study. Together these data suggest that magnocellular processes in the human visual system bind together features from separate objects, whereas parvocellular processes bind together features from the same object.

5.
How does the human visual system determine the depth-orientation of familiar objects? We examined reaction times and errors in the detection of 15° differences in the depth orientations of two simultaneously presented familiar objects, which were the same objects (Experiment 1) or different objects (Experiment 2). Detection of orientation differences was best for 0° (front) and 180° (back), while 45° and 135° yielded poorer results, and 90° (side) showed intermediate results, suggesting that the visual system is tuned for front, side and back orientations. We further found that those advantages are due to orientation-specific features such as horizontal linear contours and symmetry, since the 90° advantage was absent for objects with curvilinear contours, and asymmetric objects diminished the 0° and 180° advantages. We conclude that the efficiency of visually determining object orientation is highly orientation-dependent, and object orientation may be perceived in favor of front-back axes.

6.
The current study investigated from how large a region around their current point of gaze viewers can take in information when searching for objects in real-world scenes. Visual span size was estimated using the gaze-contingent moving window paradigm. Experiment 1 featured window radii measuring 1, 3, 4, 4.7, 5.4, and 6.1°. Experiment 2 featured six window radii measuring between 5 and 10°. Each scene occupied a 24.8 × 18.6° field of view. Inside the moving window, the scene was presented in high resolution. Outside the window, the scene image was low-pass filtered to impede the parsing of the scene into constituent objects. Visual span was defined as the window size at which object search times became indistinguishable from search times in the no-window control condition; this occurred with windows measuring 8° and larger. Notably, as long as central vision was fully available (window radii ≥ 5°), the distance traversed by the eyes through the scene to the search target was comparable to baseline performance. However, to move their eyes to the target, viewers made shorter saccades, requiring more fixations to cover the same image space, and thus more time. Moreover, a gaze-data based decomposition of search time revealed disruptions in specific subprocesses of search. In addition, nonlinear mixed models analyses demonstrated reliable individual differences in visual span size and parameters of the search time function.
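The window manipulation can be approximated offline with a few lines of array code. A minimal sketch, assuming a Gaussian blur stands in for the low-pass filter; the radius, blur strength, and image size here are illustrative, not the experiment's parameters:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def moving_window(image, gaze_xy, radius_px, blur_sigma=8.0):
    """Keep the image sharp inside a circular window around gaze;
    low-pass filter (blur) everything outside it."""
    h, w = image.shape
    ys, xs = np.mgrid[0:h, 0:w]
    inside = (xs - gaze_xy[0]) ** 2 + (ys - gaze_xy[1]) ** 2 <= radius_px ** 2
    blurred = gaussian_filter(image.astype(float), sigma=blur_sigma)
    return np.where(inside, image, blurred)

# Toy 100 x 100 noise image with the window centred at (50, 50).
rng = np.random.default_rng(0)
img = rng.random((100, 100))
out = moving_window(img, gaze_xy=(50, 50), radius_px=20)
# Pixels inside the window are untouched; outside, detail is smoothed away.
```

In the actual paradigm the window is recomputed on every gaze sample; the sketch shows a single frame only.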

7.
Two eyetracking experiments tested for activation of category coordinate and perceptually related concepts when speakers prepare the name of an object. Speakers saw four visual objects in a 2 × 2 array and identified and named a target picture on the basis of either category (e.g., “What is the name of the musical instrument?”) or visual-form (e.g., “What is the name of the circular object?”) instructions. There were more fixations on visual-form competitors and category coordinate competitors than on unrelated objects during name preparation, but the increased overt attention did not affect naming latencies. The data demonstrate that eye movements are a sensitive measure of the overlap between the conceptual (including visual-form) information that is accessed in preparation for word production and the conceptual knowledge associated with visual objects. Furthermore, these results suggest that semantic activation of competitor concepts does not necessarily affect lexical selection, contrary to the predictions of lexical-selection-by-competition accounts (e.g., Levelt, Roelofs, & Meyer, 1999).

8.
Recent research suggests that the different components of eye movements (fixations, saccades) are not strictly separate but are interdependent processes. This argument rests on observations that gaze-step sizes yield unimodal distributions and exhibit power-law scaling, indicative of interdependent processes coordinated across timescales. The studies that produced these findings, however, employed complex tasks (visual search, scene perception). Thus, the question is whether the observed interdependence is a fundamental property of eye movements or emerges in the interplay between cognitive processes and complex visual stimuli. In this study, we used a simple eye movement task where participants moved their eyes in a prescribed sequence at several different paces. We outlined diverging predictions for this task for independence versus interdependence of fixational and saccadic fluctuations and tested these predictions by assessing the spectral properties of eye movements. We found no clear peak in the power spectrum attributable exclusively to saccadic fluctuations. Furthermore, changing the pace of the eye movement sequence yielded a global shift in scaling relations evident in the power spectrum, not just a localized shift for saccadic fluctuations. These results support the conclusion that fixations and saccades are interdependent processes.
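Power-law scaling of this kind shows up as a straight line in a log-log power spectrum, with no isolated peak attributable to saccades. A minimal sketch of such a spectral check, using synthetic 1/f noise rather than real gaze recordings; the 250 Hz sampling rate is an assumption:

```python
import numpy as np

def power_spectrum(signal, fs):
    """Periodogram of a 1-D signal sampled at fs Hz."""
    n = len(signal)
    freqs = np.fft.rfftfreq(n, d=1.0 / fs)
    power = np.abs(np.fft.rfft(signal - signal.mean())) ** 2 / n
    return freqs, power

# Build a synthetic 1/f series: amplitude ~ f**-0.5, random phases.
rng = np.random.default_rng(1)
fs, n = 250, 4096                       # sampling rate is an assumed value
f = np.fft.rfftfreq(n, d=1.0 / fs)
amp = np.zeros_like(f)
amp[1:] = f[1:] ** -0.5
series = np.fft.irfft(amp * np.exp(2j * np.pi * rng.random(f.size)), n=n)

freqs, power = power_spectrum(series, fs)
# Power-law scaling: log power falls linearly with log frequency (slope ~ -1),
# with no isolated spectral peak of the kind a separate saccadic process
# would produce.
slope = np.polyfit(np.log(freqs[1:-1]), np.log(power[1:-1]), 1)[0]
```

On real gaze data, the diagnostic contrast is between a spectrum like this and one with a distinct peak at the saccade rate.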

9.
When tracking spatially extended objects in a multiple object tracking task, attention is preferentially directed to the centres of those objects (attentional concentration), and this effect becomes more pronounced as object length increases (attentional amplification). However, it is unclear whether these effects depend on differences in attentional allocation or differences in eye fixations. We addressed this question by measuring eye fixations in a dual-task paradigm that required participants to track spatially extended objects, while simultaneously detecting probes that appeared at the centres or near the endpoints of objects. Consistent with previous research, we observed concentration and amplification effects: Probes at the centres of objects were detected more readily than those near their endpoints, and this difference increased with object length. Critically, attentional concentration was observed when probes were equated for distance from fixation during free viewing, and concentration and amplification were observed without eye movements during strict fixation. We conclude that these effects reflect the prioritization of covert attention to particular spatial regions within extended objects, and we discuss the role of eye fixations during multiple object tracking.

10.
How long does it take for a newborn to recognize an object? Adults can recognize objects rapidly, but measuring object recognition speed in newborns has not previously been possible. Here we introduce an automated controlled‐rearing method for measuring the speed of newborn object recognition in controlled visual worlds. We raised newborn chicks (Gallus gallus) in strictly controlled environments that contained no objects other than a single virtual object, and then measured the speed at which the chicks could recognize that object from familiar and novel viewpoints. The chicks were able to recognize the object rapidly, at presentation rates of 125 ms per image. Further, recognition speed was equally fast whether the object was presented from familiar viewpoints or novel viewpoints (30° and 60° azimuth rotations). Thus, newborn chicks can recognize objects across novel viewpoints within a fraction of a second. These results demonstrate that newborns are capable of both rapid and invariant object recognition at the onset of vision.

11.
Gorillas in our midst: sustained inattentional blindness for dynamic events   (cited 16 times: 0 self-citations, 16 by others)
Simons DJ  Chabris CF 《Perception》1999,28(9):1059-1074
With each eye fixation, we experience a richly detailed visual world. Yet recent work on visual integration and change detection reveals that we are surprisingly unaware of the details of our environment from one view to the next: we often do not detect large changes to objects and scenes ('change blindness'). Furthermore, without attention, we may not even perceive objects ('inattentional blindness'). Taken together, these findings suggest that we perceive and remember only those objects and details that receive focused attention. In this paper, we briefly review and discuss evidence for these cognitive forms of 'blindness'. We then present a new study that builds on classic studies of divided visual attention to examine inattentional blindness for complex objects and events in dynamic scenes. Our results suggest that the likelihood of noticing an unexpected object depends on the similarity of that object to other objects in the display and on how difficult the primary monitoring task is. Interestingly, spatial proximity of the critical unattended object to attended locations does not appear to affect detection, suggesting that observers attend to objects and events, not spatial positions. We discuss the implications of these results for visual representations and awareness of our visual environment.

12.
Are visual and verbal processing systems functionally independent? Two experiments (one using line drawings of common objects, the other using faces) explored the relationship between the number of syllables in an object's name (one or three) and the visual inspection of that object. The tasks were short-term recognition and visual search. Results indicated more fixations and longer gaze durations on objects having three-syllable names when the task encouraged a verbal encoding of the objects (i.e., recognition). No effects of syllable length on eye movements were found when implicit naming demands were minimal (i.e., visual search). These findings suggest that implicitly naming a pictorial object constrains the oculomotor inspection of that object, and that the visual and verbal encoding of an object are synchronized so that the faster process must wait for the slower to be completed before gaze shifts to another object. Both findings imply a tight coupling between visual and linguistic processing, and highlight the utility of an oculomotor methodology to understand this coupling.

13.
游旭群  邱香  牛勇 《心理学报》2007,39(2):201-208
Using a geometric-distance scanning task on visual images, two experiments revealed for the first time a visual-angle size effect in visual image scanning. Experiment 1 used a 3 (visual angle: 2.7°, 5.5°, 8.2°) × 3 (scanning distance: 0.0 cm, 0.4 cm, 0.8 cm) within-subjects design to examine whether visual angle, a factor operating before image formation, influences mental scanning. Experiment 2 used an 8 (visual angle: 2.7°, 4.1°, 5.5°, 6.9°, 8.2°, 9.6°, 12.3°, 17.1°) × 2 (scanning distance: 0.4 cm, 0.8 cm) within-subjects design to examine how visual angle affects the mental scanning process. The results showed that (1) scanning time in visual image scanning is affected by the visual angle of the stimulus the image corresponds to: even when the geometric distance scanned is equal, scanning times differ significantly across visual-angle conditions; and (2) scanning times within the 4° to 10° range are significantly shorter than those outside it, with about 6.5° being the optimal visual angle for visual image scanning. The visual-angle size effect is distinct from the size and distance effects of mental scanning; it adds new content to Kosslyn's computational theory of imagery and is of theoretical importance, and it also has practical value for instrument and graphic design and for board and card games.

14.
The utilization of static and kinetic information for depth by Malaysian children and young adults in making monocular relative size judgments was investigated. Subjects viewed pairs of objects or photographic slides of the same pairs and judged which was the larger of each pair. The sizes and positions of the objects were manipulated such that the more distant object subtended a visual angle equal to, 80% of, or 70% of the nearer object. Motion parallax information was manipulated by allowing or preventing head movement. All subjects displayed sensitivity to static information for depth when the two objects subtended equal visual angles. When the more distant object was larger but subtended a smaller visual angle than the nearer object, subjects tended to base their judgments on retinal size. Motion parallax information increased accuracy of judgments of three-dimensional displays but reduced accuracy of judgments of pictorial displays. Comparisons are made between these results and those for American subjects.

15.
SUN: Top-down saliency using natural statistics   (cited 1 time: 0 self-citations, 1 by others)
Kanan C  Tong MH  Zhang L  Cottrell GW 《Visual cognition》2009,17(6-7):979-1003
When people try to find particular objects in natural scenes they make extensive use of knowledge about how and where objects tend to appear in a scene. Although many forms of such "top-down" knowledge have been incorporated into saliency map models of visual search, surprisingly, the role of object appearance has been infrequently investigated. Here we present an appearance-based saliency model derived in a Bayesian framework. We compare our approach with both bottom-up saliency algorithms and the state-of-the-art Contextual Guidance model of Torralba et al. (2006) at predicting human fixations. Although the two top-down approaches use very different types of information, they achieve similar performance, each substantially better than the purely bottom-up models. Our experiments reveal that a simple model of object appearance can predict human fixations quite well, even making the same mistakes as people.
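In a Bayesian framework of this kind, point-wise saliency can be written as proportional to p(features | target) / p(features): the numerator carries top-down object appearance, the denominator bottom-up rarity. A toy discrete sketch with invented probabilities (not the SUN model's actual features or training data):

```python
# Toy discrete version of appearance-based Bayesian saliency:
# saliency(f) is proportional to p(f | target) / p(f).  Rare features are
# salient bottom-up; features typical of the target gain top-down weight.
p_feature = {"red": 0.05, "green": 0.60, "blue": 0.35}               # p(f), scene statistics
p_feature_given_target = {"red": 0.70, "green": 0.05, "blue": 0.25}  # p(f | target)

saliency = {f: p_feature_given_target[f] / p_feature[f] for f in p_feature}
most_salient = max(saliency, key=saliency.get)
# "red" wins: it is both rare in the scene and typical of the target.
```

The full model replaces these hand-set probabilities with feature distributions learned from natural image statistics.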

16.
Object constancy, the ability to recognize objects despite changes in orientation, has not been well studied in the auditory modality. Dolphins use echolocation for object recognition, and objects ensonified by dolphins produce echoes that can vary significantly as a function of orientation. In this experiment, human listeners had to classify echoes from objects varying in material, shape, and size that were ensonified with dolphin signals. Participants were trained to discriminate among the objects using an 18-echo stimulus from a 10° range of aspect angles, then tested with novel aspect angles across a 60° range. Participants were typically successful at recognizing the objects at all angles (M = 78 %). Artificial neural networks were trained and tested with the same stimuli with the purpose of identifying acoustic cues that enable object recognition. A multilayer perceptron performed similarly to the humans and revealed that recognition was enabled by both the amplitude and frequency of echoes, as well as the temporal dynamics of these features over the course of echo trains. These results provide insight into representational processes underlying echoic recognition in dolphins and suggest that object constancy perceived through the auditory modality is likely to parallel what has been found in the visual domain in studies with both humans and animals.

17.
An increasing number of studies are investigating the cognitive processes underlying human–object interactions. For instance, several researchers have manipulated the type of grip associated with objects in order to study the role of the objects’ motor affordances in cognition. The objective of the present study was to develop norms for the types of grip employed when grasping and using objects, with a set of 296 photographs of objects. On the basis of these ratings, we computed measures of agreement to evaluate the extent to which participants agreed about the grip used to interact with these objects. We also collected ratings on the dissimilarity between the grips employed for grasping and for using objects, as well as the number of actions that can typically be performed with the objects. Our results showed grip agreements of 67 % for grasping and of 65 % for using objects. Moreover, our pattern of correlations is highly consistent with the idea that the grips for grasping and using objects represent two different motor dimensions of the objects.

18.
Three experiments are reported that examined the relationship between covert visual attention and a viewer's ability to use extrafoveal visual information during object identification. Subjects looked at arrays of four objects while their eye movements were recorded. Their task was to identify the objects in the array for an immediate probe memory test. During viewing, the number and location of objects visible during given fixations were manipulated. In Experiments 1 and 2, we found that multiple extrafoveal previews of an object did not afford any more benefit than a single extrafoveal preview, as assessed by means of time of fixation on the objects. In Experiment 3, we found evidence for a model in which extrafoveal information acquired during a fixation derives primarily from the location toward which the eyes will move next. The results are discussed in terms of their implications for the relationship between covert visual attention and extrafoveal information use, and a sequential attention model is proposed.

19.
Thirty-five undergraduates served as participants. Using an EyeLink II eye tracker, we examined the effects of base rate and cognitive style on Bayesian reasoning and addressed the debate over the mechanism by which base rates operate. The experiment used a 2 (base rate: high, low) × 2 (cognitive style: field-dependent, field-independent) between-subjects design, with each participant solving one Bayesian reasoning problem set in a disease scenario. The materials were divided into four areas of interest: AOI1 (describing the base rate), AOI2 (describing the hit rate), AOI3 (describing the false-alarm rate), and AOI4 (the question); total fixation time, number of fixations, and other measures were recorded for each. The results showed that (1) for total fixation time and number of fixations, the main effects of base rate and cognitive style were not significant, but their interaction was significant; and (2) for total fixation time and number of regressions, the areas of interest differed significantly, with attention ordered AOI2 > AOI3 > AOI1 > AOI4. This indicates that base rates were not completely neglected in Bayesian reasoning and that they play different roles for individuals with different cognitive styles. It also suggests that the mechanism of base-rate use should perhaps not be considered in isolation but discussed in combination with characteristics of the individual.
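The quantities described by the base-rate, hit-rate, and false-alarm-rate regions of such a problem are exactly the inputs to Bayes' theorem, so the normative answer can be computed directly. A worked sketch with invented numbers (the study's actual problem values are not given in the abstract):

```python
def bayes_posterior(base_rate, hit_rate, false_alarm_rate):
    """P(disease | positive test) from the three quantities a typical
    Bayesian reasoning problem states."""
    true_pos = base_rate * hit_rate
    false_pos = (1.0 - base_rate) * false_alarm_rate
    return true_pos / (true_pos + false_pos)

# Low base rate: even a good test yields a modest posterior.
low = bayes_posterior(base_rate=0.01, hit_rate=0.80, false_alarm_rate=0.096)
# High base rate: the same test is now far more diagnostic.
high = bayes_posterior(base_rate=0.30, hit_rate=0.80, false_alarm_rate=0.096)
# Ignoring the base rate would make both answers look close to the 0.80
# hit rate, which is the classic base-rate-neglect error.
```

The manipulation of high versus low base rate changes the normative answer dramatically, which is why fixation time on the base-rate region is informative about whether reasoners use it.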

20.
An eye-movement study of the manipulation of visual images   (cited 1 time: 0 self-citations, 1 by others)
张霞  刘鸣 《心理学报》2009,41(4):305-315
This study explored the mental representation of imagery through eye-movement experiments on visual image rotation and scanning. Experiment 1 showed that eye-movement measures exhibit a rotation-angle effect similar to that found in reaction times. Experiment 2 showed that, in image scanning, both reaction times and eye-movement measures exhibit the same distance effect as perceptual scanning. These results suggest that eye-movement patterns during imagery are similar to those during perception; that imagery has a relatively independent mode of mental representation and its own specific processing; and that the mental representation of imagery can be depictive rather than necessarily propositional or symbolic.


Copyright © 北京勤云科技发展有限公司 (Beijing Qinyun Technology Development Co., Ltd.)  京ICP备09084417号