Similar documents
Found 20 similar documents (search time: 15 ms)
1.
Eye movements are now widely used to investigate cognitive processes during reading, scene perception, and visual search. In this article, research on the following topics is reviewed with respect to reading: (a) the perceptual span (or span of effective vision), (b) preview benefit, (c) eye movement control, and (d) models of eye movements. Related issues with respect to eye movements during scene perception and visual search are also reviewed. It is argued that research on eye movements during reading has been somewhat advanced over research on eye movements in scene perception and visual search and that some of the paradigms developed to study reading should be more widely adopted in the study of scene perception and visual search. Research dealing with “real-world” tasks and research utilizing the visual-world paradigm are also briefly discussed.

2.
Color coding is used to guide attention in computer displays for such critical tasks as baggage screening or air traffic control. It has been shown that a display object attracts more attention if its color is more similar to the color for which one is searching. However, what does similar precisely mean? Can we predict the amount of attention that a display color will receive during a search for a given target color? To tackle this question, two color-search experiments measuring the selectivity of saccadic eye movements and mapping out its underlying color space were conducted. A variety of mathematical models, predicting saccadic selectivity for given target and display colors, were devised and evaluated. The results suggest that applying a Gaussian function to a weighted Euclidean distance in a slightly modified HSI color space is the best predictor of saccadic selectivity in the chosen paradigm. Hue and intensity information by itself provides a basis for useful predictors, spanning a possibly spherical color space of saccadic selectivity. Although the current models cannot predict saccadic selectivity values for a wide variety of visual search tasks, they reveal some characteristics of color search that are of both theoretical and applied interest, such as for the design of human–computer interfaces.
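As a concrete, entirely hypothetical illustration of the kind of model the abstract describes, the sketch below predicts a saccadic-selectivity value by applying a Gaussian to a weighted Euclidean colour distance in an HSI-like space; the RGB-to-HSI conversion, the weights, and the Gaussian width are illustrative assumptions, not the fitted parameters of the study.

```python
# Illustrative sketch, NOT the authors' published model: predict saccadic
# selectivity as a Gaussian function of a weighted Euclidean distance between
# the search target's colour and a display colour in an HSI-like space.
# The conversion, weights, and sigma below are placeholder assumptions.
import numpy as np

def rgb_to_hsi(rgb):
    """Convert an RGB triple in [0, 1] to (hue in radians, saturation, intensity)."""
    r, g, b = rgb
    intensity = (r + g + b) / 3.0
    saturation = 0.0 if intensity == 0 else 1.0 - min(r, g, b) / intensity
    num = 0.5 * ((r - g) + (r - b))
    den = np.sqrt((r - g) ** 2 + (r - b) * (g - b)) + 1e-12
    theta = np.arccos(np.clip(num / den, -1.0, 1.0))
    hue = theta if b <= g else 2.0 * np.pi - theta
    return np.array([hue, saturation, intensity])

def weighted_hsi_distance(rgb1, rgb2, weights=(1.0, 0.5, 0.5)):
    """Weighted Euclidean distance; hue is treated as a circular dimension."""
    h1, s1, i1 = rgb_to_hsi(rgb1)
    h2, s2, i2 = rgb_to_hsi(rgb2)
    dh = np.angle(np.exp(1j * (h1 - h2)))  # wrap hue difference into (-pi, pi]
    diffs = np.array([dh, s1 - s2, i1 - i2])
    return np.sqrt(np.sum((np.array(weights) * diffs) ** 2))

def predicted_selectivity(target_rgb, display_rgb, sigma=0.6):
    """Gaussian falloff of predicted saccadic selectivity with colour distance."""
    d = weighted_hsi_distance(target_rgb, display_rgb)
    return np.exp(-d ** 2 / (2.0 * sigma ** 2))

# A reddish display item should attract more saccades during search for a red
# target than a greenish item does.
print(predicted_selectivity((1.0, 0.0, 0.0), (0.9, 0.2, 0.1)))
print(predicted_selectivity((1.0, 0.0, 0.0), (0.1, 0.8, 0.2)))
```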

3.
Previous research indicates that visual attention can be automatically captured by sensory inputs that match the contents of visual working memory. However, Woodman and Luck (2007) showed that information in working memory can be used flexibly as a template for either selection or rejection according to task demands. We report two experiments that extend their work. Participants performed a visual search task while maintaining items in visual working memory. Memory items were presented for either a short or long exposure duration immediately prior to the search task. Memory was tested by a change-detection task immediately afterwards. On a random half of trials, items in memory matched either one distractor in the search task (Experiment 1) or three (Experiment 2). The main result was that matching distractors speeded or slowed target detection depending on whether memory items were presented for a long or short duration. These effects were more in evidence with three matching distractors than one. We conclude that the influence of visual working memory on visual search is indeed flexible but is not solely a function of task demands. Our results suggest that attentional capture by perceptual inputs matching information in visual working memory involves a fast automatic process that can be overridden by a slower top-down process of attentional avoidance.

4.
We analysed, under laboratory test conditions, how German cockroach larvae oriented their outgoing foraging trips from their shelter. Our results stressed the importance of external factors, such as the availability and spatial distribution of food sources, in the choice of a foraging strategy within their home range. When food sources were randomly distributed, larvae adopted a random food search strategy. When food distribution was spatially predictable and reliable, cockroaches were able to associate the presence of food with a landmark during a 3-day training period and to develop an oriented search strategy. Cockroaches were able to link learned spatial information about their home range with the presence of food resources and thereby improve their foraging efficiency. However, conflict experiments revealed that detection of food odour overrode learned landmark cues.

5.
This study investigated the influence of culture on people's sensory responses (smell, taste, sound, and touch) to visual stimuli. The sensory responses of university students from four countries (Japan, South Korea, Britain and France) to six images were evaluated. The images combined real and abstract objects and were presented on a notebook computer. Overall, 280 participants (144 men and 136 women; n = 70/country) were included in the statistical analysis. A chi-square test of independence showed differences and similarities in the sensory responses across countries. Most differences were detected in smell and taste, whereas few variations were observed for sound responses. Large variations in response were observed for the abstract coral and butterfly images, but few differences were detected in response to the real leaf image. These variations in response were mostly found in the British and Japanese participants.

6.
Roberson, D., Pak, H., & Hanley, J. R. (2008). Cognition, 107(2), 752-762.
In this study we demonstrate that Korean (but not English) speakers show categorical perception (CP) on a visual search task for a boundary between two Korean colour categories that is not marked in English. These effects were observed regardless of whether target items were presented to the left or right visual field. Because this boundary is unique to Korean, these results are not consistent with the suggestion made by Drivonikou et al. [Drivonikou, G. V., Kay, P., Regier, T., Ivry, R. B., Gilbert, A. L., Franklin, A. et al. (2007). Further evidence that Whorfian effects are stronger in the right visual field than in the left. Proceedings of the National Academy of Sciences, 104, 1097-1102] that CP effects in the left visual field provide evidence for the existence of a set of universal colour categories. Dividing Korean participants into fast and slow responders demonstrated that fast responders show CP only in the right visual field, while slow responders show CP in both visual fields. We argue that this finding is consistent with the view that CP in both visual fields is verbally mediated by the left hemisphere language system.

7.
Mou, W., Fan, Y., McNamara, T. P., & Owen, C. B. (2008). Cognition, 106(2), 750-769.
Three experiments investigated the roles of the intrinsic directions of a scene and the observer's viewing direction in recognizing the scene. Participants learned the locations of seven objects along an intrinsic direction that was different from their viewing direction and then recognized spatial arrangements of three or six of these objects from different viewpoints. The results showed that triplets with two objects along the intrinsic direction (intrinsic triplets) were easier to recognize than triplets with two objects along the study viewing direction (non-intrinsic triplets), even when the intrinsic triplets were presented at a novel test viewpoint and the non-intrinsic triplets were presented at the familiar test viewpoint. The results also showed that configurations with the same three or six objects were easier to recognize at the familiar test viewpoint than at other viewpoints. These results support and develop the model of spatial memory and navigation proposed by Mou, McNamara, Valiquette, and Rump [Mou, W., McNamara, T. P., Valiquette, C. M., & Rump, B. (2004). Allocentric and egocentric updating of spatial memories. Journal of Experimental Psychology: Learning, Memory, and Cognition, 30, 142-157].

8.
Attention capacity and task difficulty in visual search
Huang, L., & Pashler, H. (2005). Cognition, 94(3), B101-B111.
When a visual search task is very difficult (as when a small feature difference defines the target), even detection of a unique element may be substantially slowed by increases in display set size. This has been attributed to the influence of attentional capacity limits. We examined the influence of attentional capacity limits on three kinds of search task: difficult feature search (with a subtle featural difference), difficult conjunction search, and spatial-configuration search. In all three tasks, each trial contained sixteen items, divided into two eight-item sets. The two sets were presented either successively or simultaneously. Comparison of accuracy in successive versus simultaneous presentations revealed that attentional capacity limitations are present only in the case of spatial-configuration search. While the other two types of task were inefficient (as reflected in steep search slopes), no capacity limitations were evident. We conclude that the difficulty of a visual search task affects search efficiency but does not necessarily introduce attentional capacity limits.
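The logic of the successive/simultaneous comparison can be made concrete with a toy fixed-capacity model; the sketch below uses assumed parameter values and is not the authors' analysis.

```python
# Toy fixed-capacity model (illustrative assumptions, not the authors' analysis):
# an observer who can process at most CAPACITY items per display frame should
# benefit from successive presentation of two half-sized displays, whereas an
# unlimited-capacity observer predicts equal accuracy in both conditions.
N_ITEMS = 16        # total items per trial, as in the study
CAPACITY = 6        # hypothetical number of items processed per frame
GUESS_RATE = 0.5    # chance accuracy when the target goes unprocessed

def predicted_accuracy(frames):
    """Accuracy when N_ITEMS are split evenly across `frames` successive displays."""
    per_frame = N_ITEMS / frames
    p_target_processed = min(CAPACITY, per_frame) * frames / N_ITEMS
    return p_target_processed + (1 - p_target_processed) * GUESS_RATE

print(f"simultaneous (1 frame of 16 items): {predicted_accuracy(1):.2f}")
print(f"successive   (2 frames of 8 items): {predicted_accuracy(2):.2f}")
```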

9.
Schizophrenia-spectrum disorders are characterized by deficits in social domains. Extant research has reported an impaired ability to perceive emotional faces in schizophrenia. Yet, it is unclear if these deficits occur already in the access to visual awareness. To investigate this question, 23 people with schizophrenia or schizoaffective disorder and 22 healthy controls performed a breaking continuous flash suppression task with fearful, happy, and neutral faces. Response times were analysed with generalized linear mixed models. People with schizophrenia-spectrum disorders were slower than controls in detecting faces, but did not show emotion-specific impairments. Moreover, happy faces were detected faster than neutral and fearful faces, across all participants. Although caution is needed when interpreting the main effect of group, our findings may suggest an elevated threshold for visual awareness in schizophrenia-spectrum disorders, but an intact implicit emotion perception. Our study provides a new insight into the mechanisms underlying emotion perception in schizophrenia-spectrum disorders.

10.
Adams, W. J. (2008). Cognition, 107(1), 137-150.
Faced with highly complex and ambiguous visual input, human observers must rely on prior knowledge and assumptions to efficiently determine the structure of their surroundings. One of these assumptions is the 'light-from-above' prior: in the absence of explicit light-source information, the visual system assumes that the light source is roughly overhead. A simple, low-cost strategy would place this 'light-from-above' prior in a retinal frame of reference. A more complex, but optimal, strategy would be to assume that the light source is gravitationally up and to compensate for observer orientation. Evidence from psychophysics and neurophysiology to support one or the other strategy has been mixed. This paper pits the gravitational and retinal frames against each other in two different visual tasks that relate to the light-from-above prior. In the first task, observers had to report the presence or absence of a target where distractors and target were defined purely by shading. In the second task, observers made explicit shape judgements of similar stimuli. The orientation of the stimuli varied across trials and the observer's head was fixed at 0°, ±45°, or ±60°. In both tasks the retinal frame of reference dominated. Visual search behaviour with shape-from-shading (SFS) stimuli was modulated purely by stimulus orientation relative to the retina. However, the gravitational frame of reference had a significant effect on shape judgements, with a 30% correction for observer orientation. In other words, shading information is processed quite differently depending on the demands of the current task. When a 'quick and dirty' representation is required to drive fast, efficient search, that is what the visual system provides. In contrast, when the task is to explicitly estimate shape, extra processing to compensate for head orientation precedes the perceptual judgement. These results are consistent with current neurophysiological data on SFS if we re-frame compensation for observer orientation as a cue-combination problem.
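A minimal numerical sketch of the two candidate reference frames follows; apart from the 30% compensation figure quoted above, all numbers are assumptions made for illustration.

```python
# Minimal numerical sketch (assumed values; only the 30% compensation figure
# comes from the abstract above): where the assumed light source points in
# retinal coordinates under a purely retinal prior versus a partially
# gravity-compensated prior, for different head tilts.
def assumed_light_direction(head_tilt_deg, compensation):
    """
    Direction of the assumed light source in retinal coordinates (degrees,
    0 = retinal 'up'). compensation = 0.0 keeps the prior retinally up
    (as in the search task); compensation = 1.0 would keep it gravitationally up.
    """
    gravitational_up_in_retina = -head_tilt_deg  # tilting the head right rotates gravity 'up' leftward on the retina
    return compensation * gravitational_up_in_retina

for tilt in (0, 45, 60):
    search = assumed_light_direction(tilt, compensation=0.0)   # retinal frame dominated search
    shape = assumed_light_direction(tilt, compensation=0.3)    # ~30% correction in shape judgements
    print(f"head tilt {tilt:>2} deg: search prior {search:+.1f} deg, shape prior {shape:+.1f} deg")
```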

11.
The question of whether cognition can influence perception has a long history in neuroscience and philosophy. Here, we outline a novel approach to this issue, arguing that it should be viewed within the framework of top-down information processing. This approach leads to a reversal of the standard explanatory order of the cognitive penetration debate: we suggest studying top-down processing at various levels without preconceptions of perception or cognition. Once a clear picture has emerged about which processes have influences on those at lower levels, we can re-address the extent to which they should be considered perceptual or cognitive. Using top-down processing within the visual system as a model for higher-level influences, we argue that the current evidence indicates clear constraints on top-down influences at all stages of information processing; it does not, however, support the notion of a boundary between specific types of information processing as proposed by the cognitive impenetrability hypothesis.

12.
An ongoing issue in visual cognition concerns the roles played by low- and high-level information in guiding visual attention, with current research remaining inconclusive about the interaction between the two. In this study, we bring fresh evidence into this long-standing debate by investigating visual saliency and contextual congruency during object naming (Experiment 1), a task in which visual processing interacts with language processing. We then compare the results of this experiment to data from a memorization task using the same stimuli (Experiment 2). In Experiment 1, we find that both saliency and congruency influence visual and naming responses and interact with linguistic factors. In particular, incongruent objects are fixated later and less often than congruent ones. However, saliency is a significant predictor of object naming, with salient objects being named earlier in a trial. Furthermore, the saliency and congruency of a named object interact with the lexical frequency of the associated word and mediate the time-course of fixations at naming. In Experiment 2, we find a similar overall pattern in the eye-movement responses, but only the congruency of the target is a significant predictor, with incongruent targets fixated less often than congruent targets. Crucially, this finding contrasts with claims in the literature that incongruent objects are more informative than congruent objects because they deviate from scene context and hence need longer processing. Overall, this study suggests that different sources of information are used interactively to guide visual attention to the targets to be named, and raises new questions for existing theories of visual attention.

13.
Attention is known to be sensitive to the temporal structure of scenes. We initially tested whether feature synchrony, an attribute with potential special status because of its association with objecthood, is something that draws attention. Search items were surrounded by colours which periodically changed either in synchrony or out of synchrony with periodic changes in their shape. Search for a target was notably faster when the target location contained a unique synchronous feature change amongst asynchronous changes. However, the reverse situation produced no search advantage. A second experiment showed that this effect of unique synchrony was actually a consequence of the lower rate of perceived flicker in the synchronous compared to the asynchronous items, not the synchrony itself. In our displays it seems that attention is drawn towards a location which has a relatively low rate of change. Overall, the pattern of results suggested that the attentional bias we find is for relative temporal stability. These results stand in contrast to other work which has found high and low flicker rates to draw attention equally [Cass, J., Van der Burg, E., & Alais, D. (2011). Finding flicker: Critical differences in temporal frequency capture attention. Frontiers in Psychology, 2, 320]. Further work is needed to determine the exact conditions under which this bias is and is not found when searching in complex, dynamically changing displays.

14.
Recent research has shown that, in visual search, participants can miss 30–40% of targets when targets appear only rarely (i.e., on 1–2% of trials). Low target prevalence alters the behaviour of the searcher. It can lead participants to quit their search prematurely (Wolfe, Horowitz, & Kenner, 2005), to shift their decision criteria (Wolfe et al., 2007), and/or to make motor or response errors (Fleck & Mitroff, 2007). In this paper we examine whether the low prevalence (LP) effect can be ameliorated if we split the search set in two, spreading the task out over space and/or time. Observers searched for the letter “T” among “L”s. In Experiment 1, the left or right half of the display was presented to the participants before the second half. In Experiment 2, items were spatially intermixed but half of the items were presented first, followed by the second half. Experiment 3 followed the methods of Experiment 2 but allowed observers to correct perceived errors. All three experiments produced robust LP effects, with higher errors at 2% prevalence than at 50% prevalence. Dividing up the display had no beneficial effect on errors. The opportunity to correct errors reduced but did not eliminate the LP effect. Low prevalence continues to elevate errors even when observers are forced to slow down and are permitted to correct errors.

15.
Current theories assume that there is substantial overlap between visual working memory (VWM) and visual attention functioning, such that active representations in VWM automatically act as an attentional set, resulting in attentional biases towards objects that match the mnemonic content. Most evidence for this comes from visual search tasks in which a distractor similar to the memory content interferes with the detection of a simultaneous target. Here we provide additional evidence using one of the most popular paradigms in the literature for demonstrating an active attentional set: the contingent spatial orienting paradigm of Folk and colleagues. This paradigm allows memory-based attentional biases to be more directly attributed to spatial orienting. Experiment 1 demonstrated a memory-contingent spatial attention effect for colour but not for shape contents of VWM. Experiment 2 tested the hypothesis that the placeholders used for spatial cueing interfered with shape processing, and showed that memory-based attentional capture for shape returned when the placeholders were removed. The results of the present study are consistent with earlier findings from distractor interference paradigms, and provide additional evidence that biases in spatial orienting contribute to memory-based influences on attention.

16.
This research evaluated infants’ facial expressions as they viewed pictures of possible and impossible objects on a TV screen. Previous studies in our lab demonstrated that four-month-old infants looked longer at the impossible figures and fixated to a greater extent within the problematic region of the impossible shape, suggesting they were sensitive to novel or unusual object geometry. Our work takes studies of looking-time data a step further, determining whether increased looking co-occurs with facial expressions associated with increased visual interest and curiosity, or even puzzlement and surprise. We predicted that infants would display more facial expressions consistent with either “interest” or “surprise” when viewing the impossible objects relative to possible ones, which would provide further evidence of increased perceptual processing due to incompatible spatial information. Our results showed that the impossible cubes evoked both longer looking times and more reactive expressions in the majority of infants. Specifically, the data revealed a significantly greater frequency of raised eyebrows, widened eyes, and returns to looking when viewing impossible figures, with the most robust effects occurring after a period of habituation. The pattern of facial expressions was consistent with the “interest” family of facial expressions and appears to reflect infants’ ability to perceive systematic differences between matched pairs of possible and impossible objects as well as to recognize the novel geometry found in impossible objects. Therefore, as young infants begin to register perceptual discrepancies in visual displays, their facial expressions may reflect heightened attention and increased information processing associated with identifying irreconcilable contours in line drawings of objects. This work further clarifies the ongoing formation and development of early mental representations of coherent 3D objects.

17.
Introduction: A visual metaphor is an image depicting an incongruity in the spatial distribution of visual elements, which is not consistent with reality. In this paper, we present an overview of theoretical and empirical studies on visual metaphor processing. Objective: Firstly, attention is given to the spatial distribution of visual elements, which defines the type of visual metaphor as well as the meaning operations required to understand the communicative message. The paper then reviews several empirical studies that collect behavioural measures for assessing visual metaphor processing using questionnaires and that explore the role played by cognitive abilities. In line with the contemporary literature, we then present three semiotic dimensions of visual metaphor processing, namely expression, conceptualisation, and communication. We then present a pilot study that focusses on these three dimensions. Few research studies have collected behavioural data on visual metaphor processing; this may be due to the lack of a theoretical framework in the corresponding field of research. The three semiotic dimensions highlight the value of pursuing research on visual metaphors within the scope of psychology, and the presented pilot study could be a starting point for the investigation of visual metaphors in the framework of psychology. Conclusion: We propose future avenues of research that consider the visual structure of metaphors and meaning operations while assessing the influence of cognitive abilities on visual metaphor processing.

18.
The findings of previous investigations into word perception in the upper and lower visual field (VF) are variable and may have incurred non-perceptual biases caused by the asymmetric distribution of information within a word, an advantage for saccadic eye movements to targets in the upper VF, and the possibility that stimuli were not projected to the correct retinal locations. The present study used the Reicher-Wheeler task and an eye-tracker to show that, using stringent methodology, a right-over-left VF advantage is observed for word recognition, but that no differences were found between the upper and lower VF for either word or non-word recognition. The results are discussed in terms of the neuroanatomy and perceptual abilities of the upper and lower VF, and implications for other studies of letter-string perception in the upper and lower VF are presented.

19.
This study investigated how futsal players visually perceived information on angular interpersonal coordination relations between available sources, such as the nearest defender, goalkeeper position, and ball, when deciding to shoot at goal. Experienced players (n = 180) participated in eighteen video-recorded futsal matches, during which 32 participants wore an eye-tracking device. Forty-five sequences of play were selected and edited from the moment a teammate passed the ball to the shooter until the moment a shot was undertaken. Independent variables included the angle connecting the shooter to their closest defender and goalkeeper, and its rate of change (velocity and variability) during performance. An eye-tracking system (TOBII PRO) was then used to examine the gaze patterns of shooters during task performance. Findings revealed that (i) futsal players adapted their gaze patterns differently between key information sources when shooting, confirmed to be their closest defender, goalkeeper, ball, and court floor; and (ii) the ball was the information source most fixated on, regardless of the characteristics of the interpersonal coordination tendencies that emerged when shooting. These findings can be interpreted as evidence of functional perceptual behaviours used to regulate the actions needed to ensure precise contact with the ball when shooting at goal. Further, the adaptations of fixation patterns, which varied between the marking defender, goalkeeper, and ball, may provide functional postural orientation to facilitate a successful shot at goal.

20.
Visual attention and response selection are both limited in capacity. Here, we investigated whether visual attention requires the same bottleneck mechanism as response selection in a dual-task of the psychological refractory period (PRP) paradigm. The dual-task consisted of an auditory two-choice discrimination Task 1 and a conjunction search Task 2, which were presented at variable temporal intervals (stimulus onset asynchrony, SOA). In conjunction search, visual attention is required to select items and to bind their features, resulting in a serial search process across the items in the search display (i.e., the set size). We measured the reaction time in the visual search task (RT2) and the N2pc, an event-related potential (ERP) component that reflects lateralized visual attention processes. If the response selection processes in Task 1 influence the visual attention processes in Task 2, the N2pc should be delayed in latency and attenuated in amplitude at short SOA compared to long SOA. The results, however, showed that latency and amplitude were independent of SOA, indicating that visual attention was deployed concurrently with response selection. Moreover, the RT2 analysis revealed an underadditive interaction of SOA and set size. We conclude that visual attention does not require the same bottleneck mechanism as response selection in dual-tasks.
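The interpretation of that underadditive interaction can be illustrated with a standard locus-of-slack calculation; the stage durations in the sketch below are made up for illustration and are not estimates from the study.

```python
# Locus-of-slack sketch with made-up stage durations (not values from the
# study): if the set-size-dependent conjunction search stage precedes the
# Task 1 response-selection bottleneck, its duration is absorbed into
# cognitive slack at short SOA, producing an underadditive SOA x set size
# interaction in RT2.
T1_PRE, T1_BOTTLENECK = 200, 300   # ms: Task 1 pre-bottleneck and bottleneck stages
SEARCH_MS_PER_ITEM = 30            # hypothetical serial search rate in Task 2
T2_POST = 250                      # ms: Task 2 post-bottleneck stage

def rt2(soa, set_size):
    """Predicted Task 2 reaction time, measured from Task 2 onset."""
    bottleneck_free = T1_PRE + T1_BOTTLENECK         # when Task 2 may enter the bottleneck
    t2_ready = soa + SEARCH_MS_PER_ITEM * set_size   # when Task 2 finishes its pre-bottleneck search
    return max(bottleneck_free, t2_ready) + T2_POST - soa

for soa in (50, 800):
    slope = (rt2(soa, 8) - rt2(soa, 4)) / 4
    print(f"SOA {soa} ms: set-size slope = {slope:.0f} ms/item")
```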
