301.
The current study investigated how large a region around the current point of gaze viewers can take in information from when searching for objects in real-world scenes. Visual span size was estimated using the gaze-contingent moving window paradigm. Experiment 1 featured window radii measuring 1, 3, 4, 4.7, 5.4, and 6.1°. Experiment 2 featured six window radii measuring between 5 and 10°. Each scene occupied a 24.8 × 18.6° field of view. Inside the moving window, the scene was presented in high resolution. Outside the window, the scene image was low-pass filtered to impede the parsing of the scene into constituent objects. Visual span was defined as the window size at which object search times became indistinguishable from search times in the no-window control condition; this occurred with windows measuring 8° and larger. Notably, as long as central vision was fully available (window radii ≥ 5°), the distance traversed by the eyes through the scene to the search target was comparable to baseline performance. However, to move their eyes to the target, viewers made shorter saccades, requiring more fixations to cover the same image space, and thus more time. Moreover, a gaze-data based decomposition of search time revealed disruptions in specific subprocesses of search. In addition, nonlinear mixed models analyses demonstrated reliable individual differences in visual span size and parameters of the search time function.
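The compositing step of the gaze-contingent moving window paradigm described above can be sketched in a few lines: pixels within the window radius of the current gaze position come from the high-resolution scene, and pixels outside it come from a low-pass filtered copy. This is an illustrative sketch only; the function name, array layout, and the choice of filter are assumptions, not the study's implementation.

```python
import numpy as np

def moving_window(scene, blurred, gaze_xy, radius_px):
    """Composite a gaze-contingent display: the high-resolution scene
    inside the window radius, the low-pass filtered copy outside it.

    scene, blurred : 2-D arrays of identical shape (grayscale images)
    gaze_xy        : (x, y) gaze position in pixel coordinates
    radius_px      : window radius in pixels
    """
    h, w = scene.shape[:2]
    ys, xs = np.mgrid[0:h, 0:w]
    # Euclidean distance of every pixel from the gaze position
    dist = np.hypot(xs - gaze_xy[0], ys - gaze_xy[1])
    inside = dist <= radius_px
    return np.where(inside, scene, blurred)
```

In an actual experiment this composite would be recomputed on every display refresh from the latest eye-tracker sample, which is what makes the window "gaze-contingent".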
302.
Studies have shown that perceiving another person's gaze shift facilitates responses in the direction of the perceived gaze shift. While it is often assumed that participants in these experiments remain fixated on the cue in the cueing interval, eye gaze is not always recorded to confirm this. The data presented here suggest that the effect of gaze cues on responses to peripheral targets depends on whether participants make eye movements prior to the onset of the target. Participants who were required to fixate showed cueing effects at short cue–target intervals, but no cueing at later intervals. Participants who could look around often chose to do so, and showed the same positive cueing effects at the shorter interval, but negative cueing effects (suggestive of inhibition of return) at the longer interval.
303.
Introduction: Automation research has identified the need to monitor operator attentional states in real time as a basis for determining the most appropriate type and level of automated assistance for operators doing complex tasks. Objective: The development of a methodology that is able to detect on-line operator attentional state variations could represent a good starting point to solve this critical issue. Results: We present a short review of the literature on different indices of attentional state and discuss a series of experiments that demonstrates the validity and sensitivity of a specific eye movement index: saccadic peak velocity (PV). PV was able to detect variations in mental state while doing complex and ecological tasks, ranging from air traffic control simulated tasks to driving simulator sessions. Conclusion: This research could provide several guidelines for designing adaptive systems (able to allocate tasks between operators and machine in a dynamic way) and early fatigue-and-distraction warning systems to reduce accident risk.
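Saccadic peak velocity, the index discussed above, is simply the maximum instantaneous angular velocity reached during a saccade. A minimal sketch of how it could be computed from sampled gaze positions follows; the function name and the simple finite-difference velocity estimate are assumptions (real pipelines typically smooth the velocity trace and detect saccade boundaries first).

```python
import numpy as np

def saccadic_peak_velocity(x, y, sample_rate_hz):
    """Peak angular velocity (deg/s) of a saccade, given gaze position
    samples x, y in degrees of visual angle at a fixed sample rate."""
    dt = 1.0 / sample_rate_hz
    # sample-to-sample velocity components (deg/s)
    vx = np.diff(x) / dt
    vy = np.diff(y) / dt
    speed = np.hypot(vx, vy)   # instantaneous angular speed
    return speed.max()
```

At a typical eye-tracker rate of 1000 Hz, a 10° saccade lasting ~40 ms would yield a peak velocity on the order of several hundred degrees per second.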
304.
In everyday life, fast identification and processing of threat-related stimuli is of critical importance for survival. Previous studies suggested that spatial attention is automatically allocated to threatening stimuli, such as angry faces. However, in those studies the threatening stimuli were not completely irrelevant to the task. In the present study we used saccadic curvature to investigate whether attention is automatically allocated to threatening emotional information. Participants had to make an endogenous saccade up or down while an irrelevant face paired with an object was present in the periphery. The eyes curved away more from the angry faces than from either neutral or happy faces. This effect was not observed when the faces were inverted, excluding a possible role of low-level differences. Since the angry faces were completely irrelevant to the task, the results suggest that attention is automatically allocated to threatening stimuli, which generates activity in the oculomotor system and biases behaviour.
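Saccade curvature is commonly quantified as the perpendicular deviation of the trajectory from the straight line joining saccade onset and landing point; "curving away" from a distractor means this deviation points away from the distractor's side. A minimal sketch follows (the function name and the max-deviation metric are assumptions; the abstract does not specify the exact curvature measure used).

```python
import numpy as np

def saccade_curvature(xs, ys):
    """Signed perpendicular deviation (same units as input) of the
    trajectory sample farthest from the start-to-end straight line.
    The sign distinguishes curvature toward one side vs the other."""
    p0 = np.array([xs[0], ys[0]])
    p1 = np.array([xs[-1], ys[-1]])
    d = p1 - p0
    length = np.hypot(d[0], d[1])
    pts = np.stack([xs, ys], axis=1) - p0
    # 2-D cross product gives signed area; dividing by the chord
    # length yields signed perpendicular distance from the line
    dev = (d[0] * pts[:, 1] - d[1] * pts[:, 0]) / length
    return dev[np.abs(dev).argmax()]
```

For an upward saccade, trajectory samples displaced to one side of the vertical chord yield deviations of one sign, so averaging or taking the extreme signed deviation distinguishes curvature toward versus away from a lateral distractor.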
305.
Two experiments examined how well the long-term visual memories of objects that are encountered multiple times during visual search are updated. Participants searched for a target two or four times (e.g., white cat) among distractors that shared the target's colour, category, or were unrelated while their eye movements were recorded. Following the search, a surprise visual memory test was given. With additional object presentations, only target memory reliably improved; distractor memory was unaffected by the number of object presentations. Regression analyses using the eye movement variables as predictors indicated that number of object presentations predicted target memory with no additional impact of other viewing measures. In contrast, distractor memory was best predicted by the viewing pattern on the distractor objects. Finally, Experiment 2 showed that target memory was influenced by number of target object presentations, not number of searches for the target. Each of these experiments demonstrates visual memory differences between target and distractor objects and may provide insight into representational differences in visual memory.
306.
Across three experiments we sought to determine whether extrafoveally presented emotional faces are processed sufficiently rapidly to influence saccade programming. Two rectangular targets containing a neutral and an emotional face were presented on either side of a central fixation cross. Participants made prosaccades towards an abrupt luminosity change to the border of one of the rectangles. The faces appeared 150 ms before or simultaneously with the cue. Saccades were faster towards cued rectangles containing emotional compared to neutral faces even when the rectangles were positioned 12 degrees from the fixation cross. When faces were inverted, the facilitative effect of emotion only emerged in the −150 ms SOA condition, possibly reflecting a shift from configural to featural face processing. Together the results suggest that the human brain is highly specialized for processing emotional information and responds very rapidly to the brief presentation of expressive faces, even when these are located outside foveal vision.
307.
The study examined whether literal correspondence is necessary for the use of visual features during word recognition and text comprehension. Eye movements were recorded during reading and used to change the colour of dialogue when it was fixated. In symbolically congruent colour conditions, dialogue of female and male characters was shown in orchid and blue, respectively. The reversed assignment was used in incongruent conditions, and no colouring was applied in a control condition. Analyses of oculomotor activity revealed Stroop-type congruency effects during dialogue reading, with shorter viewing durations in congruent than incongruent conditions. Colour influenced oculomotor measures that index the recognition and integration of words, indicating that it influenced multiple stages of language processing.
308.
Incidental memory for parts of scenes was examined in two search experiments and one memory control experiment. Eye movements were recorded during the search experiments and used to select gaze-contingent sections from search scenes for a surprise memory recognition task. Results from the recognition task showed that incidental memory was better for sections viewed longer and with multiple fixations. Even sections not fixated during search were recognised above chance. Differences in sections did not affect memory performance in a control experiment when viewing time was held constant. These results show that memory for parts of scenes can occur incidentally during search, and that encoding of tested sections is better with longer viewing time and with multiple fixations.
309.
Differences in eye movement patterns are often found when comparing passive viewing paradigms to actively engaging in everyday tasks. Arguably, investigations into visuomotor control should therefore be most useful when conducted in settings that incorporate the intrinsic link between vision and action. We present a study that compares oculomotor behaviour and hazard reaction times across a simulated driving task and a comparable, but passive, video-based hazard perception task. We found that participants scanned the road less during the active driving task and fixated closer to the front of the vehicle. Participants were also slower to detect the hazards in the driving task. Our results suggest that the interactivity of simulated driving places greater demand upon the visual and attentional systems than simply viewing driving movies does. We offer insights into why these differences occur and explore the possible implications of such findings within the wider context of driver training and assessment.
310.
The present study investigated predictors of age effects in emotion recognition accuracy. Older and younger adults were tested on a battery of cognitive, vision, and affective questionnaires; participants' eyes were also tracked while they completed an emotion recognition task. Older adults were worse at recognising sad, angry, and fearful expressions than younger adults. When controlling for covariates related to emotion recognition accuracy, younger adults still outperformed older adults in recognising anger and sadness. Younger adults tended to pay more attention to the eyes than older adults. Results suggest that age-related gaze patterns in emotion recognition may depend on the specific emotion being recognised and may not generalise across stimuli sets.