131.
In diagnostic reasoning, knowledge about symptoms and their likely causes is retrieved to generate and update diagnostic hypotheses in memory. By letting participants learn about causes and symptoms in a spatial array, we could apply eye tracking during diagnostic reasoning to trace the activation level of hypotheses across a sequence of symptoms and to evaluate process models of diagnostic reasoning directly. Gaze allocation on former locations of symptom classes and possible causes reflected the diagnostic value of initial symptoms, the set of contending hypotheses, consistency checking, biased symptom processing in favor of the leading hypothesis, symptom rehearsal, and hypothesis change. Gaze behavior mapped the reasoning process and was not dominated by auditorily presented symptoms. Thus, memory indexing proved applicable for studying reasoning tasks involving linguistic input. Looking at nothing revealed memory activation because of a close link between conceptual and motor representations and was stable even after one week.
132.
Peripheral vision outside the focus of attention may rely on summary statistics. We used a gaze-contingent paradigm to directly test this assumption by asking whether search performance differed between targets and statistically matched visualizations of the same targets. Four-object search displays included one statistically matched object that was replaced by an unaltered version of the object during the first eye movement. Targets were designated by previews, which were never altered. Two types of statistically matched objects were tested: one that maintained global shape and one that did not. Differences in guidance were found between targets and statistically matched objects when shape was not preserved, suggesting that they were not informationally equivalent. Responses were also slower after target fixation when shape was not preserved, suggesting an extrafoveal processing of the target that again used shape information. We conclude that summary statistics must include some global shape information to approximate the peripheral information used during search.
133.
Adaptive cruise control (ACC), a driver assistance system that controls longitudinal motion, was introduced in consumer cars in 1995. The next milestone is highly automated driving (HAD), a system that automates both longitudinal and lateral motion. We investigated the effects of ACC and HAD on drivers’ workload and situation awareness through a meta-analysis and narrative review of simulator and on-road studies. Based on a total of 32 studies, the unweighted mean self-reported workload was 43.5% for manual driving, 38.6% for ACC driving, and 22.7% for HAD (0% = minimum, 100% = maximum, on the NASA Task Load Index or Rating Scale Mental Effort). Based on 12 studies, the number of tasks completed on an in-vehicle display relative to manual driving (100%) was 112% for ACC and 261% for HAD. Drivers of a highly automated car, and to a lesser extent ACC drivers, are likely to pick up tasks that are unrelated to driving. Both ACC and HAD can result in improved situation awareness compared to manual driving if drivers are motivated or instructed to detect objects in the environment. However, if drivers are engaged in non-driving tasks, situation awareness deteriorates for ACC and HAD compared to manual driving. The results of this review are consistent with the hypothesis that, from a Human Factors perspective, HAD is markedly different from ACC driving, because the driver of a highly automated car has the possibility, for better or worse, to divert attention to secondary tasks, whereas an ACC driver still has to attend to the roadway.
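The aggregation behind these figures is a plain unweighted mean: every study contributes equally, regardless of its sample size. A minimal sketch of that arithmetic is shown below; the per-study workload values are invented placeholders (chosen only so that each condition mean matches the value reported above), not the actual data from the reviewed studies.

```python
# Hypothetical per-study self-reported workload values (% of scale maximum).
# These numbers are illustrative placeholders, not the reviewed studies' data.
workload_by_study = {
    "manual": [50.0, 41.0, 39.5],
    "ACC":    [44.0, 37.0, 34.8],
    "HAD":    [28.0, 21.0, 19.1],
}

def unweighted_mean(values):
    """Plain arithmetic mean: each study counts once, regardless of its N."""
    return sum(values) / len(values)

for condition, values in workload_by_study.items():
    print(f"{condition}: {unweighted_mean(values):.1f}%")
```

A sample-size-weighted mean would instead multiply each study's value by its number of participants before averaging; the review's headline figures deliberately avoid that weighting.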
134.
Reingold, Reichle, Glaholt, and Sheridan (2012) reported a gaze‐contingent eye‐movement experiment in which survival‐curve analyses were used to examine the effects of word frequency, the availability of parafoveal preview, and initial fixation location on the time course of lexical processing. The key results of these analyses suggest that lexical processing begins very rapidly (after approximately 120 ms) and is supported by substantial parafoveal processing (more than 100 ms). Because it is not immediately obvious that these results are congruent with the theoretical assumption that words are processed and identified in a strictly serial manner, we attempted to simulate the experiment using the E‐Z Reader model of eye‐movement control (Reichle, 2011). These simulations were largely consistent with the empirical results, suggesting that parafoveal processing does play an important functional role by allowing lexical processing to occur rapidly enough to mediate direct control over when the eyes move during reading.
135.
136.
Within the context of increasingly autonomous vehicles, an automatic lateral control device (AS: Automatic Steering) was used to steer the vehicle along the road without driver intervention. The device was not able to detect and avoid obstacles. The experiment aimed to analyse unexpected obstacle avoidance manoeuvres when lateral control was delegated to automation. It was hypothesized that drivers’ skirting behaviours and eye movement patterns would be modified with automated steering compared with a control situation without automation. Eighteen participants took part in a driving simulator study. Steering behaviours and eye movements were analysed during obstacle avoidance episodes. Compared with driving without automation, skirting around obstacles was found to be less effective when drivers had to return from automatic steering to manual control. Eye movements were modified in the presence of automatic steering, revealing visual scanning further ahead in the driving environment. Resuming manual control is not only a problem of action performance but is also related to the reorganisation of drivers’ visual strategies linked to drivers’ disengagement from the steering task. Assistance designers should pay particular attention to potential changes in drivers’ activity when carrying out development work on highly automated vehicles.
137.
There is building evidence that highly socially anxious (HSA) individuals frequently avoid making eye contact, which may contribute to less meaningful social interactions and maintenance of social anxiety symptoms. However, research to date is lacking in ecological validity due to the usage of either static or pre-recorded facial stimuli or subjective coding of eye contact. The current study examined the relationships among trait social anxiety, eye contact avoidance, state anxiety, and participants’ self-perceptions of interaction performance during a live, four-minute conversation with a confederate via webcam, and while being covertly eye-tracked. Participants included undergraduate women who conversed with same-sex confederates. Results indicated that trait social anxiety was inversely related to eye contact duration and frequency averaged across the four minutes, and positively related to state social anxiety and negative self-ratings. In addition, greater anticipatory state anxiety was associated with reduced eye contact throughout the first minute of the conversation. Eye contact was not related to post-task state anxiety or self-perception of poor performance, although trends emerged in which these relations may be positive for HSA individuals. The current findings provide enhanced support for the notion that eye contact avoidance is an important feature of social anxiety.
138.
Psychometric schizotypy in the general population correlates negatively with face recognition accuracy, potentially due to deficits in inhibition, social withdrawal, or eye-movement abnormalities. We report an eye-tracking face recognition study in which participants were required to match one of two faces (target and distractor) to a cue face presented immediately before. All faces could be presented with or without paraphernalia (e.g., hats, glasses, facial hair). Results showed that paraphernalia distracted participants, and that the most distracting condition was when the cue and the distractor face had paraphernalia but the target face did not. There was no correlation between distractibility and participants’ scores on the Schizotypal Personality Questionnaire (SPQ). Schizotypy was negatively correlated with the proportion of time spent fixating on the eyes and positively correlated with not fixating on a feature. It was also negatively correlated with scan path length, a variable that in turn correlated with face recognition accuracy. These results are interpreted as schizotypal traits being associated with a restricted scan path leading to face recognition deficits.
139.
Top-down attentional settings can persist between two unrelated tasks, influencing visual attention and performance. This study investigated whether top-down contextual information in a second task could moderate this “attentional inertia” effect. Forty participants searched through letter strings arranged horizontally, vertically, or randomly and then made a judgement about road, nature, or fractal images. Eye movements were recorded during the picture search, and findings showed greater horizontal search in the pictures following horizontal letter strings and narrower horizontal search following vertical letter strings, but only in the first 1000 ms. This shows a brief persistence of attentional settings, consistent with past findings. Crucially, attentional inertia did not vary according to image type. This indicates that top-down contextual biases within a scene have limited impact on the persistence of previously relevant, but now irrelevant, attentional settings.
140.
In partially automated vehicles, the driver and the automated system share control of the vehicle. Consequently, the driver may have to switch between driving and monitoring activities. This can critically impact the driver’s situational awareness. The human–machine interface (HMI) is responsible for efficient collaboration between driver and system. It must keep the driver informed about the status and capabilities of the automated system, so that he or she knows who or what is in charge of the driving. The present study was designed to compare the ability of two HMIs with different information displays to inform the driver about the system’s status and capabilities: a driving-centered HMI that displayed information in a multimodal way, with an exocentric representation of the road scene, and a vehicle-centered HMI that displayed information in a more traditional visual way. The impact of these HMIs on drivers was compared in an on-road study. Drivers’ eye movements and response times for questions asked while driving were measured. Their verbalizations during the test were also transcribed and coded. Results revealed shorter response times for questions on speed with the exocentric and multimodal HMI. The duration and number of fixations on the speedometer were also greater with the driving-centered HMI. The exocentric and multimodal HMI helped drivers understand the functioning of the system, but was more visually distracting than the traditional HMI. Both HMIs caused mode confusions. The use of a multimodal HMI can be beneficial and should be prioritized by designers. The use of auditory feedback to provide information about the level of automation needs to be explored in longitudinal studies.