141.
The aim of this study was to analyze and compare the effects of different types of digital billboard advertisements (DBAs) on drivers’ performance and attention allocation. Driver distraction is a major threat to driver safety. DBAs are one form of distraction in drivers’ outside environment. There are many different types of DBAs, such as static images, changing images, or videos. However, it is not clear to what extent each of these contributes to driver distraction. A total of 100 students participated in a controlled driving simulator experiment in an urban environment. Measures of driving performance were collected, as well as eye tracking and EEG as windows into attention allocation. The different types of DBAs investigated were static (a single image), transitioning (one static DBA replaces another), and animated (short videos). The statistical analysis demonstrated that there were significant differences in the effect of each type of DBA on drivers' performance (deviation from the center of the lane and reaction time), visual attention to the road (percent of fixations on the road, percent of fixations on DBAs, fixation duration on DBAs, and number of gazes on DBAs), and the EEG theta band and beta band. These results show that driving performance and attention to the road were both more negatively affected when drivers were exposed to transitioning and animated DBAs as compared to static DBAs. The results of this study provide guidance for the better design and regulation of DBAs in order to minimize driver distraction.
142.
The majority of collisions at grade crossings occur at flashing-light-controlled crossings. Understanding drivers' behavior and visual performance while approaching a crossing is the foundation of improving crossing safety. This study used driving simulation and eye tracking to investigate the efficacy of improved traffic signs and pavement markings (PSM) at flashing-light-controlled grade crossings. The improved signs and markings were modeled in a driving simulation system and tested with flashing-light trigger times (FLTT) ranging from 2 s to 6 s in 1 s increments. Foggy conditions and drivers' gender and vocation were considered in the experimental design. Thirty-six fully licensed drivers aged 30 to 48 participated in the experiment. Several eye-movement and behavioral measures were adopted to reflect the participants' performance, including the first fixation time on signs and signals and the distance to the stop line, total fixation duration, compliance rate, stop position, average speed at the stop line, maximum deceleration rate, and brake response time. Results showed that, compared with traditional grade-crossing signs and pavement markings, drivers perceived signs more promptly and fixated on the flashing-light signal earlier with PSM, especially in scenarios with earlier FLTTs. The improvement in fixation performance and sign design contributed to a higher stop compliance rate. Importantly, it was found that drivers hesitated over whether to stop or cross when facing flashing red lights, a situation similar to the dilemma zone at roadway intersections. Drivers were more likely to fall into the dilemma zone when FLTT was <4 s; when FLTT was 2 s, it was particularly difficult to stop in front of the stop line. Moreover, under foggy conditions, drivers had difficulty locating signs and had longer brake response times than under clear conditions.
Regarding driver characteristics, male drivers had longer fixation durations on signs than females, and professional drivers had higher maximum deceleration rates than non-professional drivers. These findings imply that improved traffic signs and markings have the potential to improve traffic safety and deserve field implementation in the future.
143.
Although eye tracking has been used extensively to assess cognitions for static stimuli, recent research suggests that the link between gaze and cognition may be more tenuous for dynamic stimuli such as videos. Part of the difficulty in convincingly linking gaze with cognition is that in dynamic stimuli, gaze position is strongly influenced by exogenous cues such as object motion. However, tests of the gaze-cognition link in dynamic stimuli have been done on only a limited range of stimuli, often characterized by highly organized motion. Also, analyses of cognitive contrasts between participants have mostly been limited to categorical contrasts among small numbers of participants, which may have limited the power to observe more subtle influences. We therefore tested for cognitive influences on gaze for screen-captured instructional videos, the contents of which participants were tested on. Between-participant scanpath similarity predicted between-participant similarity in responses on test questions, but with imperfect consistency across videos. We also observed that basic gaze parameters and measures of attention to centers of interest only inconsistently predicted learning, and that correlations between gaze and centers of interest defined by other-participant gaze and cursor movement did not predict learning. It therefore appears that the search for eye-movement indices of cognition during dynamic naturalistic stimuli may be fruitful, but we also agree that the tyranny of dynamic stimuli is real, and that links between eye movements and cognition are highly dependent on task and stimulus properties.
144.
Infants’ early visual preferences for faces, and their observational learning abilities, are well-established in the literature. The current study examines how infants’ attention changes as they become increasingly familiar with a person and the actions that person is demonstrating. The looking patterns of 12- (n = 61) and 16-month-old infants (n = 29) were tracked while they watched videos of an adult presenting novel actions with four different objects three times. A face-to-action ratio in visual attention was calculated for each repetition and summarized as a mean across all videos. The face-to-action ratio increased with each action repetition, indicating that attention to the face relative to the action increased each additional time the action was demonstrated. Infants’ prior familiarity with the object used was related to the face-to-action ratio in 12-month-olds, and initial looking behavior was related to the face-to-action ratio in the whole sample. Prior familiarity with the presenter, infant gender, and age were not related to the face-to-action ratio. This study has theoretical implications for face preference and action observation in dynamic contexts.
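The per-repetition ratio described above is simple arithmetic; a minimal sketch in Python, assuming the ratio is face looking time over combined face-plus-action looking time (the abstract does not give the exact formula), with hypothetical looking times:

```python
def face_to_action_ratio(face_time, action_time):
    """Share of looking time on the face relative to face + action.

    The exact definition is an assumption; the abstract only states
    that a face-to-action ratio was computed for each repetition.
    """
    return face_time / (face_time + action_time)

# Hypothetical looking times (seconds) for three repetitions of one
# action demonstration: attention shifts toward the face with repetition.
reps = [(1.0, 4.0), (1.5, 3.5), (2.0, 3.0)]  # (face_time, action_time)
ratios = [face_to_action_ratio(f, a) for f, a in reps]
mean_ratio = sum(ratios) / len(ratios)  # summarized as a mean, per the study
assert ratios == sorted(ratios)  # ratio increases with each repetition
```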
145.
Displays of eye movements may convey information about cognitive processes but require interpretation. We investigated whether participants were able to interpret displays of their own or others' eye movements. In Experiments 1 and 2, participants observed an image under three different viewing instructions. They were then shown static or dynamic gaze displays and had to judge whether the display showed their own or someone else's eye movements and which instruction it reflected. Participants were capable of recognizing the instruction reflected in their own and someone else's gaze display. Instruction recognition was better for dynamic displays, and only this condition yielded above-chance performance in recognizing the display as one's own or another person's (Experiments 1 and 2). Experiment 3 revealed that order information in the gaze displays facilitated instruction recognition when transitions between fixated regions distinguish one viewing instruction from another. Implications of these findings are discussed.
146.
Many fatal accidents that involve pedestrians occur at road crossings, and are attributed to a breakdown of communication between pedestrians and drivers. Thus, it is important to investigate how forms of communication in traffic, such as eye contact, influence crossing decisions. Thus far, there is little information about the effect of drivers’ eye contact on pedestrians’ perceived safety to cross the road. Existing studies treat eye contact as immutable, i.e., it is either present or absent in the whole interaction, an approach that overlooks the effect of the timing of eye contact. We present an online crowdsourced study that addresses this research gap. 1835 participants viewed 13 videos of an approaching car twice, in random order, and held a key whenever they felt safe to cross. The videos differed in terms of whether the car yielded or not, whether the car driver made eye contact or not, and the times when the driver made eye contact. Participants also answered questions about their perceived intuitiveness of the driver’s eye contact behavior. The results showed that eye contact made people feel considerably safer to cross compared to no eye contact (an increase in keypress percentage from 31% to 50% was observed). In addition, the initiation and termination of eye contact affected perceived safety to cross more strongly than continuous eye contact and a lack of it, respectively. The car’s motion, however, was a more dominant factor. Additionally, the driver’s eye contact when the car braked was considered intuitive, and when it drove off, counterintuitive. In summary, this study demonstrates for the first time how drivers’ eye contact affects pedestrians’ perceived safety as a function of time in a dynamic scenario and questions the notion in recent literature that eye contact in road interactions is dispensable. 
These findings may be of interest in the development of automated vehicles (AVs), where the driver of the AV might not always be paying attention to the environment.
147.
Skilled performance in sport often relies on looking at the right place at the right time. Differences in visual behaviour can thus characterise expertise. The current study examined visual attention associated with surfing expertise. Expert (n = 12) and novice (n = 12) surfers viewed 360-degree surfing videos in a head-mounted display. Eye gaze, presence, and engagement were measured. Experts were faster to detect approaching high and low waves, spent more time overall attending to high-performance-value areas of interest (AOIs: pocket, shoulder, lip), and were more physically engaged. Group differences were not found for presence or simulator sickness. These outcomes show that surfing expertise is associated with more optimal visual attention to cues informing wave approach and wave dynamics. Experts look at these areas earlier than novices, and for more time overall. The findings suggest the performance advantages of early planning of motor actions, along with moment-to-moment adjustments while surfing.
148.
Recent computational models of cognition have made good progress in accounting for the visual processes needed to encode external stimuli. However, these models typically incorporate simplified models of visual processing that assume a constant encoding time for all visual objects and do not distinguish between eye movements and shifts of attention. This paper presents a domain-independent computational model, EMMA, that provides a more rigorous account of eye movements and visual encoding and their interaction with a cognitive processor. The visual-encoding component of the model describes the effects of frequency and foveal eccentricity when encoding visual objects as internal representations. The eye-movement component describes the temporal and spatial characteristics of eye movements as they arise from shifts of visual attention. When integrated with a cognitive model, EMMA generates quantitative predictions concerning when and where the eyes move, thus serving to relate higher-level cognitive processes and attention shifts with lower-level eye-movement behavior. The paper evaluates EMMA in three illustrative domains (equation solving, reading, and visual search) and demonstrates how the model accounts for aspects of behavior that simpler models of cognitive and visual processing fail to explain.
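The frequency and eccentricity effects in EMMA's visual-encoding component can be illustrated with a small sketch. The multiplicative functional form and the parameter values below are assumptions drawn from the general description here, not a definitive reproduction of the model:

```python
import math

def encoding_time(frequency, eccentricity, K=0.006, k=0.4):
    """Expected visual encoding time (seconds) for one object.

    Assumed form: T = K * (-log f) * exp(k * eps), so that rarer
    objects (small frequency f in (0, 1]) and objects farther from
    the fovea (large eccentricity eps, in degrees of visual angle)
    take longer to encode. K and k are free scaling parameters;
    the values used here are illustrative.
    """
    return K * (-math.log(frequency)) * math.exp(k * eccentricity)

# A frequent object near fixation encodes faster than a rare one
# in the periphery, matching the qualitative behavior described.
near_frequent = encoding_time(frequency=0.1, eccentricity=1.0)
far_rare = encoding_time(frequency=0.001, eccentricity=5.0)
assert 0 < near_frequent < far_rare
```

Under this form, an object that is never fully encoded before the eyes move away would require re-encoding on a later fixation, which is one way such a model can link encoding difficulty to where and when the eyes move next.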
149.
Recently, Scott, O'Donnell and Sereno reported that words of high valence and arousal are processed with greater ease than neutral words during sentence reading. However, that study unsystematically intermixed emotion words (which label a state of mind, e.g., terrified or happy) and emotion-laden words (which refer to a concept associated with an emotional state, e.g., debt or marriage). We compared the eye-movement record while participants read sentences that contained a neutral target word (e.g., chair) or an emotion word (no emotion-laden words were included). Readers were able to process both positive (e.g., happy) and negative emotion words (e.g., distressed) faster than neutral words. This was true across a wide range of early (e.g., first fixation durations) and late (e.g., total times on the post-target region) measures. Additional analyses revealed that State Trait Anxiety Inventory scores interacted with the emotion effect and that the emotion effect was not due to arousal alone.
150.
This study looked for evidence of biases in the allocation and disengagement of attention in dysphoric individuals. Participants studied images for a recognition memory test while their eye fixations were tracked and recorded. Four image types were presented (depression-related, anxiety-related, positive, neutral) in each of two study conditions. For the simultaneous study condition, four images (one of each type) were presented simultaneously for 10 seconds, and the number of fixations and the total fixation time on each image were measured, similar to the procedures used by Eizenman et al. (2003) and Kellough, Beevers, Ellis, and Wells (2008). For the sequential study condition, four images (one of each type) were presented consecutively, each for 4 seconds; an endogenous cuing procedure (Posner, 1980) was used to measure disengagement of attention. Dysphoric individuals spent significantly less time attending to positive images than non-dysphoric individuals, but there were no group differences in attention to depression-related images. There was also no evidence of a dysphoria-related bias in initial shifts of attention. With respect to the disengagement of attention, dysphoric individuals were slower to disengage their attention from depression-related images. The recognition memory data showed that dysphoric individuals had poorer memory for emotional images, but there was no evidence of a conventional mood-congruent memory bias. Differences in the attentional and memory biases observed in depressed and dysphoric individuals are discussed.