261.
ABSTRACT

Differences in eye movement patterns are often found when comparing passive viewing paradigms to active engagement in everyday tasks. Arguably, investigations into visuomotor control should therefore be most useful when conducted in settings that incorporate the intrinsic link between vision and action. We present a study that compares oculomotor behaviour and hazard reaction times across a simulated driving task and a comparable, but passive, video-based hazard perception task. We found that participants scanned the road less during the active driving task and fixated closer to the front of the vehicle. Participants were also slower to detect the hazards in the driving task. Our results suggest that the interactivity of simulated driving places greater demand upon the visual and attentional systems than simply viewing driving movies does. We offer insights into why these differences occur and explore the possible implications of such findings within the wider context of driver training and assessment.
262.
Two experiments examined how well the long-term visual memories of objects that are encountered multiple times during visual search are updated. Participants searched for a target two or four times (e.g., white cat) among distractors that shared the target's colour or category, or were unrelated to it, while their eye movements were recorded. Following the search, a surprise visual memory test was given. With additional object presentations, only target memory reliably improved; distractor memory was unaffected by the number of object presentations. Regression analyses using the eye movement variables as predictors indicated that the number of object presentations predicted target memory, with no additional impact of other viewing measures. In contrast, distractor memory was best predicted by the viewing pattern on the distractor objects. Finally, Experiment 2 showed that target memory was influenced by the number of target object presentations, not the number of searches for the target. Together, these experiments demonstrate visual memory differences between target and distractor objects and may provide insight into representational differences in visual memory.
263.
The current study investigated the size of the region around the current point of gaze from which viewers can take in information when searching for objects in real-world scenes. Visual span size was estimated using the gaze-contingent moving window paradigm. Experiment 1 featured window radii measuring 1, 3, 4, 4.7, 5.4, and 6.1°. Experiment 2 featured six window radii measuring between 5 and 10°. Each scene occupied a 24.8 × 18.6° field of view. Inside the moving window, the scene was presented in high resolution. Outside the window, the scene image was low-pass filtered to impede the parsing of the scene into constituent objects. Visual span was defined as the window size at which object search times became indistinguishable from search times in the no-window control condition; this occurred with windows measuring 8° and larger. Notably, as long as central vision was fully available (window radii ≥ 5°), the distance traversed by the eyes through the scene to the search target was comparable to baseline performance. However, to move their eyes to the target, viewers made shorter saccades, requiring more fixations to cover the same image space, and thus more time. Moreover, a gaze-data-based decomposition of search time revealed disruptions in specific subprocesses of search. In addition, nonlinear mixed-model analyses demonstrated reliable individual differences in visual span size and in parameters of the search time function.
264.
Prior research has suggested that attention is determined by exploiting what is known about the most valid predictors of outcomes and by exploring those stimuli that are associated with the greatest degree of uncertainty about subsequent events. Previous studies of human contingency learning have revealed evidence for one or the other of these processes, but differences in the designs and procedures of these studies make it difficult to pinpoint the crucial determinant of whether attentional exploitation or exploration will dominate. Here we present two studies in which we systematically manipulated both the predictiveness of cues and uncertainty regarding the outcomes with which they were associated. This allowed us to demonstrate, for the first time, evidence of both attentional exploration and exploitation within the same experiment. Moreover, while the effect of predictiveness persisted, influencing the rate of novel learning about the same cues in a second stage, the effect of uncertainty did not. This suggests that attentional exploration is more sensitive to a change of context than is exploitation. The pattern of data is simulated with a hybrid attentional model.
265.
In the present paper, we investigated whether observation of bodily cues—that is, hand action and eye gaze—can modulate the onlooker's visual perspective taking. Participants were presented with scenes of an actor gazing at an object (or straight ahead) and grasping an object (or not) in a 2 × 2 factorial design, plus a control condition with no actor in the scene. In Experiment 1, two groups of subjects were explicitly required to judge the left/right location of the target from their own (egocentric group) or the actor's (allocentric group) point of view, whereas in Experiment 2 participants did not receive any instruction on the point of view to assume. In both experiments, allocentric coding (i.e., the actor's point of view) was triggered when the actor grasped the target, but not when he gazed towards it or adopted a neutral posture. In Experiment 3, we demonstrated that the actor's gaze, but not his action, affected participants' attention orienting. The different effects of others' grasping and eye gaze on observers' behaviour demonstrate that specific bodily cues convey distinctive information about other people's intentions.
266.
Task demands and individual differences have been reliably linked to word skipping during reading. Such differences in fixation probability may imply a selection effect for multivariate analyses of eye-movement corpora if selection effects correlate with word properties of skipped words. For example, with fewer fixations on short and highly frequent words, the power to detect parafoveal-on-foveal effects is reduced. We demonstrate that increasing the fixation probability on function words, via a manipulation of the expected difficulty and frequency of questions, reduces an age difference in skipping probability (i.e., older adults become comparable to young adults) and helps to uncover significant parafoveal-on-foveal effects in this group of older adults. We discuss implications for comparing results of eye-movement research based on multivariate analysis of corpus data with those from display-contingent manipulations of target words.
267.
Perceived gaze in faces is an important social cue that influences spatial orienting of attention. In three experiments, we examined whether the social relevance of gaze direction modulated spatial interference in response selection, using three different stimuli: faces, isolated eyes, and symbolic eyes (Experiments 1, 2, and 3, respectively). Each experiment employed a variant of the spatial Stroop paradigm in which face location and gaze direction were put into conflict. Results showed a reverse congruency effect between face location to the right or left of fixation and gaze direction only for stimuli with a social meaning to participants (Experiments 1 and 2). The opposite was observed for the nonsocial stimuli used in Experiment 3. Results are explained as facilitation in response to eye contact.
268.
269.
Observers frequently remember seeing more of a scene than was shown (boundary extension). Does this reflect a lack of eye fixations to the boundary region? Single-object photographs were presented for 14–15 s each. Main objects were either whole or slightly cropped by one boundary, creating a salient marker of boundary placement. All participants expected a memory test, but only half were informed that boundary memory would be tested. Participants in both conditions made multiple fixations to the boundary region and the cropped region during study. Demonstrating the importance of these regions, test-informed participants fixated them sooner, longer, and more frequently. Boundary ratings (Experiment 1) and border adjustment tasks (Experiments 2–4) revealed boundary extension in both conditions. The error was reduced, but not eliminated, in the test-informed condition. Surprisingly, test knowledge and multiple fixations to the salient cropped region, during study and at test, were insufficient to overcome boundary extension on the cropped side. Results are discussed within a traditional visual-centric framework versus a multisource model of scene perception.
270.
Previous work has found that repetitive auditory stimulation (click trains) increases the subjective velocity of subsequently presented moving stimuli. We ask whether the effect of click trains is stronger for retinal velocity signals (produced when the target moves across the retina) or for extraretinal velocity signals (produced during smooth pursuit eye movements, when target motion across the retina is limited). In Experiment 1, participants viewed leftward or rightward moving single-dot targets, travelling at speeds from 7.5 to 17.5 deg/s. They estimated velocity at the end of each trial. Prior presentation of auditory click trains increased estimated velocity, but only in the pursuit condition, where estimates were based on extraretinal velocity signals. Experiment 2 generalized this result to vertical motion. Experiment 3 found that the effect of clicks during pursuit disappeared when participants tracked across a visually textured background that provided strong local motion cues. Together these results suggest that auditory click trains selectively affect extraretinal velocity signals. This novel finding suggests that the cross-modal integration required for auditory click trains to influence subjective velocity operates at later stages of processing.

Copyright © Beijing Qinyun Technology Development Co., Ltd.  京ICP备09084417号