51.
In three experiments, we used eyetracking to investigate the time course of biases in looking behaviour during visual decision making. Our study replicated and extended prior research by Shimojo, Simion, Shimojo, and Scheier (2003) and Simion and Shimojo (2006). Three groups of participants performed forced-choice decisions in a two-alternative free-viewing condition (Experiment 1a), a two-alternative gaze-contingent window condition (Experiment 1b), and an eight-alternative free-viewing condition (Experiment 1c). Participants viewed photographic art images and were instructed to select the one that they preferred (preference task) or the one that they judged to have been photographed most recently (recency task). Across experiments and tasks, we demonstrated a robust bias towards the chosen item in gaze duration, gaze frequency, or both. The present gaze bias effect was less task specific than those reported previously. Importantly, in the eight-alternative condition we demonstrated a very early gaze bias effect, which rules out a postdecision, response-related explanation.
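The two gaze bias measures referred to above can be made concrete with a minimal sketch that computes, for a single trial, the chosen item's share of total dwell time (gaze duration bias) and of fixation count (gaze frequency bias). The Fixation structure, field names, and example values below are assumptions for illustration only, not the authors' analysis code.

```python
# Minimal sketch of the two gaze bias measures described above:
# gaze duration bias (the chosen item's share of dwell time) and
# gaze frequency bias (its share of fixations). All names and values
# are illustrative assumptions, not the study's analysis pipeline.
from dataclasses import dataclass

@dataclass
class Fixation:
    item: str         # which alternative the fixation landed on
    duration_ms: float

def gaze_bias(fixations: list[Fixation], chosen: str) -> tuple[float, float]:
    """Return (duration bias, frequency bias) for one trial.

    Each bias is the chosen item's share of total dwell time / fixation
    count; values above 1/n_alternatives indicate a bias toward the
    eventually chosen item.
    """
    total_time = sum(f.duration_ms for f in fixations)
    total_count = len(fixations)
    chosen_time = sum(f.duration_ms for f in fixations if f.item == chosen)
    chosen_count = sum(1 for f in fixations if f.item == chosen)
    return chosen_time / total_time, chosen_count / total_count

# Example two-alternative trial: the chosen item receives about 69% of
# dwell time and 60% of fixations.
trial = [Fixation("A", 310), Fixation("B", 190), Fixation("A", 240),
         Fixation("B", 150), Fixation("A", 220)]
print(gaze_bias(trial, chosen="A"))
```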
52.
People look longer at things that they choose than at things they do not choose. How much of this tendency (the gaze bias effect) is due to a liking effect, and how much to the information-encoding aspect of the decision-making process? Do these processes compete under certain conditions? We monitored eye movements during a visual decision-making task with four decision prompts: like, dislike, older, and newer. The gaze bias effect was present during the first dwell in all conditions except the dislike condition, in which the preference to look at the liked item and the goal of identifying the disliked item compete. Colour content (whether a photograph was colour or black-and-white), not decision type, influenced the gaze bias effect in the older/newer decisions, because colour is a relevant feature for such decisions. These interactions appear early in the eye movement record, indicating that gaze bias is influenced during information encoding.
53.
Young infants can integrate auditory and visual information, and their speech perception is influenced by visual cues; by 5 months of age, infants detect mismatches between mouth articulations and speech sounds. From 6 months of age, infants gradually shift their attention away from the eyes and towards the mouth in articulating faces, potentially to benefit from the intersensory redundancy of audiovisual (AV) cues. Using eye tracking, we investigated whether 6- to 9-month-olds showed a similar age-related increase in looking to the mouth while observing congruent and/or redundant versus mismatched and non-redundant speech cues. Participants distinguished between congruent and incongruent AV cues, as reflected by the amount of looking to the mouth. They showed an age-related increase in attention to the mouth, but only for non-redundant, mismatched AV speech cues. Our results highlight the role of intersensory redundancy and audiovisual mismatch mechanisms in facilitating the development of speech processing in infants under 12 months of age.
54.
While visual saliency may sometimes capture attention, the guidance of eye movements in search is often dominated by knowledge of the target. How is the search for an object influenced by the saliency of an adjacent distractor? Participants searched for a target amongst an array of objects; distractor saliency affected response time and the speed at which targets were found. Saliency did not predict the order in which objects in target-absent trials were fixated. The within-target landing position was distributed around a modal position close to the centre of the object. Saliency did not affect this position, the latency of the initial saccade, or the likelihood of the distractor being fixated, suggesting that saliency affects the allocation of covert attention rather than just eye movements.
55.
56.
Latent semantic analysis (LSA) and transitional probability (TP), two computational methods used to derive lexical semantic representations from large text corpora, were employed to examine the effects of word predictability on Chinese reading. Participants' eye movements were monitored, and the influences of word complexity (number of strokes), word frequency, and word predictability on different eye movement measures (first-fixation duration, gaze duration, and total time) were examined. We found effects of TP on first-fixation duration and gaze duration, and of LSA on total time. The results suggest that TP reflects an early stage of lexical processing, while LSA reflects a later stage.
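As a rough illustration of the two predictability metrics named above, the sketch below computes a bigram transitional probability and an LSA-style similarity from a toy English corpus. It shows the general techniques only, assuming simple whitespace tokenization and a raw-count term-document matrix; the corpus, the number of retained dimensions, and all names are placeholders, not the materials or pipeline used in the study.

```python
# Toy illustration of transitional probability (TP) and LSA. This is a
# sketch of the general techniques, not the study's corpus pipeline.
from collections import Counter
import numpy as np

docs = [
    "the cat chased the mouse",
    "the dog chased the cat",
    "the mouse ate the cheese",
]
tokens = [d.split() for d in docs]

# --- Transitional probability: P(next word | previous word) ------------
bigrams = Counter((w1, w2) for doc in tokens for w1, w2 in zip(doc, doc[1:]))
unigrams = Counter(w for doc in tokens for w in doc[:-1])

def transitional_probability(prev: str, nxt: str) -> float:
    return bigrams[(prev, nxt)] / unigrams[prev] if unigrams[prev] else 0.0

print(transitional_probability("the", "cat"))   # 2/6 on this toy corpus

# --- LSA: truncated SVD of the term-document count matrix --------------
vocab = sorted({w for doc in tokens for w in doc})
tdm = np.array([[doc.count(w) for doc in tokens] for w in vocab], float)
U, S, Vt = np.linalg.svd(tdm, full_matrices=False)
k = 2                                  # keep the top-k latent dimensions
word_vecs = U[:, :k] * S[:k]           # word representations in LSA space

def lsa_similarity(w1: str, w2: str) -> float:
    a, b = word_vecs[vocab.index(w1)], word_vecs[vocab.index(w2)]
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

print(lsa_similarity("cat", "mouse"))
```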
57.
Three eye-tracking experiments investigated the influence of stored colour knowledge, perceived surface colour, and conceptual category of visual objects on language-mediated overt attention. Participants heard spoken target words whose concepts are associated with a diagnostic colour (e.g., “spinach”; spinach is typically green) while their eye movements were monitored to (a) objects associated with a diagnostic colour but presented in black and white (e.g., a black-and-white line drawing of a frog), (b) objects associated with a diagnostic colour but presented in an appropriate but atypical colour (e.g., a colour photograph of a yellow frog), and (c) objects not associated with a diagnostic colour but presented in the diagnostic colour of the target concept (e.g., a green blouse; blouses are not typically green). We observed that colour-mediated shifts in overt attention are primarily due to the perceived surface attributes of the visual objects rather than stored knowledge about the typical colour of the object. In addition, our data reveal that conceptual category information is the primary determinant of overt attention if both conceptual category and surface colour competitors are copresent in the visual environment.
58.
Autism spectrum disorder (ASD) and typically developed (TD) adult participants viewed pairs of scenes for a simple “spot the difference” (STD) and a complex “which one's weird” (WOW) task. There were no group differences in the STD task. In the WOW task, the ASD group took longer to respond manually and to begin fixating the target “weird” region. Additionally, as indexed by the first-fixation duration into the target region, the ASD group failed to “pick up” immediately on what was “weird”. The findings are discussed with reference to the complex information processing theory of ASD (Minshew & Goldstein, 1998).
59.
Imagining a counterfactual world using conditionals (e.g., If Joanne had remembered her umbrella . . .) is common in everyday language. However, such utterances are likely to involve fairly complex reasoning processes to represent both the explicit hypothetical conjecture and its implied factual meaning. Online research into these mechanisms has so far been limited. The present paper describes two eye movement studies that investigated the time course with which comprehenders can set up and access factual inferences based on a realistic counterfactual context. Adult participants were eye-tracked while they read short narratives, in which a context sentence set up a counterfactual world (If . . . then . . .), and a subsequent critical sentence described an event that was either consistent or inconsistent with the implied factual world. A factual consistent condition (Because . . . then . . .) was included as a baseline of normal contextual integration. Results showed that within a counterfactual scenario, readers quickly inferred the implied factual meaning of the discourse. However, initial processing of the critical word led to clear, but distinct, anomaly detection responses for both contextually inconsistent and consistent conditions. These results provide evidence that readers can rapidly make a factual inference from a preceding counterfactual context, despite maintaining access to both counterfactual and factual interpretations of events.
60.
Eye-movement control during reading depends on foveal and parafoveal information. If the parafoveal preview of the next word is suppressed, reading is less efficient. A linear mixed model (LMM) reanalysis of McDonald (2006) confirmed his observation that preview benefit may be limited to parafoveal words that have been selected as the saccade target. Going beyond the original analyses, in the same LMM we examined how the preview effect (i.e., the difference in single-fixation duration, SFD, between random-letter and identical previews) depends on the gaze duration on the pretarget word and on the amplitude of the saccade moving the eye onto the target word. There were two key results: (a) the shorter the saccade amplitude (i.e., the larger the preview space), the shorter the subsequent SFD with an identical preview; this association was not observed with a random-letter preview. (b) However, the longer the gaze duration on the pretarget word, the longer the subsequent SFD on the target, with the difference between random-letter string and identical previews increasing with preview time. A third pattern, an increasing cost of a random-letter string in the parafovea with shorter saccade amplitudes, was observed for target gaze durations. Thus, LMMs revealed that preview effects, which are typically summarized under “preview benefit”, are a complex mixture of preview cost and preview benefit and vary with preview space and preview time. The consequence for reading is that parafoveal preview may not only facilitate but also interfere with lexical access.
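The kind of model described above can be sketched as follows: an LMM for single-fixation duration with preview condition, saccade amplitude, and pretarget gaze duration as interacting predictors and a by-subject random intercept. The data below are synthetic, and the column names, model formula, and random-effects structure are assumptions for illustration, not the actual reanalysis of McDonald (2006).

```python
# Minimal sketch of an LMM of single-fixation duration (SFD) on the target
# word, with preview condition crossed with saccade amplitude and pretarget
# gaze duration, and a random intercept per subject. Synthetic data and
# placeholder names; illustrative only.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 400
df = pd.DataFrame({
    "subject": rng.integers(0, 20, n).astype(str),
    "preview": rng.choice(["identical", "random_letters"], n),
    "amplitude": rng.uniform(2, 10, n),     # saccade amplitude (characters)
    "gaze_pre": rng.uniform(150, 450, n),   # gaze duration on pretarget word (ms)
})

# Build SFDs with the qualitative pattern described above: a preview cost
# for random-letter strings that grows with preview time (pretarget gaze).
cost = (df["preview"] == "random_letters").astype(float)
df["sfd"] = (200 + 25 * cost + 0.1 * df["gaze_pre"] * cost
             + 3 * df["amplitude"] + rng.normal(0, 20, n))

# Random intercept for subject; interactions of preview with amplitude
# (preview space) and with pretarget gaze duration (preview time).
model = smf.mixedlm("sfd ~ preview * amplitude + preview * gaze_pre",
                    data=df, groups=df["subject"])
print(model.fit().summary())
```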