421.
Two experiments investigated the extent to which value-modulated oculomotor capture is subject to top-down control. In these experiments, participants were never required to look at the reward-related stimuli; indeed, doing so was directly counterproductive because it caused omission of the reward that would otherwise have been obtained. In Experiment 1, participants were explicitly informed of this omission contingency. Nevertheless, they still showed counterproductive oculomotor capture by reward-related stimuli, suggesting that this effect is relatively immune to cognitive control. Experiment 2 more directly tested whether this capture is controllable by comparing the performance of participants who either had or had not been explicitly informed of the omission contingency. There was no evidence that value-modulated oculomotor capture differed between the two conditions, providing further evidence that this effect proceeds independently of cognitive control. Taken together, the results of the present research provide strong evidence for the automaticity and cognitive impenetrability of value-modulated attentional capture.
422.
Infants expect people to direct actions toward objects, and they respond to actions directed to themselves, but do they have expectations about actions directed to third parties? In two experiments, we used eye tracking to investigate 1- and 2-year-olds’ expectations about communicative actions addressed to a third party. Experiment 1 presented infants with videos in which an adult (the Emitter) either uttered a sentence or produced non-speech sounds. The Emitter was either face-to-face with another adult (the Recipient) or the two were back-to-back. The Recipient did not respond to any of the sounds. We found that 2-year-olds, but not 1-year-olds, were quicker to look, and looked longer, at the Recipient following speech than non-speech, suggesting that they expected her to respond to speech. These effects were specific to the face-to-face context. Experiment 2 presented 1-year-olds with similar face-to-face exchanges, modified to engage infants and minimize task demands. These infants were quicker to look at the Recipient following speech than non-speech, suggesting that they expected a response to speech. The study suggests that by 1 year of age infants expect communicative actions to be directed at a third-party listener.
423.
The comparison of fractions is a difficult task that can often be facilitated by separately comparing components (numerators and denominators) of the fractions—that is, by applying so-called component-based strategies. The usefulness of such strategies depends on the type of fraction pair to be compared. We investigated the temporal organization and the flexibility of strategy deployment in fraction comparison by evaluating sequences of eye movements in 20 young adults. We found that component-based strategies could account for the response times and the overall number of fixations observed for the different fraction pairs. The analysis of eye movement sequences showed that the initial eye movements in a trial were characterized by stereotypical scanning patterns indicative of an exploratory phase that served to establish the kind of fraction pair presented. Eye movements that followed this phase adapted to the particular type of fraction pair and indicated the deployment of specific comparison strategies. These results demonstrate that participants employ eye movements systematically to support strategy use in fraction comparison. Participants showed a remarkable flexibility to adapt to the most efficient strategy on a trial-by-trial basis. Our results confirm the value of eye movement measurements in the exploration of strategic adaptation in complex tasks.
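To make the idea of component-based strategies concrete, here is a minimal sketch in Python (our illustration, not the authors' model; the particular shortcuts and the function name are assumptions) showing how separate comparisons of numerators and denominators can decide some fraction pairs without computing the fractions' magnitudes:

```python
from fractions import Fraction

def component_based_compare(a_num, a_den, b_num, b_den):
    """Illustrative component-based shortcuts for comparing a_num/a_den
    with b_num/b_den (positive integers assumed; hypothetical sketch).
    Returns -1, 0, or 1 for a < b, a == b, a > b."""
    # Same denominators: the larger numerator wins (e.g., 3/7 vs 5/7).
    if a_den == b_den:
        return (a_num > b_num) - (a_num < b_num)
    # Same numerators: the larger denominator loses (e.g., 3/5 vs 3/8).
    if a_num == b_num:
        return (a_den < b_den) - (a_den > b_den)
    # One fraction dominates on both components (e.g., 5/6 vs 3/8).
    if a_num > b_num and a_den < b_den:
        return 1
    if a_num < b_num and a_den > b_den:
        return -1
    # No component shortcut applies: fall back to exact comparison.
    a, b = Fraction(a_num, a_den), Fraction(b_num, b_den)
    return (a > b) - (a < b)
```

Pairs with common components are decided immediately by the first two branches, while pairs without a usable shortcut force the slower exact fallback, which is one way to picture why the usefulness of such strategies depends on the type of fraction pair.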
424.
Cognitive models assume that social anxiety is associated with, and maintained by, biased information processing, leading to changes in attention allocation that can be measured by examining eye movements. However, little is known about the distribution of attention among positive, neutral, and negative stimuli during a social task, or about the relative importance of positive versus negative biases in social anxiety. In this study, eye movements, subjective state anxiety, and psychophysiological responses of individuals with high trait social anxiety (HSA) and low trait social anxiety (LSA) were measured during a speech task with a pre-recorded audience. The HSA group showed longer total fixation on negative stimuli and shorter total fixation on positive stimuli compared to the LSA group. We observed that the LSA group shifted attention away from negative stimuli, whereas the HSA group showed no differential attention allocation. The total duration of fixation on negative stimuli predicted subjective anxiety ratings. These results point to a negative bias, as well as a lack of a positive bias, in HSA individuals during social threat.
425.
Beyond the observation that both speakers and listeners rapidly inspect the visual targets of referring expressions, it has been argued that such gaze may constitute part of the communicative signal. In this study, we investigate whether a speaker may, in principle, exploit listener gaze to improve communicative success. In the context of a virtual environment where listeners follow computer‐generated instructions, we provide two kinds of support for this claim. First, we show that listener gaze provides a reliable real‐time index of understanding even in dynamic and complex environments, and on a per‐utterance basis. Second, we show that a language generation system that uses listener gaze to provide rapid feedback improves overall task performance in comparison with two systems that do not use gaze. Aside from demonstrating the utility of listener gaze in situated communication, our findings open the door to new methods for developing and evaluating multi‐modal models of situated interaction.
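As a rough illustration of how listener gaze might serve as per-utterance feedback in such a system, the sketch below (all names, thresholds, and the polling scheme are our assumptions, not the system evaluated in the study) monitors which object the listener fixates after an instruction and classifies the outcome so that a generator could confirm, correct, or rephrase:

```python
import time

def classify_gaze_response(get_fixated_object, target, timeout=2.0):
    """Hypothetical gaze-feedback loop for one instruction.

    get_fixated_object: callable returning the object the listener
    currently fixates, or None; target: the intended referent.
    """
    start = time.time()
    while time.time() - start < timeout:
        fixated = get_fixated_object()
        if fixated == target:
            return "confirm"      # listener inspected the intended referent
        if fixated is not None:
            return "correct"      # listener fixates a wrong object
        time.sleep(0.05)          # poll the eye tracker at ~20 Hz
    return "rephrase"             # no decisive fixation within the timeout
```

The point of such a loop is that it yields an understanding signal on every utterance, before the listener acts, in line with the finding that gaze is a reliable real-time index of understanding on a per-utterance basis.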
426.
When we read narrative texts such as novels and newspaper articles, we segment information presented in such texts into discrete events, with distinct boundaries between those events. But do our eyes reflect this event structure while reading? This study examines whether eye movements during the reading of discourse reveal how readers respond online to event structure. Participants read narrative passages as we monitored their eye movements. Several measures revealed that event structure predicted eye movements. In two experiments, we found that both early and overall reading times were longer for event boundaries. We also found that regressive saccades were more likely to land on event boundaries, but that readers were less likely to regress out of an event boundary. Experiment 2 also demonstrated that tracking event structure carries a working memory load. Eye movements provide a rich set of online data to test the cognitive reality of event segmentation during reading.
427.
The human sentence processor is able to make rapid predictions about upcoming linguistic input. For example, upon hearing the verb eat, anticipatory eye‐movements are launched toward edible objects in a visual scene (Altmann & Kamide, 1999). However, the cognitive mechanisms that underlie anticipation remain to be elucidated in ecologically valid contexts. Previous research has, in fact, mainly used clip‐art scenes and object arrays, raising the possibility that anticipatory eye‐movements are limited to displays containing a small number of objects in a visually impoverished context. In Experiment 1, we confirm that anticipation effects occur in real‐world scenes and investigate the mechanisms that underlie such anticipation. In particular, we demonstrate that real‐world scenes provide contextual information that anticipation can draw on: When the target object is not present in the scene, participants infer and fixate regions that are contextually appropriate (e.g., a table upon hearing eat). Experiment 2 investigates whether such contextual inference requires the co‐presence of the scene, or whether memory representations can be utilized instead. The same real‐world scenes as in Experiment 1 are presented to participants, but the scene disappears before the sentence is heard. We find that anticipation occurs even when the screen is blank, including when contextual inference is required. We conclude that anticipatory language processing is able to draw upon global scene representations (such as scene type) to make contextual inferences. These findings are compatible with theories assuming contextual guidance, but posit a challenge for theories assuming object‐based visual indices.
428.
Face perception is characterized by a distinct scanpath. While eye movements are considered functional, there has been no direct evidence that disrupting this scanpath affects face recognition performance. The present experiment investigated the influence of an irrelevant letter-search task (with letter strings arranged horizontally, vertically, or randomly) on subsequent scanning strategies in processing upright and inverted famous faces. Participants’ response times to identify the faces and the directions of their eye movements were recorded. The orientation of the letter search influenced saccadic direction when viewing the face images, such that a direct carryover effect was observed. Following a vertically oriented letter-search task, recognition was slower and less accurate for upright faces, and faster for inverted faces. These results extend the carryover findings of Thompson and Crundall into a novel domain. Crucially, they also indicate that upright and inverted faces are better processed by different eye movements, highlighting the importance of scanpaths in face recognition.
429.
When scrutinizing the visual world, complex and unexpected stimuli often lead to prolonged eye fixations to enhance cognitive processing, likely by temporarily suppressing a planned saccade. The present study examined whether the suppression signal is tightly linked to a specific planned saccade and whether it conforms to the viewer's intention. A novel Go/No-go task was devised in which participants made consecutive saccades to fixate a stimulus appearing across the screen's horizontal meridian in 4° steps. At times, the features of the stimulus (colour and/or shape) were altered when it reappeared at a new location. Participants had to suppress the saccade that would otherwise leave the stimulus if its features matched instructed criteria. Saccade suppression was measured as a reduced probability of saccades towards and away from a target stimulus. Results show both correct suppression of saccades leaving the target and erroneous suppression of saccades towards it. The erroneous suppression was initially observed for any change in features but was later lifted. The suppression shortened the length of saccades leaving a target but not of those towards it. The initial suppression while previewing the target appears to be based on an expedited but incomplete evaluation of the visual stimulus, and is not linked to any specific saccade. These properties might reflect the stage of ocular decision-making at which the suppression signal is generated. They also account for the “peripheral-to-foveal” effect on eye movements in reading.
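As we read the paradigm, the trial contingency can be summarized by the toy sketch below (the feature names and the joint-match rule are our assumptions; the abstract says "colour and/or shape" without specifying whether both had to match):

```python
def classify_saccade(new_features, no_go_criteria):
    """Toy Go/No-go rule: withhold the saccade that would leave the
    stimulus when its (possibly altered) features match the instructed
    criteria; otherwise saccade onward along the meridian."""
    if all(new_features[k] == v for k, v in no_go_criteria.items()):
        return "withhold saccade"
    return "saccade to next location"

# Example with hypothetical values: instructed to stop on a red circle.
print(classify_saccade({"colour": "red", "shape": "circle"},
                       {"colour": "red", "shape": "circle"}))
# -> withhold saccade
```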
430.
During reading, saccadic eye movements are produced to move the high-acuity foveal region of the eye to words of interest for efficient word processing. Distributions of saccadic landing positions peak close to a word's centre but are relatively broad compared to those in simple oculomotor tasks. Moreover, landing-position distributions are modulated both by the distance of the launch site and by saccade type (e.g., one-step saccade, word skipping, refixation). Here we present a mathematical model for the computation of a saccade aimed at a given target word. Its two fundamental assumptions concern (1) the sensory computation of the word centre from inter-word spaces and (2) the integration of sensory information and a priori knowledge using Bayesian estimation. Our model was developed on data from a large corpus of eye movements recorded during normal reading. We demonstrate that the model simultaneously accounts for a systematic shift of mean saccadic landing position with increasing launch-site distance and for qualitative differences between one-step saccades (i.e., from a given word to the next word) and word-skipping saccades.
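Assumption (2) can be illustrated with a standard Gaussian cue-combination scheme (our sketch; the paper's actual likelihood and prior may differ). Let s be the noisy sensory estimate of the word centre, with sensory noise sigma_s(d) that grows with launch-site distance d, and let the prior over landing positions be Gaussian with mean mu_p and variance sigma_p^2. The posterior mean is then a reliability-weighted mixture:

```latex
\hat{x} \;=\; w\,s + (1 - w)\,\mu_p,
\qquad
w \;=\; \frac{\sigma_p^{2}}{\sigma_p^{2} + \sigma_s^{2}(d)}
```

Because w shrinks as sigma_s(d) grows, estimates from distant launch sites are pulled toward the prior mean mu_p, which is one way such a model can produce the systematic launch-site shift of mean landing positions described above.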