491.
492.
The present experiment examined the degree to which experience with different stimulus characteristics affects attentional capture, particularly as related to aging. Participants were presented with onset target/color singleton distractor or color singleton target/onset distractor pairs across three experimental sessions. The target/distractor pairs were reversed in the second session such that the target in the first session became the distractor in the second and third sessions. For both young and old adults, previous experience with color as a target-defining feature influenced oculomotor capture by task-irrelevant color distractors. Experience with sudden onsets had the same effect for younger and older adults, although capture effects were substantially larger for onset than for color distractors. Experience-based capture effects diminished relatively rapidly after target- and distractor-defining properties were reversed. The results are discussed in terms of top-down and stimulus-driven effects on age-related differences in attentional control.
493.
The anger superiority effect shows that an angry face is detected more efficiently than a happy face. However, it remains controversial whether attentional allocation to angry faces is a bottom-up process. We investigated whether the anger superiority effect is influenced by top-down control, in particular working memory (WM). Participants remembered a colour and then searched for differently coloured facial expressions. Merely holding the colour information in WM did not modulate the anger superiority effect. However, when the probability of trials in which the colour of a target face matched the colour held in WM was increased, participants were inclined to direct attention to the target face regardless of its facial expression. Moreover, knowledge of the high probability of valid trials eliminated the anger superiority effect. These results suggest that the anger superiority effect is modulated by top-down effects of WM, the probability of events, and expectancy about these probabilities.
494.
It has been consistently demonstrated that fear-relevant images capture attention preferentially over fear-irrelevant images. Current theory suggests that this faster processing could be mediated by an evolved module that allows certain stimulus features to attract attention automatically, prior to detailed processing of the image. The present research investigated whether simplified images of fear-relevant stimuli would produce interference with target detection in a visual search task. In Experiment 1, silhouettes and degraded silhouettes of fear-relevant animals produced more interference than did the fear-irrelevant images. Experiment 2 compared the effects of fear-relevant and fear-irrelevant distracters and confirmed that the interference produced by fear-relevant distracters was not an effect of novelty. Experiment 3 suggested that fear-relevant stimuli produced interference regardless of whether participants were instructed as to the content of the images. The three experiments indicate that even very simplistic images of fear-relevant animals can divert attention.
495.
Skilled (n = 12) and less skilled (n = 12) billiards players participated in 2 experiments in which the relationship between quiet eye duration, expertise, and task complexity was examined in a near and a far aiming task. Quiet eye was defined as the final fixation on the target prior to the initiation of movement. In Experiment 1, skilled performers exhibited longer fixations on the target (quiet eye) during the preparation phase of the action than their less skilled counterparts did. Quiet eye duration increased as a function of shot difficulty and was proportionally longer on successful than on unsuccessful shots for both groups of participants. In Experiment 2, participants executed shots under 3 different time-constrained conditions in which quiet eye periods were experimentally manipulated. Shorter quiet eye periods resulted in poorer performance, irrespective of participant skill level. The authors argue that quiet eye duration represents a critical period for movement programming in the aiming response.
496.
Gazing behavior of 10 three-month-old twin infants (five male and five female) and their mothers during play, bottle-feeding, and spoon-feeding activities was analyzed. Video-tape equipment was used in the home; data were gathered as naturalistically as possible. Mothers looked at infants for a greater percentage of the total time and for longer durations than infants looked at mothers. A consistency-activation personality theory, in which mothers are highly motivated to gaze at infants but infants seek visual interest by looking away from the mother, is suggested to interpret the findings. Both looking and not-looking gazes, and both mean and median measures of central tendency, were shown to be helpful and necessary for the gazing analysis.
497.
Visual letter search performance was investigated in a group of dyslexic adult readers using a task that required detection of a cued letter target embedded within a random five-letter string. Compared to a group of skilled readers, dyslexic readers were significantly slower at correctly identifying targets located in the first and second string positions, showing significantly less leftward facilitation than is typically observed. Furthermore, compared to skilled readers, dyslexic readers showed reduced sensitivity to positional letter frequency. They failed to exhibit significantly faster response times when correctly detecting target letters appearing in the most, compared to the least, frequent letter position within five-letter words, and their response times correlated with positional letter frequency only for the initial, and not the final, letter position. These results are compatible with the SERIOL (sequential encoding regulated by inputs to oscillations within letter units) model of orthographic processing proposed by Whitney and Cornelissen (2005). Furthermore, they suggest that dyslexic readers are less efficient than skilled readers at learning to extract statistical regularities from orthographic input.
498.
Do voluntary and task-driven shifts of attention have the same time course? In order to measure the time needed to voluntarily shift attention, we devised several novel visual search tasks that elicited multiple sequential attentional shifts. Participants could only respond correctly if they attended to the right place at the right time. In control conditions, search tasks were similar, but participants were not required to shift attention in any particular order. Across five experiments, voluntary shifts of attention required 200–300 ms. Control conditions yielded estimates of 35–100 ms for task-driven shifts. We suggest that the slower speed of voluntary shifts reflects the “clock speed of free will”. Wishing to attend to something takes more time than shifting attention in response to sensory input.
499.
Eye movements made by listeners during language-mediated visual search reveal a strong link between visual processing and conceptual processing. For example, upon hearing the word for a missing referent with a characteristic colour (e.g., “strawberry”), listeners tend to fixate a colour-matched distractor (e.g., a red plane) more than a colour-mismatched distractor (e.g., a yellow plane). We ask whether these shifts in visual attention are mediated by the retrieval of lexically stored colour labels. Do children who do not yet possess verbal labels for the colour attribute that spoken and viewed objects have in common exhibit language-mediated eye movements like those made by older children and adults? That is, do toddlers look at a red plane when hearing “strawberry”? We observed that 24-month-olds lacking colour term knowledge nonetheless recognized the perceptual–conceptual commonality between named and seen objects. This indicates that language-mediated visual search need not depend on stored labels for concepts.
500.
Repeated contexts allow us to find relevant information more easily. Learning such contexts has been proposed to depend either on global processing of the repeated contexts or, alternatively, on processing of the local region surrounding the target information. In this study, we measured the extent to which observers were, by default, biased towards processing at a more global or a more local level. The findings showed that the ability to use context to guide search was strongly related to an observer's local/global processing bias. Locally biased observers used context to improve their search more effectively than globally biased observers did. The results suggest that the extent to which context can be used depends crucially on the observer's attentional bias, and thus also on factors and influences that can change this bias.