921.
While humans rely on vision during navigation, they are also competent at navigating non-visually. However, non-visual navigation over large distances is not very accurate and can accumulate error. Currently, it is unclear whether this accumulation of error is due to the visual estimate of the distance or to the locomotor production of the distance. In a series of experiments using a blindfolded walking test, we examine whether enhancing the visual estimate of the distance to a previously seen target, through environmental enrichment, visual imagery, or repeated exposure, would improve the accuracy of blindfold navigation across different distances. We also attempt to decrease the visual estimate in order to see whether the opposite effect occurs. Our results indicate that manipulating the static visual distance estimate did not change navigation accuracy to any great extent. The only condition that improved accuracy was repeated exposure to the environment through practice. These results suggest that the error observed during blindfold navigation may be due to the locomotor production of the distance rather than to the visual estimation process.
922.
The anger superiority effect shows that an angry face is detected more efficiently than a happy face. However, it remains controversial whether attentional allocation to angry faces is a purely bottom-up process. We investigated whether the anger superiority effect is influenced by top-down control, in particular working memory (WM). Participants remembered a colour and then searched for differently coloured facial expressions. Merely holding the colour information in WM did not modulate the anger superiority effect. However, when the probability of trials in which the colour of a target face matched the colour held in WM was increased, participants were inclined to direct attention to the target face regardless of its facial expression. Moreover, knowledge of the high probability of valid trials eliminated the anger superiority effect. These results suggest that the anger superiority effect is modulated by top-down effects of WM, the probability of events, and expectancy about these probabilities.
923.
It has been consistently demonstrated that fear-relevant images capture attention preferentially over fear-irrelevant images. Current theory suggests that this faster processing could be mediated by an evolved module that allows certain stimulus features to attract attention automatically, prior to detailed processing of the image. The present research investigated whether simplified images of fear-relevant stimuli would produce interference with target detection in a visual search task. In Experiment 1, silhouettes and degraded silhouettes of fear-relevant animals produced more interference than did the fear-irrelevant images. Experiment 2 compared the effects of fear-relevant and fear-irrelevant distracters and confirmed that the interference produced by fear-relevant distracters was not an effect of novelty. Experiment 3 suggested that fear-relevant stimuli produce interference regardless of whether participants are instructed as to the content of the images. Together, the three experiments indicate that even very simple images of fear-relevant animals can divert attention.
924.
Attentional biases for sadness are integral to cognitive theories of depression, but do not emerge under all conditions. Some researchers have argued that depression is associated with delayed withdrawal from, but not facilitated initial allocation of attention toward, sadness. We compared two types of withdrawal processes in clinically depressed and non-depressed individuals: (1) withdrawal requiring overt eye movements during visual search; and (2) covert disengagement of attention in a modified cueing paradigm. We also examined initial allocation of attention towards emotion in the visual search task, allowing a comparison of withdrawal and facilitation processes. As predicted, we found no evidence of facilitated attention towards sadness in depressed individuals. However, we also found no evidence of depression-linked differences in withdrawal of attention from sadness on either task, offering no support for the theory that depression is associated with delayed withdrawal rather than facilitated initial allocation of attention.
925.
Three robot studies on visual prediction are presented. In all of them, a visual forward model is used that predicts the visual consequences of saccade-like camera movements. This forward model works by remapping visual information between the pre- and postsaccadic retinal images; at an abstract modeling level, this process is closely related to neurons whose visual receptive fields shift in anticipation of saccades. In the robot studies, predictive remapping is used (1) in the context of saccade adaptation, to reidentify target objects after saccades are carried out; (2) in a model of grasping, in which both fixated and non-fixated target objects are processed by the same foveal mechanism; and (3) in a computational architecture for mental imagery, which generates “gripper appearances” internally without real sensory inflow. The robotic experiments and their underlying computational models are discussed with regard to predictive remapping in the brain, transsaccadic memory, and attention. The results confirm that visual prediction is a mechanism that must be considered in the design of artificial cognitive agents and in the modeling of information processing in the human visual system.
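As an illustrative sketch only, and not the implementation used in these robot studies, predictive remapping can be approximated by translating the pre-saccadic retinal image along the planned saccade vector, so that the prediction can later be compared with the actual post-saccadic image (for example, to reidentify a saccade target). The function name and the simple pixel-shift approximation below are assumptions made for illustration.

```python
# Minimal sketch of a "visual forward model" via predictive remapping.
# Assumption: a saccade expressed as a pixel displacement (dy, dx) shifts
# scene content by (-dy, -dx) on the retina; real systems would also handle
# rotation, magnification, and peripheral blur.
import numpy as np

def predict_postsaccadic(pre_image, saccade_px):
    """Predict the post-saccadic retinal image from the pre-saccadic one.

    pre_image  : 2-D numpy array, the pre-saccadic retinal image
    saccade_px : (dy, dx) planned camera/eye movement in pixels
    """
    dy, dx = saccade_px
    h, w = pre_image.shape
    predicted = np.zeros_like(pre_image)
    # Copy the overlapping region, shifted opposite to the saccade direction;
    # pixels that fall outside the image remain unpredicted (zero).
    src_y, src_x = slice(max(0, dy), min(h, h + dy)), slice(max(0, dx), min(w, w + dx))
    dst_y, dst_x = slice(max(0, -dy), min(h, h - dy)), slice(max(0, -dx), min(w, w - dx))
    predicted[dst_y, dst_x] = pre_image[src_y, src_x]
    return predicted

if __name__ == "__main__":
    pre = np.random.rand(64, 64)          # toy pre-saccadic image
    post_hat = predict_postsaccadic(pre, (5, -3))
    print(post_hat.shape)                  # (64, 64), ready to compare with the real post-saccadic image
```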
926.
The general goal of the study was to identify global and specific components in developmental dyslexia using various manipulations based on the rapid automatized naming (RAN) paradigm. In two experiments, we used both factor analysis and the Rate-and-Amount Model to test whether one (or more) global factor(s) and a variety of specific effects contribute to the naming (and visual search) deficits in children with dyslexia.

Results of Experiment 1 indicated the presence of three global components: pictorial naming, detailed orthographic analysis, and visual search. The pictorial naming factor was marked by typical RAN tasks (such as naming colors or objects), independent of set size, but also by a variety of other tasks, including Stroop interference conditions. The detailed orthographic analysis factor accounted for the naming of orthographic stimuli at high set sizes. The visual search factor marked tasks requiring the scanning of visual targets.

Results of Experiment 2 confirmed the separation between the pictorial naming and detailed orthographic analysis factors both in the original sample and in a new group of children. Furthermore, specific effects of frequency, lexicality, and length were shown to contribute to the reading deficit.

Overall, it is proposed that focusing on the profile of both global and specific effects provides a more effective and, at the same time, simpler account of the impairment in dyslexia.
927.
This study investigates the role of acquisition constraints on the short-term retention of spatial configurations in the tactile modality, in comparison with vision. It tests whether the sequential processing of information inherent to the tactile modality could account for the limitation of short-term memory span for tactual-spatial information. In addition, the study investigates developmental aspects of short-term memory for tactual- and visual-spatial configurations. A total of 144 child and adult participants were assessed for their memory span in three conditions: tactual, visual, and visual with a limited field of view. The results showed a lower memory span for tactual-spatial than for visual-spatial configurations, regardless of age. However, the differences in memory span observed between the tactile and visual modalities vanished when the visual processing of information occurred within a limited field of view. These results provide evidence for an impact of acquisition constraints on the retention of spatial information in the tactile modality in both childhood and adulthood.
928.
Using pictures of happy, angry, and neutral faces as stimuli, this study employed a spatial cueing task together with event-related potentials (ERPs) to examine the underlying mechanism and physiological basis of attentional bias in individuals with low self-esteem; that is, to determine from an electrophysiological perspective whether the bias reflects rapid orienting of attention, difficulty in disengaging attention, or rapid orienting accompanied by difficulty in disengagement. Behavioural data showed that both high and low self-esteem individuals responded significantly faster on valid-cue trials than on invalid-cue trials. The ERP data showed that, under invalid cueing, targets following angry faces elicited larger P1 and smaller N1 amplitudes than targets following happy or neutral faces in low self-esteem individuals, with no significant differences under valid cueing; high self-esteem individuals showed no significant effects on N1 or P1 amplitudes. For the late P300 component, invalid cues elicited more positive amplitudes than valid cues, with no significant self-esteem-related differences. These results indicate that the attentional bias of low self-esteem individuals towards evaluative threat information (anger) reflects difficulty in disengaging attention from the threatening information, rather than faster orienting towards it.
929.
Language is more than a source of information for accessing higher-order conceptual knowledge. Indeed, language may determine how people perceive and interpret visual stimuli. Visual processing in linguistic contexts, for instance, mirrors language processing and happens incrementally, rather than through variously oriented fixations over a particular scene. The consequences of this atypical visual processing are yet to be determined. Here, we investigated the integration of visual and linguistic input during a reasoning task. Participants listened to sentences containing conjunctions or disjunctions (Nancy examined an ant and/or a cloud) and looked at visual scenes containing two pictures that either matched or mismatched the nouns. The degree of match between nouns and pictures (referential anchoring) and between their expected and actual spatial positions (spatial anchoring) affected fixations as well as judgments. We conclude that language induces incremental processing of visual scenes, which in turn becomes susceptible to reasoning errors during the language-meaning verification process.
930.
It comes as no surprise that viewing a high-resolution photograph through a screen reduces its clarity. Yet when a coarsely quantized (i.e., pixelated) version of the same photo is seen through a screen, its clarity is increased. Six experiments investigated this illusion of clarity. First, the illusion was quantified by having participants rate the clarity of quantized images with and without a screen (Experiment 1). Interestingly, the illusion occurs both when the wires of the screen are aligned with the blocks of the quantized image and when they are shifted horizontally and vertically (Experiments 2 and 3), casting doubt on the hypothesis that a local filling-in process is involved. The findings that no illusion occurs when the photo is blurred rather than quantized (Experiment 4) and that the illusion is sharply reduced when visual attention is divided (Experiment 5) argue for an image segmentation process that falsely attributes the edges of the quantized blocks to the screen. Finally, the illusion is larger when participants adopt an active rather than a passive cognitive strategy (Experiment 6), pointing to the importance of cognitive control in the illusion.