991.
The top-down guidance of visual attention is one of the main factors allowing humans to effectively process vast amounts of incoming visual information. Nevertheless, we still lack a full understanding of the visual, semantic, and memory processes governing visual attention. In this paper, we present a computational model of visual search capable of predicting the most likely positions of target objects. The model does not require a separate training phase, but learns likely target positions incrementally from a memory of previous fixations. We evaluate the model on two search tasks and show that it outperforms saliency alone and comes close to the maximal performance of the Contextual Guidance Model (CGM; Torralba, Oliva, Castelhano, & Henderson, 2006; Ehinger, Hidalgo-Sotelo, Torralba, & Oliva, 2009), even though our model does not perform scene recognition or compute global image statistics. The search performance of our model can be further improved by combining it with the CGM.
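As a rough illustration of the incremental, memory-based learning the abstract describes, the sketch below accumulates past target fixations into a spatial prior and combines it with a bottom-up saliency map. The grid resolution, Gaussian smoothing, and multiplicative combination are assumptions made for the sketch, not the authors' implementation.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

class FixationMemoryPrior:
    """Accumulates previous target fixations into a spatial prior over
    likely target positions; no separate training phase is needed."""

    def __init__(self, grid=(48, 64), sigma=2.0):
        self.counts = np.ones(grid)  # uniform prior before any fixations
        self.sigma = sigma           # smoothing width (an assumption)

    def update(self, row, col):
        # Incremental learning: record one observed target position.
        self.counts[row, col] += 1.0

    def predict(self, saliency):
        # Smooth the fixation counts and combine them with a bottom-up
        # saliency map of the same shape; returns a probability map.
        prior = gaussian_filter(self.counts, self.sigma)
        combined = prior * saliency
        return combined / combined.sum()
```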
992.
The detection of emotional expression is particularly important when the expression is directed towards the viewer. We therefore conjectured that the efficiency of visual search for a deviant emotional expression is modulated by gaze direction, one of the primary cues for encoding the focus of social attention. To examine this hypothesis, two visual search tasks were conducted. In Emotional Face Search, participants were required to detect an emotional expression amongst distractor faces with neutral expressions; in Neutral Face Search, they were required to detect a neutral target among emotional distractors. The results revealed that target detection was faster when the target face had direct rather than averted gaze for fearful, angry, and neutral targets, but no effect of distractor gaze direction was observed. An additional experiment with multiple display sizes showed a shallower search slope for target faces with direct gaze than for those with averted gaze, indicating that the direct-gaze advantage is attributable to efficient orienting of attention towards target faces. These results indicate that direct gaze facilitates detection of a target face in a visual scene even when gaze discrimination is not the primary task at hand.
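The search slope mentioned above is conventionally the slope of reaction time regressed on display size; a shallower slope indicates more efficient search. A minimal sketch with hypothetical RTs (not data from the experiment):

```python
import numpy as np

def search_slope(set_sizes, rts):
    """Least-squares slope of RT (ms) against display size (items)."""
    slope, _intercept = np.polyfit(set_sizes, rts, deg=1)
    return slope

sizes = [4, 8, 12]                           # hypothetical display sizes
print(search_slope(sizes, [620, 660, 700]))  # ~10 ms/item: direct gaze
print(search_slope(sizes, [650, 810, 970]))  # ~40 ms/item: averted gaze
```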
993.
Humans are sensitive to complexity and regularity in patterns (Falk & Konold, 1997; Yamada, Kawabe, & Miyazaki, 2013). The subjective perception of pattern complexity is correlated with algorithmic (or Kolmogorov-Chaitin) complexity as defined in computer science (Li & Vitányi, 2008), but also with the frequency of naturally occurring patterns (Hsu, Griffiths, & Schreiber, 2010). However, the possible mediational role of natural frequencies in the perception of algorithmic complexity remains unclear. Here we reanalyze the data of Hsu et al. (2010) with a mediation analysis and complement their results with a new experiment. We conclude that the human perception of complexity seems partly shaped by natural scene statistics, thereby establishing a link between the perception of complexity and the effect of natural scene statistics.
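A regression-based mediation analysis of the kind referred to here can be sketched as follows; the variable roles (algorithmic complexity as predictor, natural-pattern frequency as mediator, judged complexity as outcome) illustrate the reanalysis but are not Hsu et al.'s exact pipeline.

```python
import numpy as np

def ols(cols, y):
    """Ordinary least squares; returns [intercept, b1, b2, ...]."""
    X = np.column_stack([np.ones(len(y))] + list(cols))
    return np.linalg.lstsq(X, y, rcond=None)[0]

def mediation(x, m, y):
    """Baron-Kenny-style decomposition of the effect of x on y via m."""
    total = ols([x], y)[1]     # total effect of x on y
    a = ols([x], m)[1]         # path a: x -> mediator
    b = ols([x, m], y)[2]      # path b: mediator -> y, controlling for x
    direct = ols([x, m], y)[1] # direct effect of x, controlling for m
    return {"total": total, "indirect": a * b, "direct": direct}
```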
994.
The question of whether words can be identified without spatial attention has been a topic of considerable interest over the last five and a half decades, but the literature has yielded mixed conclusions. The present experiments manipulated the proportion of valid trials (the proportion of trials in which a cue appeared in the same location as the upcoming target word) so as to encourage distributed (50% valid cues; Experiments 1 and 3) or focused (100% valid cues; Experiments 2 and 4) spatial attention in a priming-type paradigm. Participants read aloud a target word, and the impact of a simultaneously presented distractor word was assessed. Semantic and orthographic priming effects were present when conditions promoted distributed spatial attention but absent when conditions promoted focused spatial attention. In contrast, Experiment 5 yielded a distractor-word effect in the 100% valid cue condition when participants identified a colour (Stroop task). We take these results to suggest that (1) spatial attention is a necessary preliminary to visual word recognition, and (2) examining the role of spatial attention in the context of the Stroop task may have few implications for basic processes in reading, because colour processing makes fewer demands on spatial attention than does visual word recognition.
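The validity manipulation amounts to fixing the proportion of trials on which the cue marks the target's location. A sketch of how such a trial list might be generated (the two locations and trial counts are illustrative, not the experiment's design):

```python
import random

def make_trials(n, p_valid, locations=("left", "right")):
    """Cue-target trial list with an exact proportion of valid trials."""
    n_valid = round(n * p_valid)
    validity = [True] * n_valid + [False] * (n - n_valid)
    random.shuffle(validity)
    trials = []
    for valid in validity:
        target = random.choice(locations)
        others = [loc for loc in locations if loc != target]
        cue = target if valid else random.choice(others)
        trials.append({"cue": cue, "target": target, "valid": valid})
    return trials

distributed = make_trials(200, 0.5)  # 50% valid, as in Experiments 1 and 3
focused = make_trials(200, 1.0)      # 100% valid, as in Experiments 2 and 4
```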
995.
A visual stimulus may affect a motor response even though a mask prevents it from being seen. This implies that the sensorimotor system is more susceptible to stimulation than the perceptual system. We report data that run contrary to this intuition. Experiments in which both the observer's perceptual state regarding the presence or absence of a masked stimulus and the motor behaviour elicited by the same stimulus were jointly assessed on a trial-by-trial basis show that masked visual stimulation at constant visibility (d′) has two types of effect on the motor system. When the physical energy of the masked stimulus is weak, it affects the motor response only if it exceeds the observer's perceptual response criterion. Only when the physical energy of the masked stimulus is relatively strong is its impact on the motor response independent of the state of the perceptual system. This indicates that reflexive, “nonconscious” behaviour has a high energy threshold.
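The visibility index d′ and the perceptual response criterion invoked here are the standard signal-detection measures computed from hit and false-alarm rates; the trial counts below are hypothetical.

```python
from scipy.stats import norm

def sdt_measures(hits, misses, false_alarms, correct_rejections):
    """Sensitivity d' = z(H) - z(FA); criterion c = -(z(H) + z(FA)) / 2."""
    h = hits / (hits + misses)                          # hit rate
    fa = false_alarms / (false_alarms + correct_rejections)  # FA rate
    d_prime = norm.ppf(h) - norm.ppf(fa)
    criterion = -0.5 * (norm.ppf(h) + norm.ppf(fa))
    return d_prime, criterion

print(sdt_measures(70, 30, 20, 80))  # -> (about 1.37, about 0.16)
```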
996.
Previous research indicates that visual attention can be automatically captured by sensory inputs that match the contents of visual working memory. However, Woodman and Luck (2007) showed that information in working memory can be used flexibly as a template for either selection or rejection, according to task demands. We report two experiments that extend their work. Participants performed a visual search task while maintaining items in visual working memory. Memory items were presented for either a short or a long exposure duration immediately before the search task, and memory was tested with a change-detection task immediately afterwards. On a random half of trials, the items in memory matched either one distractor in the search task (Experiment 1) or three (Experiment 2). The main result was that matching distractors speeded or slowed target detection depending on whether the memory items had been presented for a long or a short duration, and these effects were more evident with three matching distractors than with one. We conclude that the influence of visual working memory on visual search is indeed flexible, but is not solely a function of task demands. Our results suggest that attentional capture by perceptual inputs matching information in visual working memory involves a fast automatic process that can be overridden by a slower top-down process of attentional avoidance.
997.
Several recent studies have reported changes in perception, attention, and visual working memory when stimuli are near to, rather than far from, the hands, suggesting that such stimuli receive enhanced scrutiny. A mechanism that inhibits the disengagement of attention from objects near the hands, thus forcing a more thorough inspection, has been proposed to underlie these effects, but until now this possibility has been tested in only a limited number of tasks. In the present study we examined whether changes in one's global or local attentional scope are similarly affected by hand proximity. Participants analysed stimuli according to either their global shape or the shape of their constituent local elements while holding their hands near to or far from the stimuli. Switches between global and local processing were markedly slower near the hands, reflecting an attentional mechanism that compels an observer to evaluate objects near the hands more fully by inhibiting changes in attentional scope. Such a mechanism may be responsible for some of the changes observed in other tasks, and it reveals the special status conferred on objects near the hands.
998.
Visual search is speeded when the target repeats from trial to trial, compared to when it changes, suggesting that selective attention learns from previous events. Such intertrial effects are stronger when there is more competition for selection, for example in ambiguous displays where the target is accompanied by a salient distractor. Here we investigate whether this is because competition strengthens the learning itself, or because it allows a learned representation to exert a greater effect. The results point to the latter. Observers looked for a colour-defined target that could repeat or change from trial to trial, and a salient distractor could be present on the current trial, the previous trial, both, or neither. Intertrial effects were greater when a distractor was present on the current trial, suggesting that a primed target representation is more beneficial under conditions of competition. In contrast, distractor presence on the previous trial had no effect whatsoever, indicating that the learning process itself is not affected by competition. This suggests that the source of the learning resides at postselection stages, whereas its effects may occur at the perceptual level.
999.
It comes as no surprise that viewing a high-resolution photograph through a screen reduces its clarity. Yet when a coarsely quantized (i.e., pixelated) version of the same photo is seen through a screen, its clarity is increased. Six experiments investigated this illusion of clarity. First, the illusion was quantified by having participants rate the clarity of quantized images with and without a screen (Experiment 1). Interestingly, the illusion occurs both when the wires of the screen are aligned with the blocks of the quantized image and when they are shifted horizontally and vertically (Experiments 2 and 3), casting doubt on the hypothesis that a local filling-in process is involved. The findings that no illusion occurs when the photo is blurred rather than quantized (Experiment 4) and that the illusion is sharply reduced when visual attention is divided (Experiment 5) argue for an image-segmentation process that falsely attributes the edges of the quantized blocks to the screen. Finally, the illusion is larger when participants adopt an active rather than a passive cognitive strategy (Experiment 6), pointing to the importance of cognitive control in the illusion.
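The contrast between coarse quantization and blurring (Experiment 4) is concrete: quantization replaces each tile with its mean, leaving sharp block edges, whereas a Gaussian blur removes edges altogether. A sketch for a grayscale image array (the block size and blur width are assumptions, not the stimulus parameters used):

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def quantize(img, block=8):
    """Pixelate a grayscale image: each block x block tile becomes its mean."""
    h, w = img.shape
    img = img[:h - h % block, :w - w % block]  # crop to whole tiles
    tiles = img.reshape(h // block, block, w // block, block)
    means = tiles.mean(axis=(1, 3))
    return means.repeat(block, axis=0).repeat(block, axis=1)

def blur(img, sigma=4.0):
    """Control degradation without block edges (as in Experiment 4)."""
    return gaussian_filter(img, sigma)
```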
1000.
Detection of emotional facial expressions has been shown to be more efficient than detection of neutral expressions. However, it remains unclear whether this effect is attributable to visual or emotional factors. To investigate this issue, we conducted two experiments using the visual search paradigm with photographic stimuli. A single target facial expression of anger or happiness was included in a crowd of neutral facial expressions; the anti-expressions of anger and happiness were also presented. Although the anti-expressions produced changes in visual features comparable to those of the emotional facial expressions, they expressed relatively neutral emotion. The results consistently showed that reaction times (RTs) for detecting emotional facial expressions (both anger and happiness) were shorter than those for detecting anti-expressions, and the RTs for detecting the expressions were negatively related to experienced emotional arousal. These results suggest that the efficient detection of emotional facial expressions is attributable not to their visual characteristics but to their emotional significance.
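One common way to construct an anti-expression is to displace each facial landmark from the neutral face by the same magnitude as in the emotional expression, but in the opposite direction, matching low-level visual change while leaving the expressed emotion roughly neutral. Whether the stimuli here were built exactly this way is an assumption; the sketch shows the landmark arithmetic only.

```python
import numpy as np

def anti_expression(neutral, emotional):
    """Mirror landmark displacements about the neutral face:
    anti = neutral - (emotional - neutral)."""
    neutral = np.asarray(neutral, dtype=float)
    emotional = np.asarray(emotional, dtype=float)
    return 2.0 * neutral - emotional
```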