931.
Recent research has shown that, in visual search, participants can miss 30–40% of targets when targets appear only rarely (i.e., on 1–2% of trials). Low target prevalence alters the behaviour of the searcher. It can lead participants to quit their search prematurely (Wolfe, Horowitz, & Kenner, 2005), to shift their decision criteria (Wolfe et al., 2007), and/or to make motor or response errors (Fleck & Mitroff, 2007). In this paper we examine whether the low prevalence (LP) effect can be ameliorated if we split the search set in two, spreading the task out over space and/or time. Observers searched for the letter “T” among “L”s. In Experiment 1, the left or right half of the display was presented to participants before the second half. In Experiment 2, items were spatially intermixed, but half of the items were presented first, followed by the second half. Experiment 3 followed the methods of Experiment 2 but allowed observers to correct perceived errors. All three experiments produced robust LP effects, with higher errors at 2% prevalence than at 50% prevalence. Dividing up the display had no beneficial effect on errors. The opportunity to correct errors reduced but did not eliminate the LP effect. Low prevalence continues to elevate errors even when observers are forced to slow down and permitted to correct errors.
932.
When participants search the same letter display repeatedly for different targets, we might expect performance to improve on each subsequent search as they memorize characteristics of the display. However, here we find that search performance improved from a first search to a second search of the same display, but not from the second to a third. This is predicted by a simple model in which search is supported only by a limited-capacity short-term memory for items in the display. To support this model, we show that a short-term memory recency effect is present in both the second and the third search. The magnitude of this effect is the same in both searches, and as a result there is no additional benefit from the second to the third search.
933.
Social stimuli, like faces (Kanwisher, McDermott, & Chun, 1997) or bodies (Downing, Jiang, Shuman, & Kanwisher, 2001), engage specific areas within the visual cortex. Behavioural research reveals an attentional bias to these same stimuli (Ro, Friggel, & Lavie, 2007). The current study examined whether there is an attentional bias towards hands, and whether such a bias is distinct from any bias towards human bodies. In a two-alternative, forced-choice dot-probe task, participants saw two side-by-side pictures for 500 ms. A probe dot then appeared on either side, and participants indicated where the dot appeared. Participants were significantly faster to respond when the probe location coincided with the location of pictures of bodies, hands, or feet, compared to dogs, starfish, hand tools, toaster ovens, inverted hands, or inverted bodies. The results suggest an attentional bias to bodies and body parts, but provide no evidence of an attentional advantage of hands over bodies or feet.
934.
The top-down guidance of visual attention is one of the main factors allowing humans to effectively process vast amounts of incoming visual information. Nevertheless, we still lack a full understanding of the visual, semantic, and memory processes governing visual attention. In this paper, we present a computational model of visual search capable of predicting the most likely positions of target objects. The model does not require a separate training phase, but learns likely target positions incrementally, based on a memory of previous fixations. We evaluate the model on two search tasks and show that it outperforms saliency alone and comes close to the maximal performance of the Contextual Guidance Model (CGM; Torralba, Oliva, Castelhano, & Henderson, 2006; Ehinger, Hidalgo-Sotelo, Torralba, & Oliva, 2009), even though our model does not perform scene recognition or compute global image statistics. The search performance of our model can be further improved by combining it with the CGM.
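The core idea of such a model, learning a prior over target positions incrementally from a memory of previous fixations, can be sketched as follows. This is a minimal illustrative sketch, not the authors' implementation: the discrete location grid, the exponential decay rule, and all names and parameters here are assumptions introduced for illustration.

```python
# Hypothetical sketch: an incremental prior over target positions,
# updated from a memory of previous fixations, with no training phase.

class IncrementalPositionPrior:
    def __init__(self, decay=0.99):
        self.counts = {}   # (row, col) -> exponentially decayed fixation count
        self.decay = decay

    def record_fixation(self, pos):
        # Decay old evidence so recent fixations weigh more, then add the new one.
        for k in self.counts:
            self.counts[k] *= self.decay
        self.counts[pos] = self.counts.get(pos, 0.0) + 1.0

    def prior(self):
        # Normalised probability map over locations fixated so far.
        total = sum(self.counts.values())
        return {k: v / total for k, v in self.counts.items()}

    def most_likely(self):
        p = self.prior()
        return max(p, key=p.get)


model = IncrementalPositionPrior()
for pos in [(1, 2), (1, 2), (3, 0)]:   # targets found mostly at (1, 2)
    model.record_fixation(pos)
print(model.most_likely())  # -> (1, 2)
```

A saliency map could be folded in by multiplying it with this prior and renormalising, which is one simple way to approximate the combination with the CGM described above.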
935.
The detection of emotional expression is particularly important when the expression is directed towards the viewer. We therefore conjectured that the efficiency of visual search for a deviant emotional expression is modulated by gaze direction, one of the primary cues for encoding the focus of social attention. To examine this hypothesis, two visual search tasks were conducted. In the Emotional Face Search, participants were required to detect an emotional expression among distractor faces with neutral expressions; in the Neutral Face Search, they were required to detect a neutral target among emotional distractors. The results revealed that target detection was faster when the target face had a direct rather than an averted gaze for fearful, angry, and neutral targets, but no effect of distractor gaze direction was observed. An additional experiment with multiple display sizes showed a shallower search slope for target faces with direct gaze than for those with averted gaze, indicating that the direct-gaze advantage is attributable to efficient orienting of attention towards target faces. These results indicate that direct gaze facilitates the detection of a target face in a visual scene even when gaze discrimination is not the primary task at hand.
936.
In daily life, people frequently need to observe dynamic objects and temporarily maintain their representations in visual working memory (VWM). The present study explored the mechanism underlying the binding between perceptual features and locations of dynamic objects in VWM. In three experiments, we measured and compared memory performance for feature-location binding of multiple dynamic and static objects. The results showed that feature-location binding was impaired for dynamic objects compared with static objects. The impairment persisted when the global spatial configuration of the objects remained intact during the motion, as well as when the binding task was relatively easy, such as binding between single-feature objects and coarse locations. The results indicate that object features and locations are not maintained in VWM as well-integrated object files; rather, the formation of feature-location binding may require additional processes, which are disrupted by the constant change of locations in dynamic circumstances. We propose a consolidation process as a possible underlying mechanism, and discuss factors that may influence the strength of feature-location binding in dynamic circumstances.
937.
Objective: The ability to perceive facial emotion varies with age. Relative to younger adults (YA), older adults (OA) are less accurate at identifying fear, anger, and sadness, and more accurate at identifying disgust. Because different emotions are conveyed by different parts of the face, changes in visual scanning patterns may account for age-related variability. We investigated the relation between scanning patterns and recognition of facial emotions. Additionally, as frontal-lobe changes with age may affect scanning patterns and emotion recognition, we examined correlations between scanning parameters and performance on executive function tests. Methods: We recorded eye movements from 16 OA (mean age 68.9) and 16 YA (mean age 19.2) while they categorized facial expressions and non-face control images (landscapes), and administered standard tests of executive function. Results: OA were less accurate than YA at identifying fear (p < .05, r = .44) and more accurate at identifying disgust (p < .05, r = .39). OA fixated less than YA on the top half of the face for disgust, fearful, happy, neutral, and sad faces (p values < .05, r values ≥ .38), whereas there was no group difference for landscapes. For OA, executive function was correlated with recognition of sad expressions and with scanning patterns for fearful, sad, and surprised expressions. Conclusion: We report significant age-related differences in visual scanning that are specific to faces. The observed relation between scanning patterns and executive function supports the hypothesis that frontal-lobe changes with age may underlie some changes in emotion recognition.
938.
A large literature suggests that the organization of words in semantic memory, reflecting meaningful relations among words and the concepts to which they refer, supports many cognitive processes, including memory encoding and retrieval, word learning, and inferential reasoning. The co-activation of related items has been proposed as a mechanism by which semantic knowledge influences cognition, and contemporary accounts of semantic knowledge propose that this co-activation is graded, depending on how strongly related the items are in semantic memory. Prior research with adults yielded evidence supporting this prediction; however, there is currently no evidence of graded co-activation early in development. This study provides the first evidence that in children the co-activation of related items depends on their relational strength in semantic memory. Participants (N = 84, age range: 3–9 years) were asked to identify a target (e.g., bone) amid distractors. Children's responses were slowed by the presence of a related distractor (e.g., puppy) relative to unrelated distractors (e.g., flower), suggesting that children co-activated related items upon hearing the name of the target. Importantly, the degree of this co-activation was predicted by the strength of the target–distractor relation, such that distractors more strongly related to the targets slowed children down to a larger extent. These findings have important implications for understanding how organized semantic knowledge affects other cognitive processes across development.
939.
Multiscreening, the simultaneous use of multiple screens, is a relatively understudied phenomenon that may have a large impact on media effects. First, we explored people's viewing behavior while multiscreening by means of an eye-tracker. Second, we examined people's reporting of their own attention by comparing eye-tracked and self-reported attention measures. Third, we assessed the effects of multiscreening on memory by comparing memory for editorial and advertising content when multiscreening (television–tablet) versus single screening. The results of the experiment (N = 177) show that (a) people switched between screens 2.5 times per minute, (b) people were capable of reporting their own attention, and (c) multiscreeners remembered content just as well as single screeners when they devoted sufficient attention to the content.
940.
People tend to grossly overestimate the size of their mirror-reflected face. Although this overestimation bias is robust, little is known about its relationship to self-face perception. In two experiments, we investigated the overestimation bias as a function of the presentation of one's own face (left–right reversed, as in a mirror, or nonreversed, as in a photograph), the identity of the seen face, and prior exposure to a real mirror. For this, we developed a computerized task requiring size estimations of displayed faces. We replicated the observation that people overestimate the size of their mirror-reflected face and showed that the overestimation can be reduced by a brief prior mirror exposure. We also found that left–right reversal modulates the overestimation bias, depending on the perceived face's identity. These data underline the enhanced familiarity of left–right reversed self-faces and the importance of size perception for understanding mirror-reflection processing.