1,978 results found; search time: 0 ms
181.
Humans can extract a great deal of information about others very quickly. This is partly because the face automatically captures observers’ attention. Specifically, the eyes can attract overt attention. Although it has been reported that not only the eyes but also the nose can capture initial oculomotor movement in Eastern observers, the generalizability of this finding remains unknown. In this study, we applied the “don’t look” paradigm, wherein participants are asked not to fixate on a specific facial region (i.e., eyes, nose, or mouth) during an emotion recognition task with upright (Experiment 1) and inverted (Experiment 2) faces. In both experiments, we found that participants were less able to inhibit the initial part of their fixations to the nose, which can be interpreted as the nose automatically capturing attention. Together with previous studies, these findings suggest that overt attention tends to be attracted to a particular part of the face, namely the nose region, in Eastern observers.
182.
Accurate analysis of data is vital to the validation of interventions. As such, there has been a recent increase in studies evaluating visual analysis training procedures. However, past investigations have not evaluated direct and indirect visual analysis training methods with matched instructional content that was systematically designed. Furthermore, training has rarely included assessment of generalization and maintenance of visual analysis skills. The purpose of the current dissertation study was to compare the effectiveness and efficiency of (a) computer‐based training, (b) lecture formats with and (c) without the opportunity to pause, and (d) a no‐training group to teach visual analysis of AB graphs to university students. To make these formats directly comparable, the instructional content was equated by ensuring information and examples were identical across the three training procedures. Eighty‐three students were randomly assigned to one of the four groups. Results showed that all three training formats produced increases in accurate responding compared to the no‐training group. Visual analysis skills generalized to novel graphs and maintained approximately 2 weeks following all trainings. These results suggest that structured approaches that are carefully designed to train visual analysis are effective and lead to gains that generalize and maintain in the absence of training.
183.
From the very first moments of their lives, infants selectively attend to the visible orofacial movements of their social partners and apply their exquisite speech perception skills to the service of lexical learning. Here we explore how early bilingual experience modulates children's ability to use visible speech as they form new lexical representations. Using a cross‐modal word‐learning task, bilingual children aged 30 months were tested on their ability to learn new lexical mappings in either the auditory or the visual modality. Lexical recognition was assessed either in the same modality as the one used at learning (‘same modality’ condition: auditory test after auditory learning, visual test after visual learning) or in the other modality (‘cross‐modality’ condition: visual test after auditory learning, auditory test after visual learning). The results revealed that like their monolingual peers, bilingual children successfully learn new words in either the auditory or the visual modality and show cross‐modal recognition of words following auditory learning. Interestingly, as opposed to monolinguals, they also demonstrate cross‐modal recognition of words upon visual learning. Collectively, these findings indicate a bilingual edge in visual word learning, expressed in the capacity to form a recoverable cross‐modal representation of visually learned words.
184.
Autism spectrum disorder (ASD) is characterized by difficulties in the social domain, but also by hyper- and hypo-reactivity. Atypical visual behaviours and processing have often been observed. Nevertheless, several similar signs are also identified in other clinical conditions, including cerebral visual impairments (CVI). In the present study, we investigated emotional face categorization in groups of children with ASD and CVI by comparing each group to typically developing individuals (TD) in two tasks. Stimuli were either non-filtered or filtered by low- and high-spatial frequencies (LSF and HSF). All participants completed the autism spectrum quotient score (AQ) and a complete neurovisual evaluation. The results show that while both clinical groups presented difficulties in the emotional face recognition tasks and atypical processing of filtered stimuli, they did not differ from one another. Additionally, autistic traits were observed in the CVI group and, symmetrically, some visual disturbances were present in the ASD group, as measured via the AQ score and a neurovisual evaluation, respectively. The present study suggests the relevance of comparing ASD to CVI by showing that emotional face categorization difficulties should not be considered solely autism-specific but merit investigation for potential dysfunction of the visual processing neural network. These results are of interest from both clinical and research perspectives, indicating that systematic visual examination is warranted for individuals with ASD.
185.
Human visual attention is biased to rapidly detect threats in the environment so that our nervous system can initiate quick reactions. The processes underlying threat detection (and how they operate under cognitive load), however, are still poorly understood. Thus, we sought to test the impact of task-irrelevant threatening stimuli on the salience network and executive control of attention during low and high cognitive load. Participants were exposed to neutral or threatening pictures (with moderate and high arousal levels) as task-irrelevant distractors in near (parafoveal) and far (peripheral) positions while searching for numbers in ascending order in a matrix array. We measured reaction times and recorded eye movements. Our results showed that task-irrelevant distractors primarily influenced behavioural measures during high cognitive load. The distracting effect of threatening images with moderate arousal levels slowed reaction times for finding the first number. However, this slowing was offset by high-arousal threatening stimuli, leading to overall shorter search times. Eye-tracking measures showed that participants fixated on threatening pictures later and for shorter durations compared to neutral images. Together, our results indicate a complex relationship between threats and attention that results not in a unitary bias but in a sequence of effects that unfold over time.
186.
Young adult participants are faster to detect young adult faces in crowds of infant and child faces than vice versa. These findings have been interpreted as evidence for more efficient attentional capture by own-age than other-age faces, but could alternatively reflect faster rejection of other-age than own-age distractors, consistent with the previously reported other-age categorization advantage: faster categorization of other-age than own-age faces. Participants searched for own-age faces in other-age backgrounds or vice versa. Extending the finding to different other-age groups, young adult participants were faster to detect young adult faces in both early adolescent (Experiment 1) and older adult backgrounds (Experiment 2). To investigate whether the own-age detection advantage could be explained by faster categorization and rejection of other-age background faces, participants in Experiments 3 and 4 also completed an age categorization task. Relatively faster categorization of other-age faces was related to relatively faster search through other-age backgrounds on target-absent trials but not target-present trials. These results confirm that other-age faces are more quickly categorized and searched through and that categorization and search processes are related; however, this correlational approach could not confirm or reject the contribution of background face processing to the own-age detection advantage.
187.
Multiple object tracking (MOT) requires visually attending to dynamically moving targets and distractors. This cognitive ability is based on perceptual-attentional processes that are also involved in goal-directed movements. This study aimed to test the hypothesis that MOT affects the motor performance of aiming movements. Therefore, the participants performed pointing movements using their fingers or a computer mouse that controlled the movements of a cursor directed at the targets in the MOT task. The precision of the pointing movements was measured, and it was predicted that a higher number of targets and distractors in the MOT task would result in a lower pointing precision. The results confirmed this hypothesis, indicating that MOT might influence the performance of motor actions. The potential factors underlying this influence are discussed.
188.
An ongoing debate in political psychology is about whether small wording differences have outsized behavioral effects. A leading example is whether subtle linguistic cues embedded in voter mobilization messages dramatically increase turnout. An initial study analyzing two small‐scale field experiments argued that describing someone as a voter (noun) instead of one who votes (verb) increases turnout rates 11 to 14 points because the noun activates a person's social identity as a voter. A subsequent study analyzing a large‐scale field experiment challenged this claim and found no effect. But questions about the initial claim's domain of applicability persist. The subsequent study may not have reproduced the conditions necessary for the psychological phenomenon to occur; specifically, the electoral contexts were not competitive or important enough for the social identity to matter. To address the first of these critiques, as well as other potential explanations for different results between the first two studies, we conduct a large‐scale replication field experiment. We find no evidence that this minor wording change increases turnout levels. This research provides new evidence that the strategy of invoking the self does not appear to consistently increase turnout and calls into question whether subtle linguistic cues have outsized behavioral effects.
189.
In safety-critical domains, intentions frequently need to be delayed until an ongoing task is completed. Research using the delay-execute paradigm showed that interruptions during the delay cause forgetting. However, staff members often handle an initial distraction not by interrupting the ongoing task but by acknowledging the distraction or multitasking. In Experiments 1a and 1b, we observed that, compared to a no-distraction condition, multitasking significantly decreased remembering of intentions and interrupting decreased remembering even further. In Experiment 2, interruptions with context change reduced remembering of intentions compared to uninterrupted delays, while at the same time interruptions without context change improved memory performance compared to uninterrupted delays. However, improved memory performance came at the cost of decreased interrupting-task performance. Theoretically, the results support the contextual cueing mechanism of delay-execute tasks. For safety-critical domains, this means that multitasking, interruptions and context changes can all contribute to forgetting of tasks.
190.
You may find some images easier to remember than others. Recent studies of visual memory have found remarkable levels of consistency for this inter-item variability across observers, suggesting that memorability can be considered an intrinsic image property. The current study replicated and extended previous results, while adopting a more traditional visual long-term memory task with retention intervals of 20 min, one day, and one week, as opposed to the previously used repeat-detection task, which typically relied on short retention intervals (5 min). Our memorability rank scores show levels of consistency across observers in line with those reported in previous research. They correlate strongly with previous quantifications and appear stable over time. Furthermore, we show that the way consistency of memorability scores increases with the number of responses per image follows the Spearman–Brown formula. Interestingly, our results also seem to show an increase in consistency with an increase in retention interval. Supported by simulated data, this effect is attributed to a decrease of extraneous influences on recognition over time. Finally, we also provide evidence for a log-linear, rather than linear, decline of the raw memorability scores over time, with more memorable images declining less strongly.
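The Spearman–Brown formula referenced in the abstract above predicts how the consistency (reliability) of an aggregate score grows as the number of responses per item is scaled up. A minimal sketch in Python; the reliability values used here are hypothetical illustrations, not figures from the study:

```python
def spearman_brown(rho: float, n: float) -> float:
    """Spearman-Brown prophecy formula: predicted reliability when the
    number of observations contributing to a score is scaled by a factor
    of n, given the current reliability rho."""
    return n * rho / (1 + (n - 1) * rho)

# Hypothetical example: if consistency of memorability scores is 0.4
# with k responses per image, doubling to 2k responses predicts:
doubled = spearman_brown(0.4, 2)

# Reliability rises with n but with diminishing returns, approaching 1:
for factor in (1, 2, 4, 8):
    print(factor, round(spearman_brown(0.4, factor), 3))
```

The diminishing-returns shape is why consistency curves of this kind flatten as more observers rate each image, which is the pattern the abstract reports.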
Copyright © Beijing Qinyun Technology Development Co., Ltd.  京ICP备09084417号