961.
Facial expression recognition in the wild is a challenging problem in computer vision research because of varying circumstances such as pose dissimilarity, age, lighting conditions, and occlusions. Numerous methods, such as point tracking, piecewise affine transformation, compact Euclidean space embedding, modified local directional patterns, and dictionary-based component separation, have been applied to this problem. In this paper, we propose a deep-learning-based automatic wild facial expression recognition system in which we implement an incremental active learning framework built on the VGG16 model developed by the Visual Geometry Group. We gathered a large amount of unlabeled facial expression data from Intelligent Technology Lab (ITLab) members at Inha University, Republic of Korea, to train the framework. The data were collected under five conditions: good lighting, average lighting, close to the camera, far from the camera, and natural lighting, and with seven facial expressions: happy, disgusted, sad, angry, surprised, fearful, and neutral. Face detection in our framework is adapted from a multi-task cascaded convolutional network detector. Repeating the entire process yields progressively better performance. Our experimental results demonstrate that incremental active learning improves the starting baseline accuracy from 63% to an average of 88% on the ITLab dataset in a wild environment. We also present extensive results on facial expression benchmarks such as the Extended Cohn-Kanade dataset, as well as the ITLab face dataset captured in the wild, and obtain better performance than state-of-the-art approaches.
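The incremental active learning loop summarized above lends itself to a compact sketch: fine-tune VGG16 on the currently labeled pool, score the unlabeled pool, and route the least confident images to human annotators before repeating. The sketch below is a minimal illustration under assumed settings; the classifier head, the least-confidence query rule, and all hyperparameters (epochs, batch size, query size) are illustrative assumptions, not the authors' exact configuration.

```python
# Minimal sketch of one round of incremental active learning with VGG16.
# The head, uncertainty rule, and hyperparameters are illustrative assumptions.
import numpy as np
import tensorflow as tf

NUM_CLASSES = 7  # happy, disgusted, sad, angry, surprised, fearful, neutral

def build_classifier():
    base = tf.keras.applications.VGG16(weights="imagenet", include_top=False,
                                       input_shape=(224, 224, 3))
    x = tf.keras.layers.GlobalAveragePooling2D()(base.output)
    out = tf.keras.layers.Dense(NUM_CLASSES, activation="softmax")(x)
    model = tf.keras.Model(base.input, out)
    model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model

def active_learning_round(model, x_labeled, y_labeled, x_unlabeled,
                          query_size=100):
    """Fine-tune on the labeled pool, then pick the least-confident
    unlabeled samples for human annotation."""
    model.fit(x_labeled, y_labeled, epochs=3, batch_size=32, verbose=0)
    probs = model.predict(x_unlabeled, verbose=0)
    confidence = probs.max(axis=1)                     # top-class probability
    query_idx = np.argsort(confidence)[:query_size]    # most uncertain first
    return query_idx  # label these images, move them to the labeled pool, repeat
```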
962.
The movements of both people and inanimate objects are intimately bound up with physical causality. Furthermore, in contrast to object movements, causal relationships between limb movements controlled by humans and their body displacements uniquely reflect agency and goal-directed actions in support of social causality. To investigate the development of sensitivity to causal movements, we examined the looking behavior of infants between 9 and 18 months of age when viewing movements of humans and objects. We also investigated whether individual differences in gender and gross motor functions may impact the development of visual preferences for causal movements. In Experiment 1, infants were presented with walking stimuli showing either normal body translation or a “moonwalk” that reversed the horizontal motion of body translations. In Experiment 2, infants were presented with unperformable actions beyond infants’ gross motor functions (i.e., long jump) either with or without ecologically valid body displacement. In Experiment 3, infants were presented with rolling movements of inanimate objects that either complied with or violated physical causality. We found that female infants showed longer looking times to normal walking stimuli than to moonwalk stimuli, but did not differ in their looking times to movements of inanimate objects or unperformable actions. In contrast, male infants did not show sensitivity to causal movement for either category. Additionally, female infants looked longer at social stimuli of human actions than male infants did. Under the tested circumstances, our findings indicate that female infants have developed a sensitivity to causal consistency between limb movements and body translations of biological motion, but only for actions with prior visual and motor exposure, and that they demonstrate a preference for social information.
963.
This study examined the validity and reliability of the short-form Perceived Stress Scale (PSS-10) among Chinese college students. A total of 1762 college students were surveyed with the PSS-10, the General Health Questionnaire (GHQ-12), the Life Orientation Test-Revised (LOT-R), the General Self-Efficacy Scale (GSES), and the Connor-Davidson Resilience Scale (CD-RISC). The PSS-10 items were of good quality. Exploratory and confirmatory analyses supported a stable two-factor latent structure that fit the observed data well, and the criterion-related validity of the PSS-10 was satisfactory. The internal consistency coefficients of the total scale and of the helplessness and self-efficacy-belief factors met psychometric requirements; the two-week test-retest reliabilities exceeded 0.6, and the scale discrimination coefficients were all above 0.9. The short-form Perceived Stress Scale therefore shows good reliability and validity among Chinese college students and can serve as an effective measure of the degree of stress they perceive or experience.
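The internal-consistency figures reported above are the kind of statistic the following sketch computes. It is a generic Cronbach's alpha calculation on a simulated item matrix; the respondent count, item count, and score range are invented and do not reproduce the PSS-10 data.

```python
# Minimal sketch: Cronbach's alpha for a k-item scale.
# The simulated item matrix is illustrative, not the PSS-10 data.
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """items: (n_respondents, n_items) matrix of item scores."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()   # sum of item variances
    total_var = items.sum(axis=1).var(ddof=1)     # variance of total scores
    return (k / (k - 1)) * (1 - item_vars / total_var)

rng = np.random.default_rng(0)
fake_scores = rng.integers(0, 5, size=(200, 10))  # 200 respondents, 10 items
print(round(cronbach_alpha(fake_scores.astype(float)), 3))
```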
964.
965.
Somatic symptom disorder (SSD) and illness anxiety disorder (IAD) are two new diagnoses introduced in the DSM-5. There is a need for reliable instruments to facilitate the assessment of these disorders. We therefore developed a structured diagnostic interview, the Health Preoccupation Diagnostic Interview (HPDI), which we hypothesized would reliably differentiate between SSD, IAD, and no diagnosis. Persons with clinically significant health anxiety (n = 52) and healthy controls (n = 52) were interviewed using the HPDI. Diagnoses were then compared with those made by an independent assessor, who listened to audio recordings of the interviews. Ratings generally indicated moderate to almost perfect inter-rater agreement, as illustrated by an overall Cohen’s κ of .85. Disagreements primarily concerned (a) the severity of somatic symptoms, (b) the differential diagnosis of panic disorder, and (c) SSD specifiers. We conclude that the HPDI can be used to reliably diagnose DSM-5 SSD and IAD.
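The overall agreement statistic quoted above is a Cohen's kappa. As a hedged illustration, the sketch below computes kappa from a two-rater confusion matrix; the 3x3 counts over SSD, IAD, and no diagnosis are invented and are not the study's data.

```python
# Minimal sketch: Cohen's kappa from a two-rater confusion matrix.
# The 3x3 example counts (SSD / IAD / no diagnosis) are invented.
import numpy as np

def cohens_kappa(confusion: np.ndarray) -> float:
    n = confusion.sum()
    p_observed = np.trace(confusion) / n                              # diagonal agreement
    p_expected = (confusion.sum(axis=0) * confusion.sum(axis=1)).sum() / n**2
    return (p_observed - p_expected) / (1 - p_expected)

ratings = np.array([[40,  2,  1],
                    [ 3, 38,  2],
                    [ 1,  2, 15]])
print(round(cohens_kappa(ratings), 2))
```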
966.
The main purpose of this study was to analyze the psychometric properties and the measurement invariance across gender and age of the Student Stress Inventory-Stress Manifestations (SSI-SM) scores in a large sample of adolescents. The sample comprised a total of 1108 students (482 male), with a mean age of 14.61 years (SD = 1.71). The results indicated that the SSI-SM scores presented adequate psychometric properties from the perspectives of both classical test theory and Item Response Theory (IRT). Confirmatory factor analysis (CFA) showed that both a bifactor model and a three-factor model (emotional, physiological, and behavioural) were adequate. Multi-group CFA showed that the three-factor model had strong measurement invariance across gender and age. Statistically significant gender differences were found in both the latent means and the raw SSI-SM scores. Ordinal alpha was .78 for the Physiological, .90 for the Emotional, and .79 for the Behavioural subscales. Under IRT, the SSI-SM provides the most information at medium levels of the latent trait. The SSI-SM subscales were associated with emotional and behavioural problems. These results provide new sources of validity evidence for the SSI-SM scores in adolescents from the general population. The SSI-SM appears to be a useful, brief, and easy-to-administer self-report instrument for screening stress manifestations in school and educational settings.
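The IRT observation above, that the scale is most informative at medium trait levels, can be illustrated with the item information function. The sketch below uses a two-parameter logistic (2PL) item for simplicity; the study's ordinal items would more likely be fit with a graded response model, and the discrimination and difficulty values here are invented.

```python
# Minimal sketch: item information in a 2PL IRT model, illustrating why a
# scale can be most informative near the middle of the latent trait.
# The discrimination (a) and difficulty (b) values are invented.
import numpy as np

def item_information(theta, a, b):
    p = 1.0 / (1.0 + np.exp(-a * (theta - b)))  # 2PL response probability
    return a**2 * p * (1.0 - p)                 # information peaks at theta = b

theta = np.linspace(-3, 3, 7)
print(item_information(theta, a=1.5, b=0.0).round(3))
```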
967.
Recent studies in alphabetic writing systems have investigated whether the status of letters as consonants or vowels influences the perception and processing of written words. Here, we examined to what extent the organisation of consonants and vowels within words affects performance in a syllable counting task in English. Participants were asked to judge the number of syllables in written words that were matched for the number of spoken syllables but comprised either one orthographic vowel cluster fewer than the number of syllables (hiatus words, e.g., triumph) or as many vowel clusters as syllables (e.g., pudding). In three experiments, we found that readers were slower and less accurate on hiatus than on control words, even when phonological complexity (Experiment 1), number of reduced vowels (Experiment 2), and number of letters (Experiment 3) were taken into account. Interestingly, for words with or without the same number of vowel clusters and syllables, participants’ errors were more likely to underestimate the number of syllables than to overestimate it. Results are discussed from a cross-linguistic perspective.
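The manipulation hinges on counting orthographic vowel clusters, which is easy to make concrete. The sketch below applies a simplified vowel set (a, e, i, o, u) to the two example words from the abstract; edge cases such as "y" are deliberately ignored.

```python
# Minimal sketch: counting orthographic vowel clusters, the manipulation
# behind the hiatus items. "triumph" has one cluster ("iu") but two spoken
# syllables, while "pudding" has two clusters ("u", "i") for two syllables.
import re

def vowel_clusters(word: str) -> int:
    return len(re.findall(r"[aeiou]+", word.lower()))

for w in ["triumph", "pudding"]:
    print(w, vowel_clusters(w))
```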
968.
Atrial fibrillation (AF) is the most common sustained cardiac arrhythmia, occurring in 2% of the general population, and its projected incidence is assumed to rise to 4.3% by 2050. This paper presents a multicriteria methodology for the development of a model for monitoring the post-operative behaviour of patients who have received treatment for AF. The model classifies the patients into seven categories according to their relapse risk, on the basis of seven criteria related to the AF type and pathology conditions, the treatment received by the patients, and their medical history. The analysis is based on an extension of the UTilités Additives DIScriminantes (UTADIS) method, through the introduction of a two-stage model development procedure that minimizes the number and the magnitude of the misclassifications. The analysis uses a sample of 116 patients who underwent pulmonary vein isolation in a Greek public hospital. The classification accuracy of the best-fitted models ranges between 71% and 84%. Copyright © 2015 John Wiley & Sons, Ltd.
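UTADIS rests on an additive utility model: each patient receives a global utility as a weighted sum of per-criterion marginal utilities, and utility thresholds separate the ordered risk categories. The sketch below illustrates only that classification step; the criterion names, weights, marginal-utility shapes, and thresholds are invented (the abstract does not list the seven criteria), and in UTADIS proper these quantities are estimated by linear programming to minimize misclassification on the training sample.

```python
# Minimal sketch of the additive-utility classification behind UTADIS.
# Criteria, weights, marginal utilities, and thresholds are invented;
# UTADIS estimates them via linear programming on the training patients.
from typing import Callable, Dict, List

# Marginal utility of each hypothetical criterion, scaled to [0, 1].
marginal_utility: Dict[str, Callable[[float], float]] = {
    "af_type":     lambda v: v / 2.0,             # 0 = paroxysmal ... 2 = permanent
    "age":         lambda v: min(v / 80.0, 1.0),
    "la_diameter": lambda v: min(v / 60.0, 1.0),
}
weights = {"af_type": 0.5, "age": 0.3, "la_diameter": 0.2}

# Utility thresholds separating seven ordered relapse-risk categories
# (higher utility = higher risk in this toy encoding).
thresholds: List[float] = [0.85, 0.70, 0.55, 0.40, 0.25, 0.10]

def global_utility(patient: Dict[str, float]) -> float:
    return sum(weights[c] * marginal_utility[c](patient[c]) for c in weights)

def risk_category(patient: Dict[str, float]) -> int:
    u = global_utility(patient)
    for k, t in enumerate(thresholds):
        if u >= t:
            return k + 1              # category 1 = highest relapse risk
    return len(thresholds) + 1        # lowest-risk category

print(risk_category({"af_type": 2, "age": 67, "la_diameter": 48}))
```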
969.
Application of Chi-squared Automatic Interaction Detection (CHAID) in Population Segmentation
The Chi-squared Automatic Interaction Detector (CHAID) is a qualitative statistical classification technique that mainly addresses the problem of identifying the characteristics of several predictor variables according to the different responses of a dependent variable. The algorithm can be widely applied in social surveys and market segmentation, dividing populations according to different purposes. This article introduces the concept and development of the CHAID model, its theoretical basis and application procedure, and its validity testing methods; it also compares CHAID with the Logit model and other approaches, pointing out its advantages and limitations. Finally, directions for future research are proposed.
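CHAID's core step is to cross-tabulate each candidate predictor against the dependent variable and split on the predictor with the most significant chi-squared statistic. The sketch below shows only that selection step on a toy survey table; the category-merging and Bonferroni-adjustment stages of full CHAID are omitted, and the data frame is invented.

```python
# Minimal sketch of CHAID's split selection: pick the predictor whose
# cross-tabulation with the dependent variable has the smallest chi-squared
# p-value. Category merging and Bonferroni adjustment are omitted.
import pandas as pd
from scipy.stats import chi2_contingency

def best_chaid_split(df: pd.DataFrame, target: str, predictors: list) -> str:
    p_values = {}
    for col in predictors:
        table = pd.crosstab(df[col], df[target])
        _, p, _, _ = chi2_contingency(table)
        p_values[col] = p
    return min(p_values, key=p_values.get)  # predictor with smallest p-value

# Toy survey data: which variable best separates buyers from non-buyers?
df = pd.DataFrame({
    "gender": ["m", "f", "f", "m", "f", "m", "f", "m"],
    "income": ["hi", "hi", "lo", "lo", "hi", "lo", "hi", "lo"],
    "buys":   ["yes", "yes", "no", "no", "yes", "no", "yes", "no"],
})
print(best_chaid_split(df, "buys", ["gender", "income"]))
```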
970.
Peng Jian, Yin Kui, Hou Nan, Zou Yanchun, Nie Qi. Acta Psychologica Sinica, 2020, 52(9): 1105-1120
Given the severity of today's environmental problems, how to motivate green behavior has gradually become a topic of concern across society. Starting from two major green management tools, green transformational leadership and green human resource management practices, this study explores whether the two can jointly stimulate employees' green behavior. Based on the prior literature, two sets of competing hypotheses are proposed: according to cue consistency theory, green transformational leadership and green human resource management practices interact positively to influence employees' green behavior; according to substitutes-for-leadership theory, the two interact negatively. Study 1a (N = 91) and Study 1b (N = 220) used experimental methods and found that green transformational leadership and green human resource management practices work synergistically, with a positive interaction predicting employees' green behavior. Study 2 used a survey design, collecting supervisor-subordinate paired data at three time points (N = 173); it not only replicated the findings of Study 1 but also revealed the mediating role of environmental goal clarity. These results support the applicability of cue consistency theory in the green management domain and suggest that enterprises can combine "soft" and "hard" approaches in green management by jointly deploying green transformational leadership and green human resource management practices.