Search results: 1008 records found (search time: 15 ms); entries 131–140 shown.
131.
This study examined the reliability and validity of the Chinese version of the General Self-Efficacy Scale (GSES) with 610 high school students from Shandong Province. Results showed that: (1) some items of the Chinese GSES had low discrimination; (2) the Chinese GSES had high internal-consistency and split-half reliability, but its test-retest reliability was low; (3) the unidimensionality of the Chinese GSES was not supported; and (4) the Chinese GSES did not show good predictive validity.
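The two reliability indices this abstract reports — internal consistency and split-half reliability — have simple closed forms. The sketch below (illustrative data layout and toy numbers, not the study's data or code) computes Cronbach's alpha and an odd–even split-half coefficient with the Spearman–Brown correction:

```python
# Illustrative sketch: reliability indices for a Likert-type scale such as
# the GSES. `items` is a list of per-item score lists, one inner list per
# item, all of equal length (one entry per respondent).

def _var(xs):
    """Sample variance (n - 1 denominator)."""
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

def _pearson(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

def cronbach_alpha(items):
    """Internal-consistency reliability: alpha = k/(k-1) * (1 - sum of item
    variances / variance of total scores)."""
    k = len(items)
    n = len(items[0])
    item_vars = sum(_var(item) for item in items)
    totals = [sum(items[i][j] for i in range(k)) for j in range(n)]
    return k / (k - 1) * (1 - item_vars / _var(totals))

def split_half(items):
    """Odd-even split-half reliability with the Spearman-Brown correction."""
    n = len(items[0])
    odd = [sum(items[i][j] for i in range(0, len(items), 2)) for j in range(n)]
    even = [sum(items[i][j] for i in range(1, len(items), 2)) for j in range(n)]
    r = _pearson(odd, even)
    return 2 * r / (1 + r)
```

Test-retest reliability, the index the study found weak, is simply the Pearson correlation between total scores from two administrations.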
132.
Development of the Chinese College Student Personality Scale*   Cited by: 25 (self-citations: 0; citations by others: 25)
Building on the Chinese Big Seven personality structure model and on mature personality inventories, this study empirically developed a personality scale suited to the college student population. Statistical results showed that the 68-item, seven-dimension Chinese College Student Personality Scale (CCSPS) fit the seven-factor model of Chinese personality well, and that its reliability and validity met psychometric standards. In addition, norms for the CCSPS were established on a sample of 55,098 participants. The findings indicate that the CCSPS is a reliable instrument for measuring personality in Chinese college students.
133.
When multisource feedback instruments, for example, 360-degree feedback tools, are validated, multilevel structural equation models are the method of choice to quantify the amount of reliability as well as convergent and discriminant validity. A non-standard multilevel structural equation model that incorporates self-ratings (level-2 variables) and others’ ratings from different additional perspectives (level-1 variables), for example, peers and subordinates, has recently been presented. In a Monte Carlo simulation study, we determine the minimal required sample sizes for this model. Model parameters are accurately estimated even with the smallest simulated sample size of 100 self-ratings and two ratings of peers and of subordinates. The precise estimation of standard errors necessitates sample sizes of 400 self-ratings or at least four ratings of peers and subordinates. However, if sample sizes are smaller, mainly standard errors concerning common method factors are biased. Interestingly, there are trade-off effects between the sample sizes of self-ratings and others’ ratings in their effect on estimation bias. The degree of convergent and discriminant validity has no effect on the accuracy of model estimates. The χ² test statistic does not follow the expected distribution. Therefore, we suggest using a corrected level-specific standardized root mean square residual to analyse model fit and conclude with further recommendations for applied organizational research.
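The core Monte Carlo logic behind sample-size studies of this kind — simulate data at a given N, estimate the model, and compare the average model-based standard error with the empirical spread of the estimates across replications — can be sketched with a deliberately simple stand-in model. The normal-mean estimator below is an assumption for illustration only, standing in for the study's far richer multilevel SEM:

```python
# Minimal Monte Carlo sketch of standard-error accuracy checking. A ratio
# of mean model-based SE to empirical SD of the estimates near 1.0 means
# the SEs are estimated precisely; ratios far from 1.0 indicate SE bias.
import random
import statistics

def se_accuracy(n, reps=2000, mu=0.0, sigma=1.0, seed=1):
    rng = random.Random(seed)
    estimates, model_ses = [], []
    for _ in range(reps):
        sample = [rng.gauss(mu, sigma) for _ in range(n)]
        estimates.append(statistics.fmean(sample))
        # Model-based SE of the sample mean: s / sqrt(n)
        model_ses.append(statistics.stdev(sample) / n ** 0.5)
    # Empirical SD of the estimates is the benchmark for the "true" SE.
    return statistics.fmean(model_ses) / statistics.stdev(estimates)
```

In the actual study the estimated quantities are loadings and method-factor variances of the multilevel SEM, but the accuracy criterion, and the way sample size enters it, is the same.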
134.
Debates about the utility of p values and correct ways to analyze data have inspired new guidelines on statistical inference by the American Psychological Association (APA) and changes in the way results are reported in other scientific journals, but their impact on the Journal of the Experimental Analysis of Behavior (JEAB) has not previously been evaluated. A content analysis of empirical articles published in JEAB between 1992 and 2017 investigated whether statistical and graphing practices changed during that time period. The likelihood that a JEAB article reported a null hypothesis significance test, included a confidence interval, or depicted at least one figure with error bars has increased over time. Features of graphs in JEAB, including the proportion depicting single‐subject data, have not changed systematically during the same period. Statistics and graphing trends in JEAB largely paralleled those in mainstream psychology journals, but there was no evidence that changes to APA style had any direct impact on JEAB. In the future, the onus will continue to be on authors, reviewers and editors to ensure that statistical and graphing practices in JEAB continue to evolve without interfering with characteristics that set the journal apart from other scientific journals.
135.
In modern validity theory, a major concern is the construct validity of a test, which is commonly assessed through confirmatory or exploratory factor analysis. In the framework of Bayesian exploratory Multidimensional Item Response Theory (MIRT) models, we discuss two methods aimed at investigating the underlying structure of a test, in order to verify if the latent model adheres to a chosen simple factorial structure. This purpose is achieved without imposing hard constraints on the discrimination parameter matrix to address the rotational indeterminacy. The first approach prescribes a 2-step procedure. The parameter estimates are obtained through an unconstrained MCMC sampler. The simple structure is, then, inspected with a post-processing step based on the Consensus Simple Target Rotation technique. In the second approach, both rotational invariance and simple structure retrieval are addressed within the MCMC sampling scheme, by introducing a sparsity-inducing prior on the discrimination parameters. Through simulation as well as real-world studies, we demonstrate that the proposed methods are able to correctly infer the underlying sparse structure and to retrieve interpretable solutions.
136.
The present study introduces the Verbal Associated Pairs Screen (VAPS) as a new measure for assessing performance validity in pediatric populations. This study presents initial data on psychometric properties and establishes construct validity for the VAPS in a sample of 30 adolescent healthy controls and 206 youths with traumatic brain injury (TBI: moderate/severe, N = 30; mild, N = 176). The control group’s age (M = 14.93, SD = 1.8) was significantly higher than that of the moderate/severe TBI group (M = 13.9, SD = 2.8), t(68.508) = −3.038, p = .003, and the mild TBI (mTBI) group (M = 14, SD = 2.8), t(54.147) = 2.038, p = .046. The TBI groups were administered the VAPS alongside other established performance validity tests (PVTs) and well-established memory tests as part of routine clinical evaluations. The healthy control group was administered the VAPS only. VAPS score distributions for the control group were negatively skewed and highly kurtotic. VAPS scores from the moderate/severe TBI and control groups were indistinguishable for Trial 2 (U = 274, p < .01) and the Delay (U = 396, p = .218). In the mTBI group, convergent and divergent validity was established with other well-validated PVTs and memory tests, respectively. ROC curve analyses identified optimal cutoff scores for the VAPS Total Score, with acceptable sensitivity (55%) and excellent specificity (100%), as well as strong detectability (AUC = .829, 95% CI: 0.731–0.928, p < .001). Clinical applications, limitations, and directions for future research with the VAPS are discussed.
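The ROC quantities reported above — sensitivity and specificity at a cutoff, and AUC as an index of overall detectability — can be illustrated in a few lines of Python. The scores below are made-up toy numbers, not VAPS data; by convention here, lower scores suggest invalid performance:

```python
# Hypothetical illustration of ROC-based cutoff evaluation for a
# performance validity test. `invalid_scores` come from the group assumed
# to show invalid performance, `valid_scores` from credible performers.

def roc_auc(invalid_scores, valid_scores):
    """AUC via the Mann-Whitney formulation: the probability that a randomly
    chosen invalid-performance score falls below a randomly chosen valid one
    (ties count half)."""
    wins = ties = 0
    for a in invalid_scores:
        for b in valid_scores:
            if a < b:
                wins += 1
            elif a == b:
                ties += 1
    return (wins + 0.5 * ties) / (len(invalid_scores) * len(valid_scores))

def sens_spec(cutoff, invalid_scores, valid_scores):
    """Scores at or below the cutoff are flagged as invalid performance."""
    sens = sum(s <= cutoff for s in invalid_scores) / len(invalid_scores)
    spec = sum(s > cutoff for s in valid_scores) / len(valid_scores)
    return sens, spec
```

The clinical convention the abstract follows — choosing the cutoff that keeps specificity at or near 100% even at the cost of moderate sensitivity — corresponds to scanning cutoffs with `sens_spec` and accepting only those where no credible performer is misclassified.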
137.
The most common method used to evaluate child behavior and functioning is rating scales completed by parents and/or teachers. Given that executive functioning (EF) plays a fundamental role in the developing child’s cognitive, behavioral, and social-emotional development, it would be ideal if ratings of EF and performance-based EF measures assess the same construct. However, most studies report a small to negligible association between performance-based measures and ratings of EF. There are few studies investigating this association for preschoolers, and most only include parent ratings. Teachers may be more reliable reporters of EF behaviors due to the higher demand for EF skills in the preschool setting than at home and because teachers may have a better sense of what behaviors are normative. In this study, we examined the associations between three teacher-completed EF rating scales and performance-based EF measures in 243 preschool children. Results showed small to moderate correlations with EF measures of inhibition and cognitive flexibility/switching for all three scales, with the strongest associations observed between the Child Behavior Rating Scale (CBRS) Behavioral Regulation subscale and child EF measures. Exploratory multivariate path analyses showed that, after controlling for age, sex, and socioeconomic status (SES), Behavioral Regulation significantly predicted performance-based measures of EF and accounted for incrementally more variance in the models. We conclude that in ideal situations, it is best to measure EF using both rating scales and performance-based measures of EF. The CBRS seems to be a sensitive measure of EF in preschoolers and may be a helpful brief screening tool for use with teachers.
138.
In bilingual language environments, infants and toddlers listen to two separate languages during the same key years that monolingual children listen to just one, and bilinguals rarely learn each of their two languages at the same rate. Learning to understand language requires them to cope with challenges not found in monolingual input, notably the use of two languages within the same utterance (e.g., Do you like the perro? or ¿Te gusta el doggy?). For bilinguals of all ages, switching between two languages can reduce the efficiency of real‐time language processing. But language switching is a dynamic phenomenon in bilingual environments, presenting the young learner with many junctures where comprehension can be derailed or even supported. In this study, we tested 20 Spanish–English bilingual toddlers (18‐ to 30‐months) who varied substantially in language dominance. Toddlers’ eye movements were monitored as they looked at familiar objects and listened to single‐language and mixed‐language sentences in both of their languages. We found asymmetrical switch costs when toddlers were tested in their dominant versus non‐dominant language, and critically, they benefited from hearing nouns produced in their dominant language, independent of switching. While bilingualism does present unique challenges, our results suggest a united picture of early monolingual and bilingual learning. Just like monolinguals, experience shapes bilingual toddlers’ word knowledge, and with more robust representations, toddlers are better able to recognize words in diverse sentences.
139.
Introduction: The Autism-Spectrum Quotient (AQ; Baron-Cohen et al., 2001) is a self-report tool for screening autistic traits in adults of normal intelligence. While versions in many languages other than English now exist, little factorial evidence supports the valid use of this instrument as it was conceived, with five distinct dimensions (Social skills, Communication, Attention to detail, Attention switching, Imagination), and no such study exists for a French version of the AQ. The aim of our study is therefore to present the French version of the scale and to explore its factorial validity with confirmatory factor analyses and, possibly, its invariance across men and women.
Method: Several confirmatory factor analyses, using the robust WLSMV estimator for categorical response formats, were run on questionnaire data from 788 French-speaking students (17–25 years old) at university faculties or schools of higher education in Belgium. The original five-factor measurement model of the AQ was assessed along with alternative models. An exploratory factor analysis was also conducted to gain insight into possible sources of misfit.
Results: No measurement model – neither the original five-factor model nor any of the six alternative models tested – produced fit statistics or indices close to acceptable values: none fit the data. The internal consistency of the subscales was weak, and the exploratory factor analysis further showed that as many as ten factors were needed to explain 45% of the variance in the data.
Conclusion: Our results with a French version of the scale add to many others suggesting that the AQ is too heterogeneous a questionnaire, with somewhat ill-defined dimensions and non-specific or ambiguous items. The questionnaire should probably be shortened and its content realigned with core features of the autism spectrum.
140.
When examinees' test-taking motivation is questionable, practitioners must determine whether careless responding is of practical concern and, if so, decide on the best approach to filter such responses. As there has been insufficient research on these topics, the objectives of this study were to: (a) evaluate the degree of underestimation in the true mean when careless responses are present, and (b) compare the effectiveness of two filtering procedures in purifying biased aggregated scores. Results demonstrated that: (a) the true mean was underestimated by around 0.20 SDs when the total amount of careless responding exceeded 6.25%, 12.5%, and 12.5% for easy, moderately difficult, and difficult tests, respectively, and (b) listwise deleting data from unmotivated examinees artificially inflated the estimated mean by as much as 0.42 SDs when ability was related to careless responding. Findings from this study have implications for when and how practitioners should handle careless responses in group-based low-stakes assessments.
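The mechanism behind finding (a) — careless responders pulling the aggregated mean below its true value — can be sketched with a toy simulation. All rates below (ability level, guessing rate, proportion careless) are assumptions chosen for illustration, not the study's design:

```python
# Toy simulation: a fraction of examinees respond carelessly, i.e. answer
# each multiple-choice item correctly only at the random-guessing rate,
# which biases the observed group mean downward.
import random

def mean_score(n=10000, items=40, p_correct=0.7, careless_rate=0.0,
               guess_rate=0.25, seed=7):
    """Average number-correct score across n simulated examinees."""
    rng = random.Random(seed)
    total = 0
    for _ in range(n):
        # A careless examinee performs at chance on every item.
        p = guess_rate if rng.random() < careless_rate else p_correct
        total += sum(rng.random() < p for _ in range(items))
    return total / n
```

Comparing `mean_score(careless_rate=0.0)` with `mean_score(careless_rate=0.125)` shows the downward bias directly; its size depends on the gap between the true and guessing success rates, which is why the study finds easier tests tolerate a smaller proportion of careless responses before the bias becomes practically meaningful.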

Copyright © Beijing Qinyun Technology Development Co., Ltd.  京ICP备09084417号