741.
The purpose of the present experiment was to investigate the role of auditory feedback and its impact on movement time in a standard Fitts task. Feedback was given at the moment of target acquisition. A 2-way analysis of variance found significant differences between feedback groups at all three indices of difficulty (F(2, 40) = 156.02, p < .001). Results from a mixed-model multivariate analysis of variance for kinematic factors showed significant differences in peak velocity and the location of peak velocity when comparing feedback groups. In general, the addition of auditory feedback decreased the task ID by 0.5.
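For context, the index of difficulty (ID) referenced above is a standard function of target distance and width, and movement time (MT) is usually modeled as a linear function of ID. The sketch below is purely illustrative and not drawn from the study itself: it assumes the Shannon formulation ID = log2(D/W + 1), and the coefficients a and b are placeholder values rather than estimates reported in the abstract.

```python
import math

def fitts_id(distance: float, width: float) -> float:
    """Index of difficulty in bits (Shannon formulation, an illustrative assumption)."""
    return math.log2(distance / width + 1)

def predicted_mt(distance: float, width: float, a: float = 0.2, b: float = 0.1) -> float:
    """Movement time (s) from the linear Fitts model MT = a + b * ID.

    a and b are placeholder values; in practice they are estimated by
    regressing observed movement times on ID.
    """
    return a + b * fitts_id(distance, width)

# Three hypothetical target conditions of increasing difficulty.
for d, w in [(64, 32), (128, 16), (256, 8)]:
    print(f"D={d}, W={w}: ID={fitts_id(d, w):.2f} bits, MT={predicted_mt(d, w):.3f} s")
```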
742.
Maintained Individual Data Distributed Likelihood Estimation (MIDDLE) is a novel paradigm for research in the behavioral, social, and health sciences. The MIDDLE approach is based on the seemingly impossible idea that data can be privately maintained by participants and never revealed to researchers, while still enabling statistical models to be fit and scientific hypotheses tested. MIDDLE rests on the assumption that participant data should belong to, be controlled by, and remain in the possession of the participants themselves. Distributed likelihood estimation refers to fitting statistical models by sending an objective function and vector of parameters to each participant’s personal device (e.g., smartphone, tablet, computer), where the likelihood of that individual’s data is calculated locally. Only the likelihood value is returned to the central optimizer. The optimizer aggregates likelihood values from responding participants and chooses new vectors of parameters until the model converges. A MIDDLE study provides significantly greater privacy for participants, automatic management of opt-in and opt-out consent, lower cost for the researcher and funding institute, and faster determination of results. Furthermore, if a participant opts into several studies simultaneously and opts into data sharing, these studies automatically have access to individual-level longitudinal data linked across all studies.
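The distributed-likelihood loop described above can be sketched in a few lines. The code below is a minimal illustration, not the MIDDLE implementation: it assumes a simple normal model with unknown mean and standard deviation, simulates each participant's device as a closure over private data, and lets a central optimizer see nothing but the returned log-likelihood values.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)

def make_device(data):
    """Simulate a participant's device: it exposes only a log-likelihood function."""
    def local_loglik(params):
        mu, log_sigma = params
        sigma = np.exp(log_sigma)  # sd parameterized on the log scale to keep it positive
        # Normal log-likelihood of this participant's private data.
        return np.sum(-0.5 * np.log(2 * np.pi) - log_sigma
                      - 0.5 * ((data - mu) / sigma) ** 2)
    return local_loglik

# Three simulated participants holding private data of different sizes.
devices = [make_device(rng.normal(loc=3.0, scale=1.5, size=n)) for n in (20, 35, 50)]

def negative_total_loglik(params):
    # The central optimizer only ever sees per-device log-likelihood values,
    # never the raw data held on each device.
    return -sum(device(params) for device in devices)

result = minimize(negative_total_loglik, x0=np.array([0.0, 0.0]), method="Nelder-Mead")
mu_hat, sigma_hat = result.x[0], np.exp(result.x[1])
print(f"Estimated mean = {mu_hat:.2f}, estimated sd = {sigma_hat:.2f}")
```

In a real MIDDLE study the objective function shipped to each device would encode the full model of interest and the devices would be remote, but the division of labor is the same: raw data never leave the participant, only likelihood values do.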
743.
Sequential multiple assignment randomized trials (SMARTs) are a useful and increasingly popular approach for gathering information to inform the construction of adaptive interventions to treat psychological and behavioral health conditions. Until recently, analysis methods for data from SMART designs considered only a single measurement of the outcome of interest when comparing the efficacy of adaptive interventions. Lu et al. proposed a method for considering repeated outcome measurements to incorporate information about the longitudinal trajectory of change. While their proposed method can be applied to many kinds of outcome variables, they focused mainly on linear models for normally distributed outcomes. Practical guidelines and extensions are required to implement this methodology with other types of repeated outcome measures common in behavioral research. In this article, we discuss implementation of this method with repeated binary outcomes. We explain how to compare adaptive interventions in terms of various summaries of repeated binary outcome measures, including average outcome (area under the curve) and delayed effects. The method is illustrated using an empirical example from a SMART study to develop an adaptive intervention for engaging alcohol- and cocaine-dependent patients in treatment. Monte Carlo simulations are provided to demonstrate the good performance of the proposed technique.
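One of the summary measures mentioned above, the average outcome expressed as area under the curve (AUC) of the repeated binary measurements, is easy to illustrate. The sketch below uses made-up data and fixed time points and is not the estimation method of the cited work (which additionally accounts for the SMART design's sequential randomization); it only shows how an AUC summary of a binary outcome trajectory can be computed.

```python
import numpy as np

# Repeated binary outcomes (e.g., engaged in treatment: 1 = yes, 0 = no)
# measured at fixed time points for a handful of hypothetical participants.
time_points = np.array([0.0, 1.0, 2.0, 3.0])  # months (illustrative)
outcomes = np.array([                          # rows = participants
    [0, 1, 1, 1],
    [0, 0, 1, 1],
    [1, 1, 1, 0],
    [0, 0, 0, 1],
])

# Estimated probability of success at each time point.
mean_trajectory = outcomes.mean(axis=0)

# Trapezoidal area under the trajectory, rescaled by the length of follow-up
# so the summary stays on the probability scale.
widths = np.diff(time_points)
avg_heights = (mean_trajectory[:-1] + mean_trajectory[1:]) / 2
auc = np.sum(widths * avg_heights) / (time_points[-1] - time_points[0])

print("Mean trajectory:", mean_trajectory)
print(f"Average outcome (AUC summary): {auc:.3f}")
```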
744.
Many researchers face the problem of missing data in longitudinal research. High-risk samples in particular are characterized by missing data, which can complicate analyses and the interpretation of results. In the current study, our aim was to find the best method to deal with the missing data in a specific study with many missing values on the outcome variable. Therefore, different techniques to handle missing data were evaluated, and a solution to efficiently handle substantial amounts of missing data was provided. A simulation study was conducted to determine the best method to deal with the missing data. Results revealed that multiple imputation (MI) using predictive mean matching performed best, with the lowest bias and the smallest confidence interval (CI) while maintaining power. Listwise deletion and last observation carried backward also performed acceptably with respect to bias; however, CIs were much larger and the sample size was almost halved using these methods. Longitudinal research in high-risk samples could benefit from using MI to handle missing data in future research. The paper ends with a checklist for handling missing data.
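Predictive mean matching, the MI variant that performed best in the study, imputes a missing outcome by predicting it from observed covariates and then borrowing the observed value of a "donor" case whose prediction is closest. The single-imputation sketch below is illustrative only: the simulated data, the plain least-squares fit, and the donor-pool size are assumptions, and full multiple imputation would repeat this step several times with perturbed regression coefficients to propagate imputation uncertainty.

```python
import numpy as np

rng = np.random.default_rng(1)

# Illustrative data: outcome y is missing for ~30% of cases, predictor x is complete.
n = 200
x = rng.normal(size=n)
y = 2.0 + 1.5 * x + rng.normal(scale=1.0, size=n)
missing = rng.random(n) < 0.3
y_obs = np.where(missing, np.nan, y)

def pmm_impute(x, y_obs, n_donors=5):
    """Single predictive-mean-matching pass: fill missing y with observed donor values."""
    obs = ~np.isnan(y_obs)
    X = np.column_stack([np.ones_like(x), x])
    # Least-squares fit on the observed cases only.
    beta, *_ = np.linalg.lstsq(X[obs], y_obs[obs], rcond=None)
    pred = X @ beta                      # predicted means for all cases
    y_imp = y_obs.copy()
    for i in np.flatnonzero(~obs):
        # Find observed cases whose predictions are closest to this case's prediction.
        dist = np.abs(pred[obs] - pred[i])
        donors = np.argsort(dist)[:n_donors]
        # Impute with an observed value drawn at random from the donor pool.
        y_imp[i] = rng.choice(y_obs[obs][donors])
    return y_imp

y_completed = pmm_impute(x, y_obs)
print(f"Imputed-data mean: {y_completed.mean():.2f}  true mean: {y.mean():.2f}")
```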
745.
This article presents an overview of the history of efforts to protect human subjects in research. It discusses the establishment of international, national, organizational, and institutional procedures designed to protect human participants. The article provides a detailed summary of the principal ethical codes for research and their origins. It includes discussions of the most frequently encountered ethical issues ranging from the initial decision to undertake the project, through the selection and application of the various research procedures, to the analysis and interpretation of the data.
746.
This research explores the role of three intercultural personality traits—emotional stability, social initiative, and open-mindedness—as coping resources for expatriate couples’ adjustment. First, we examined the direct relationships of expatriates’ and expatriate spouses’ personality trait levels with psychological and sociocultural adjustment. Psychological adjustment refers to internal psychological outcomes such as mental health and personal satisfaction, whereas sociocultural adjustment refers to more externally oriented psychological outcomes that link the individual to the new environment. Second, we examined the association of expatriates’ personality trait levels with professional adjustment, which was defined in terms of job performance and organizational commitment. Cross-sectional analyses among 196 expatriates and expatriate spouses (i.e., 98 expatriate couples) revealed that the three dimensions are each associated with specific facets of adjustment. A longitudinal analysis among a subsample (45 couples) partially confirmed these findings. Furthermore, we obtained evidence for a resource compensation effect, that is, the compensatory process whereby one partner's lack of sufficiently high levels of a certain personality trait is compensated for by the other partner's high(er) levels of this trait. Through this resource compensation effect, the negative consequences of a lack of sufficient levels of a personality trait on adjustment can be diminished. Apparently, in the absence of sufficiently high trait levels, individuals can benefit from personality resources in their partners.
747.
This study presents Danish data for the Symbol Digit Modalities Test (SDMT), Color Trails Test (CTT), and a modified Stroop test from 100 subjects aged 60–87 years. Among the included demographic variables, age had the highest impact on test performances. Thus, the study presents separate data for different age groups. For SDMT and CTT1, Danish Adult Reading Test (DART) score also had a significant impact on test performances. The incongruent version of the modified Stroop test was significantly correlated with education. Moderate and significant correlations were found between the three tests. Even though the three tests are commonly used, few normative data for the elderly exist. SDMT and CTT performances from this study were in the same range as previously published international norms, but the validity of the result from the modified Stroop test could not be investigated.
748.
The Fuld Object Memory Evaluation (FOME) has considerable utility for cognitive assessment in older adults, but there are few normative data, particularly for the oldest old. In this study, 80 octogenarians and 244 centenarians from the Georgia Centenarian Study completed the FOME. Total and trial-to-trial performance on the storage, retrieval, repeated retrieval, and ineffective reminder indices were assessed. Additional data stratified by age group, education, and cognitive impairment are provided in the Supplemental data. Octogenarians performed significantly better than centenarians on all FOME measures. Neither age group benefitted from additional learning trials beyond Trial 3 for storage and Trial 2 for retention and retrieval. Ineffective reminders showed no change across learning trials for octogenarians, while centenarians improved only between Trials 1 and 2. This minimal improvement past Trial 2 indicates that older adults might benefit from a truncated version of the test that does not include Trials 3 through 5, with the added benefit of reducing testing burden in this population.
749.
A main thread of the debate over mathematical realism has come down to whether mathematics does explanatory work of its own in some of our best scientific explanations of empirical facts. Realists argue that it does; anti-realists argue that it doesn't. Part of this debate depends on how mathematics might be able to do explanatory work in an explanation. Everyone agrees that it's not enough that there merely be some mathematics in the explanation. Anti-realists claim there is nothing mathematics can do to make an explanation mathematical; realists think something can be done, but they are not clear about what that something is.

I argue that many of the examples of mathematical explanations of empirical facts in the literature can be accounted for in terms of Jackson and Pettit's [1990] notion of program explanation, and that mathematical realists can use the notion of program explanation to support their realism. This is exactly what has happened in a recent thread of the debate over moral realism (in this journal). I explain how the two debates are analogous and how moves that have been made in the moral realism debate can be made in the mathematical realism debate. However, I conclude that one can be a mathematical realist without having to be a moral realist.