A total of 354 results were found (search time: 15 ms).
121.
Significance testing based on p-values is standard in psychological research and teaching. Typically, research articles and textbooks present and use p as a measure of statistical evidence against the null hypothesis (the Fisherian interpretation), even though the concepts and tools they rely on are based on a completely different usage of p, as a device for controlling long-term decision errors (the Neyman-Pearson interpretation). There are four major problems with using p as a measure of evidence, and these problems are often overlooked in psychology. First, p is uniformly distributed under the null hypothesis and can therefore never indicate evidence for the null. Second, p is conditioned solely on the null hypothesis and is therefore unsuited to quantifying evidence, because evidence is always relative in the sense of being evidence for or against a hypothesis relative to another hypothesis. Third, p designates the probability of obtaining evidence (given the null), rather than the strength of evidence. Fourth, p depends on unobserved data and subjective intentions and therefore implies, under the evidential interpretation, that the evidential strength of observed data depends on things that did not happen and on subjective intentions. In sum, using p in the Fisherian sense as a measure of statistical evidence is deeply problematic, both statistically and conceptually, while the Neyman-Pearson interpretation is not about evidence at all. In contrast, the likelihood ratio escapes these problems and is recommended as a tool for psychologists to represent the statistical evidence conveyed by the obtained data relative to two hypotheses.
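To illustrate the contrast the abstract draws, here is a minimal sketch (not taken from the article; the sample values, hypothesized means, and known standard deviation are invented for illustration) that computes a p-value conditioned on the null alone and a likelihood ratio weighing the same data under two explicit hypotheses:

```python
# Illustrative sketch only: p-value (computed under H0 alone) vs. likelihood
# ratio (evidence for H1 relative to H0, given the same observed data).
# All numbers below are hypothetical.
import numpy as np
from scipy import stats

data = np.array([0.8, 1.2, 0.3, 1.5, 0.9, 1.1, 0.7, 1.4])  # hypothetical sample
sigma = 1.0            # assume a known population SD for simplicity
mu0, mu1 = 0.0, 1.0    # H0: mean = 0; H1: mean = 1 (two explicit hypotheses)

# One-sample z test: p is conditioned solely on H0.
z = (data.mean() - mu0) / (sigma / np.sqrt(len(data)))
p_value = 2 * (1 - stats.norm.cdf(abs(z)))

# Likelihood ratio: relative evidence for H1 over H0 conveyed by the data.
loglik = lambda mu: stats.norm.logpdf(data, loc=mu, scale=sigma).sum()
likelihood_ratio = np.exp(loglik(mu1) - loglik(mu0))

print(f"p-value under H0:             {p_value:.4f}")
print(f"likelihood ratio L(H1)/L(H0): {likelihood_ratio:.1f}")
```

The likelihood ratio answers how much better one hypothesis predicts the observed data than the other, which is the relative notion of evidence the abstract recommends.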
122.
Existing research comparing error management (a strategy focused on increasing the positive and decreasing the negative consequences of errors) to error prevention (a strategy focused on working faultlessly) has identified error management as beneficial for multiple outcomes. Yet, due to various methodological limitations, it is unclear whether the effects found previously are due to error prevention, error management, or both. We examine this in an experimental study with a 2 (error prevention: yes vs. no) × 2 (error management: yes vs. no) factorial design. Error prevention had negative effects on cognition and adaptive transfer performance. Error management alleviated worry and boosted perceived self-efficacy. Overall, the results show that error prevention and error management have distinct effects on negative affect, self-efficacy, cognition, and performance.
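A hedged sketch of how such a 2 × 2 between-subjects design could be analyzed; the column names, cell sizes, and simulated effect directions below are hypothetical and are not the authors' data or analysis code:

```python
# Hypothetical analysis sketch for a 2 x 2 between-subjects factorial design.
# Column names and simulated scores are illustrative only.
import numpy as np
import pandas as pd
from statsmodels.formula.api import ols
from statsmodels.stats.anova import anova_lm

rng = np.random.default_rng(0)
n = 40  # participants per cell (hypothetical)
cells = [(prev, mgmt) for prev in (0, 1) for mgmt in (0, 1)]
df = pd.DataFrame([
    {"prevention": prev, "management": mgmt,
     # illustrative effects: prevention lowers, management raises transfer scores
     "transfer": rng.normal(50 - 5 * prev + 5 * mgmt, 10)}
    for prev, mgmt in cells for _ in range(n)
])

# Two-way ANOVA with both main effects and their interaction.
model = ols("transfer ~ C(prevention) * C(management)", data=df).fit()
print(anova_lm(model, typ=2))
```

Crossing the two factors is what allows the unique contribution of each strategy to be separated, which is the design feature the study adds over earlier work.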
123.
魏裕铭 《管子学刊》2010,(2):42-45,50
Research to date on the Xunzi (《荀子》), one of the celebrated works of early prose, has in its artistic analysis gone no further than a general acknowledgment of the text's metaphors and extensive use of parallelism. This paper argues that the humorous style of the Xunzi is one of its artistic features deserving attention. Humorous devices in the text, such as metaphors with an absurdist flavor, inverted description, the deliberately roundabout exposure of the causes behind incongruous cause-and-effect phenomena, and riddle-like games of witty concealment, all merit appreciation and further exploration.
124.
If the model for the data is, strictly speaking, incorrect, how can one test whether the model fits? Standard goodness-of-fit (GOF) tests assume that models are either strictly correct or strictly incorrect, but in practice the correct model cannot be assumed to be available. It would still be of interest to determine how good or how bad the approximation is, but how can this be achieved? And if a model is judged to be a good approximation, and hence a good explanation of the data, how can reliable confidence intervals be constructed? This paper attempts to answer these questions. Several GOF tests and methods for constructing confidence intervals are evaluated both in a simulation study and with real data from the internet-based daily news memory test.
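A small simulation sketch of the problem raised here (the distributions, the size of the misspecification, and the discrepancy measure are invented for illustration): a strict GOF test of an approximately correct model is eventually rejected as the sample grows, while a descriptive measure of approximation error stabilizes.

```python
# Sketch: with an approximately correct model, a strict goodness-of-fit test
# rejects once n is large, while a descriptive discrepancy measure stabilizes.
# Distributions and the size of the misspecification are made up.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
true_p = np.array([0.26, 0.24, 0.25, 0.25])   # "true" cell probabilities
model_p = np.array([0.25, 0.25, 0.25, 0.25])  # fitted model: uniform (slightly wrong)

for n in (100, 1_000, 100_000):
    counts = rng.multinomial(n, true_p)
    chi2, p = stats.chisquare(counts, f_exp=n * model_p)
    # Root-mean-square discrepancy between observed and model proportions:
    rms = np.sqrt(np.mean((counts / n - model_p) ** 2))
    print(f"n={n:>7}: GOF p-value={p:.3f}, RMS discrepancy={rms:.4f}")
```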
125.
Adults use both bottom-up sensory inputs and top-down signals to generate predictions about future sensory input. Infants have also been shown to make predictions with simple stimuli, and recent work suggests that top-down processing is available early in infancy. However, it is unknown whether this means that top-down prediction is an ability continuous across the lifespan, or whether an infant's ability to predict differs from an adult's, qualitatively or quantitatively. We employed pupillometry to provide a direct comparison of prediction abilities across these disparate age groups. The pupil dilation response (PDR) was measured in 6-month-olds and adults as they completed an identical implicit learning task designed to teach associations between sounds and pictures. We found significantly larger PDRs for visual omission trials (i.e. trials that violated participants' predictions without presenting new stimuli, thereby controlling for bottom-up signals) than for visual present trials (i.e. trials that confirmed participants' predictions) in both age groups. Furthermore, a computational learning model closely linked to prediction error (the Rescorla-Wagner model) demonstrated similar learning trajectories, suggesting a continuity of predictive capacity and learning across the two age groups.
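A minimal Rescorla-Wagner sketch, assuming a made-up learning rate and trial sequence (this is not the authors' model code): associative strength is updated by the prediction error, which spikes on an omission trial once the association has been learned.

```python
# Minimal Rescorla-Wagner sketch: the associative strength V of a
# sound -> picture pairing is updated by the prediction error (lambda - V).
# Learning rate and trial sequence are illustrative assumptions.
ALPHA = 0.3  # hypothetical learning rate

def rescorla_wagner(outcomes, alpha=ALPHA):
    """Return associative strength V and prediction error after each trial.

    outcomes: 1.0 when the predicted picture appears, 0.0 on omission trials.
    """
    v, history = 0.0, []
    for lam in outcomes:
        error = lam - v          # prediction error (large on omission if v is high)
        v += alpha * error       # update associative strength
        history.append((v, error))
    return history

# Ten paired trials followed by one omission trial: the omission produces a
# large negative prediction error, analogous to the larger pupil response.
trials = [1.0] * 10 + [0.0]
for t, (v, err) in enumerate(rescorla_wagner(trials), start=1):
    print(f"trial {t:2d}: V={v:.2f}, prediction error={err:+.2f}")
```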
126.
Individual assessment of infants' speech discrimination is of great value for studies of language development that seek to relate early and later skills, as well as for clinical work. The present study explored the applicability of the hybrid visual fixation paradigm (Houston et al., 2007) and its associated statistical analysis approach for assessing individual discrimination of a native vowel contrast, /aː/ - /eː/, in Dutch 6- to 10-month-old infants. Houston et al. found that 80% (8/10) of the 9-month-old infants successfully discriminated the contrast between the pseudowords boodup and seepug. Using the same approach, we found that 12% (14/117) of the infants in our sample discriminated the highly salient /aː/ - /eː/ contrast. This percentage dropped to 3% (3/117) when we corrected for multiple testing. Bayesian hierarchical modeling, in contrast, indicated that 50% of the infants showed evidence of discrimination. Advantages of Bayesian hierarchical modeling are that (1) no correction for multiple testing is needed and (2) better estimates are obtained at the individual level. Thus, individual speech discrimination can be assessed more accurately using state-of-the-art statistical approaches.
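A simplified sketch of the partial-pooling idea behind hierarchical modeling (this is empirical-Bayes-style shrinkage with invented numbers, not the authors' Bayesian model): individual estimates are pulled toward the group mean in proportion to their measurement noise, so no per-infant multiple-testing correction is required.

```python
# Sketch of partial pooling: each infant's discrimination score is shrunk
# toward the group mean according to the ratio of between-infant variance to
# total variance. All quantities below are simulated, not real data.
import numpy as np

rng = np.random.default_rng(2)
n_infants = 20
true_group_mean, true_group_sd, noise_sd = 0.3, 0.2, 0.5  # hypothetical

true_effects = rng.normal(true_group_mean, true_group_sd, n_infants)
observed = rng.normal(true_effects, noise_sd)  # noisy per-infant estimates

# Empirical partial pooling: shrinkage factor = between-infant variance /
# (between-infant variance + measurement noise variance).
group_mean = observed.mean()
between_var = max(observed.var(ddof=1) - noise_sd**2, 0.0)
shrink = between_var / (between_var + noise_sd**2)
pooled = group_mean + shrink * (observed - group_mean)

print(f"shrinkage factor: {shrink:.2f}")
print("unpooled estimate of infant 0 :", round(observed[0], 2))
print("partially pooled estimate     :", round(pooled[0], 2))
```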
127.
In baseball hitting, batters need high-precision timing control to hit the ball with the bat's sweet spot. Knowing the acceptable range of timing error for hitting the ball in the aimed direction for various pitch types helps to determine whether a batter's mis-hit stems from a spatial or a temporal error, and highlights the motor skills the batter requires. The purpose of this study was to determine the acceptable timing error for different baseball pitches and the impact characteristics of mis-hits. Twenty-six high-school baseball players hit balls launched from a pitching machine with three pitch types: fastballs, curveballs, and slowballs. We recorded the three-dimensional behavior of the ball, the bat, and the body (pelvis) using an optical motion capture system. We then defined the optimal impact location based on timing accuracy and determined the acceptable range of timing error from the relationship between the horizontal orientation of the bat's long axis at ball impact and the horizontal direction of the batted ball. A ±30° window in the horizontal direction of the batted ball was set as the tolerance criterion for timing. The acceptable timing error was ±7.9 ms for fastballs, ±10.7 ms for curveballs, and ±10.7 ms for slowballs, and the optimal timing for outside pitches was approximately 10 ms later than that for inside pitches. Variation in impact location along the bat's long axis also explained 38.1% of the timing error (R² = 0.381, P < 0.001), and the timing error was minimized at a position close to the bat's sweet spot. These results suggest that the optimal impact location and the acceptable range of timing error depend on the pitch course and speed, and that timing accuracy is essential for achieving the spatial accuracy required to hit the ball with the bat's sweet spot.
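A back-of-envelope sketch of the arithmetic implied by the reported tolerances, assuming an approximately linear relationship between timing error and batted-ball direction (the linearity assumption and the derived slopes are illustrative, not taken from the paper's methods):

```python
# If batted-ball direction changes roughly linearly with timing error, a
# +/-30 degree direction tolerance maps to a timing window of tolerance/slope.
# The slopes below are back-calculated from the reported windows.
DIRECTION_TOLERANCE_DEG = 30.0  # allowed deviation of batted-ball direction

reported_windows_ms = {"fastball": 7.9, "curveball": 10.7, "slowball": 10.7}

for pitch, window_ms in reported_windows_ms.items():
    # Implied sensitivity of ball direction to timing error (deg per ms).
    slope_deg_per_ms = DIRECTION_TOLERANCE_DEG / window_ms
    print(f"{pitch:9s}: +/-{window_ms} ms window -> "
          f"~{slope_deg_per_ms:.1f} deg of direction change per ms of error")
```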
128.
The error-related negativity (ERN) and its theoretical explanations  (total citations: 1; self-citations: 0; citations by others: 1)
刘玉丽  张智君 《应用心理学》2008,14(2):180-186,192
After an erroneous action, a pronounced negative potential shift can be observed over the frontocentral region, known as the error-related negativity (ERN). Some researchers have proposed the reinforcement-learning theory, which holds that the ERN reflects the discrepancy between the current action outcome and expectation. In contrast, the conflict-monitoring theory holds that the ERN is related to response conflict. Other researchers have proposed the mismatch theory, which emphasizes that the ERN is generated by the difference between the neural representation of the actual (erroneous) response and the representation of the response required by the current task. Building on an exposition of these theoretical accounts of the ERN, this paper analyzes and integrates the relationships among the three.
129.
This study examined in depth the current development of arithmetic estimation ability among elementary school children in China. Group and individual tests of 1,027 children using self-developed materials showed the following developmental characteristics: (1) estimation ability is clearly affected by problem type, but the influence differs across age stages; (2) third grade may be a critical period for the development of integer and decimal estimation, while fifth grade is a favorable period for developing fraction estimation; (3) developmental trajectories differ considerably across estimation strategies, with different emphases at different periods; (4) elementary school children are prone to a variety of estimation errors, with some typical errors at each grade level. The development of children's estimation ability, strategies, and error types is discussed in depth.
130.
韩明秀  贾世伟 《心理科学进展》2016,24(11):1758-1766
The level of awareness in error processing includes conscious, unconscious, and intermediate levels. Levels of awareness are usually distinguished through the error-reporting paradigm. Based on how awareness levels are divided, existing studies can be grouped into binary-awareness and multi-level-awareness studies. Prior research has focused mainly on the relation between awareness level and the error-evoked Ne and Pe components. Both types of studies consistently show that the Pe is modulated by awareness level, although this has not been consistently supported by fMRI studies. Whether the Ne is elicited independently of awareness remains controversial. Some studies have also examined the influence of attention on error awareness; the results support the view that the Pe covaries with awareness level, but how attention affects error awareness is still in question. Based on a review and analysis of these studies, suggestions for future research are proposed.