151.
ABSTRACT Studies examining visual abilities in individuals with early auditory deprivation have reached mixed conclusions: some find that congenital auditory deprivation and/or lifelong use of a visuospatial language improves specific visual skills, while others fail to find substantial differences. A more consistent finding is enhanced peripheral vision and an increased ability to efficiently distribute attention to the visual periphery following auditory deprivation. However, the extent to which this applies to visual skills in general, or to certain conspicuous stimuli such as faces in particular, is unknown. We examined the perceptual resolution of peripheral vision in the deaf, testing various facial attributes typically associated with high-resolution foveal information processing. We compared performance in face-identification tasks to performance with non-face control stimuli. Although we found no enhanced perceptual representations in face identification, gender categorization, or eye-gaze direction recognition tasks, fearful expressions showed greater resilience than happy or neutral ones to increasing eccentricity. In the absence of an alerting sound, the visual system of auditorily deprived individuals may develop greater sensitivity to specific conspicuous stimuli as a compensatory mechanism. The results also suggest neural reorganization in the deaf, reflected in a reversed right-visual-field advantage in face identification tasks.
152.
Body movements, as well as faces, communicate emotions. Research in adults has shown that the perception of action kinematics plays a crucial role in understanding others’ emotional experiences. Still, little is known about infants’ sensitivity to bodily emotional expressions, since most research in infancy has focused on faces. While there is some first evidence that infants can recognize emotions conveyed in whole‐body postures, it remains an open question whether they can extract emotional information from action kinematics. We measured electromyographic (EMG) activity over the muscles involved in happy (zygomaticus major, ZM), angry (corrugator supercilii, CS) and fearful (frontalis, F) facial expressions, while 11‐month‐old infants observed the same action performed with either happy or angry kinematics. Results demonstrate that infants responded to angry and happy kinematics with matching facial reactions. In particular, ZM activity increased while CS activity decreased in response to happy kinematics, and vice versa for angry kinematics. Our results show for the first time that infants can rely on kinematic information to pick up on the emotional content of an action. Thus, from very early in life, action kinematics represent a fundamental and powerful source of information for revealing others’ emotional states.
153.
Jonathan Redshaw, Mark Nielsen, Virginia Slaughter, Siobhan Kennedy‐Costantini, Janine Oostenbroek, Jessica Crimston, Thomas Suddendorf 《Developmental science》2020,23(2)
The influential hypothesis that humans imitate from birth – and that this capacity is foundational to social cognition – is currently being challenged from several angles. Most prominently, the largest and most comprehensive longitudinal study of neonatal imitation to date failed to find evidence that neonates copied any of nine actions at any of four time points (Oostenbroek et al., [2016] Current Biology, 26, 1334–1338). The authors of an alternative and statistically liberal post‐hoc analysis of these same data (Meltzoff et al., [2017] Developmental Science, 21, e12609), however, concluded that the infants actually did imitate one of the nine actions: tongue protrusion. In line with the original intentions of this longitudinal study, we here report on whether individual differences in neonatal “imitation” predict later‐developing social cognitive behaviours. We measured a variety of social cognitive behaviours in a subset of the original sample of infants (N = 71) during the first 18 months: object‐directed imitation, joint attention, synchronous imitation and mirror self‐recognition. Results show that, even using the liberal operationalization, individual scores for neonatal “imitation” of tongue protrusion failed to predict any of the later‐developing social cognitive behaviours. The average Spearman correlation was close to zero, mean rs = 0.027, 95% CI [−0.020, 0.075], with all Bonferroni adjusted p values > .999. These results run counter to Meltzoff et al.'s rebuttal, and to the existence of a “like me” mechanism in neonates that is foundational to human social cognition.
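The statistical test reported above (Spearman correlations with a Bonferroni adjustment across multiple outcomes) can be sketched as follows. This is an illustrative reconstruction, not the study's actual analysis code; the variable names and data are hypothetical.

```python
# Sketch: Spearman correlations between one predictor and several outcomes,
# with Bonferroni-adjusted p values (multiply by the number of tests, cap at 1).
from scipy.stats import spearmanr

def bonferroni_spearman(predictor, outcomes):
    """Return {name: (rho, adjusted_p)} for each outcome measure."""
    m = len(outcomes)  # number of tests for the Bonferroni correction
    results = {}
    for name, values in outcomes.items():
        rho, p = spearmanr(predictor, values)
        results[name] = (rho, min(p * m, 1.0))
    return results

# Hypothetical scores for a handful of infants (not the study's data)
imitation_score = [0.1, 0.4, 0.2, 0.8, 0.5, 0.3]
later_outcomes = {
    "joint_attention":  [3, 5, 4, 2, 5, 4],
    "self_recognition": [0, 1, 1, 0, 1, 0],
}
for name, (rho, p_adj) in bonferroni_spearman(imitation_score, later_outcomes).items():
    print(f"{name}: rho={rho:.3f}, Bonferroni-adjusted p={p_adj:.3f}")
```

A corrected p value near 1.0, as in the abstract's "all Bonferroni adjusted p values > .999", indicates no evidence of association after accounting for the number of tests.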
154.
The success of human culture depends on early emerging mechanisms of social learning, which include the ability to acquire opaque cultural knowledge through faithful imitation, as well as the ability to advance culture through flexible discovery of new means to goal attainment. This study explores whether this mixture of faithful imitation and goal emulation is based in part on individual differences which emerge early in ontogeny. Experimental measurements and parental reports were collected for a group of 2‐year‐old children (N = 48, age = 23–32 months) on their imitative behavior as well as other aspects of cognitive and social development. Results revealed individual differences in children's imitative behavior across trials and tasks which were best characterized by a model that included two behavioral routines; one corresponding to faithful imitation, and one to goal emulation. Moreover, individual differences in faithful imitation and goal emulation were correlated with individual differences in theory of mind, prosocial behavior, and temperament. These findings were discussed in terms of their implications for understanding the mechanisms of social learning, ontogeny of cumulative culture, and the benefit of analyzing individual differences for developmental experiments.
155.
Open-ended text questions provide a better assessment of a learner's knowledge, but analysing answers to such questions, checking their correctness, and generating detailed formative feedback about errors are more difficult and complex tasks than for closed-ended questions such as multiple choice.

The analysis of answers to open-ended questions can be performed on different levels. Character-level analysis finds errors in character placement inside a word or token; it is typically used to detect and correct typos, allowing typos to be distinguished from actual errors in the learner's answer. Word-level or token-level analysis finds misplaced, extraneous, or missing words in a sentence. Semantic-level analysis formally captures the meaning of the learner's answer and compares it with the meaning of the correct answer, which can be provided in a natural or a formal language. Some systems and approaches combine analysis on several levels.

The variability of answers to open-ended questions significantly increases the complexity of error search and formative feedback generation. Different types of patterns, including regular expressions, and their use in questions with patterned answers are discussed, as are the types of formative feedback and the capabilities of modern approaches to generate feedback on different levels.

Statistical approaches and loosely defined template rules are prone to false-positive grading; they generally lower the workload of creating questions but provide limited feedback. Approaches based on strictly defined sets of correct answers perform better at providing hinting and answer-until-correct feedback. They entail a higher question-creation workload, because the teacher must account for every possible correct answer, and they detect fewer types of errors.

The optimal choice for creating automated e-learning courses is template-based open-ended question systems such as OntoPeFeGe, Preg, METEOR, and CorrectWriting, which allow answer-until-correct feedback and are able to find and report various types of errors. This approach requires more time to create questions, but less time to manage the learning process once the courses are run.
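The character-level and pattern-based analyses described above can be illustrated with a minimal sketch. This is not the implementation of any of the named systems; the similarity threshold and the regex pattern are illustrative assumptions.

```python
# Sketch of two answer-analysis levels: character-level comparison to
# separate likely typos from real errors, and a regular-expression check
# for questions with patterned answers.
import re
from difflib import SequenceMatcher

def is_probable_typo(answer_token, expected_token, max_distance=0.34):
    """Treat a small character-level difference as a typo, a large one as an error."""
    if answer_token == expected_token:
        return False  # identical tokens: nothing to flag
    # 1 - similarity ratio serves as a normalized character-level distance
    distance = 1 - SequenceMatcher(None, answer_token, expected_token).ratio()
    return distance <= max_distance

def matches_pattern(answer, pattern):
    """Check a free-form answer against a teacher-supplied regex pattern."""
    return re.fullmatch(pattern, answer.strip(), re.IGNORECASE) is not None

print(is_probable_typo("recieve", "receive"))   # small slip: likely a typo
print(is_probable_typo("reject", "receive"))    # different word: a real error
print(matches_pattern("The Answer is 42", r"the answer is \d+"))
```

Distinguishing typos from errors this way lets a system correct the former silently and generate formative feedback only for the latter, which is exactly the benefit the character level provides over grading on exact string equality.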
156.
157.
Duration perception refers to the perception of intervals ranging from several hundred milliseconds to several hours and underlies many everyday activities. It is influenced by a considerable number of factors, such as arousal, attention, and motivation. Pain is a multidimensional psychological and physiological phenomenon comprising sensory-discriminative, emotional-motivational, and cognitive-evaluative components. Recent research has shown that duration perception changes in the context of pain. Research on duration perception in pain contexts has mainly addressed three areas: (1) healthy participants' duration perception of painful facial expressions; (2) the effect of laboratory-induced pain on healthy participants' duration perception; and (3) changes in duration perception in clinical pain patients. Exploring how duration perception changes in the context of pain offers a new perspective for understanding the mechanisms underlying both pain and time perception.
158.
Drawing on the framing effect and the empathy–helping behavior hypothesis, this experiment with 82 undergraduate participants examined how a help-seeker's facial expression in online charitable fundraising affects willingness to donate, along with the moderating role of goal framing and the mediating role of empathy. Results showed that goal framing moderated the effect of the help-seeker's facial expression on donors' empathy and willingness to donate. Under a positive frame, a negative facial expression had a significantly stronger positive effect on donors' empathy and willingness to donate than a positive facial expression did; under a negative frame, the two expressions did not differ significantly in their effects. The study also supported a mediated moderation model: empathy mediated the effect of the help-seeker's facial expression on willingness to donate, with goal framing acting as a mediated moderator.
159.
Online charitable crowdfunding is the practice of obtaining financial donations from an online community. Across three experiments, this study examined how beneficiaries' facial expressions and the donor–beneficiary relationship on online social platforms affect donation behavior in charitable crowdfunding. Results showed that happy facial expressions generally had a larger effect on donation amount and willingness to share than sad ones; a shared acquaintance relationship between donor and beneficiary had a larger effect on donation amount and willingness to share than a stranger relationship; and facial expression and donor–beneficiary relationship interacted in their effect on donation amount, but not significantly on willingness to share. These results suggest that donation behavior in charitable crowdfunding on online social platforms favors beneficiaries with happy facial expressions and is more strongly influenced by indirect acquaintance ties that carry weak expectations of social exchange.
160.
Sonnby-Borgström, M. 《Scandinavian journal of psychology》2002,43(5):433-443
The hypotheses of this investigation were derived by conceiving of automatic mimicking as a component of emotional empathy. Differences between subjects high and low in emotional empathy were investigated. The parameters compared were facial mimicry reactions, as represented by electromyographic (EMG) activity when subjects were exposed to pictures of angry or happy faces, and the degree of correspondence between subjects' facial EMG reactions and their self-reported feelings. The comparisons were made at different stimulus exposure times in order to elicit reactions at different levels of information processing. The high-empathy subjects were found to have a higher degree of mimicking behavior than the low-empathy subjects, a difference that emerged at short exposure times (17-40 ms) representing automatic reactions. The low-empathy subjects tended, already at these short exposure times, to show inverse zygomaticus muscle reactions, namely "smiling" when exposed to an angry face. The high-empathy group was characterized by a significantly higher correspondence between facial expressions and self-reported feelings. No differences were found between the high- and low-empathy subjects in their verbally reported feelings when presented with a happy or an angry face. Thus, the differences between the groups in emotional empathy appeared to be related to differences in automatic somatic reactions to facial stimuli rather than to differences in their conscious interpretation of the emotional situation.