  Paid full text: 551 articles
  Free: 36 articles
  Free (domestic): 73 articles
  2023: 10 articles
  2022: 12 articles
  2021: 25 articles
  2020: 37 articles
  2019: 50 articles
  2018: 49 articles
  2017: 44 articles
  2016: 25 articles
  2015: 31 articles
  2014: 23 articles
  2013: 105 articles
  2012: 18 articles
  2011: 21 articles
  2010: 16 articles
  2009: 32 articles
  2008: 25 articles
  2007: 25 articles
  2006: 20 articles
  2005: 17 articles
  2004: 7 articles
  2003: 9 articles
  2002: 9 articles
  2001: 4 articles
  2000: 5 articles
  1999: 1 article
  1998: 3 articles
  1997: 7 articles
  1996: 3 articles
  1995: 3 articles
  1994: 3 articles
  1993: 2 articles
  1992: 2 articles
  1991: 3 articles
  1990: 3 articles
  1987: 1 article
  1986: 2 articles
  1985: 3 articles
  1984: 1 article
  1983: 3 articles
  1977: 1 article
A total of 660 search results.
141.
142.
Self-reported experiences are often poor indicators of outward expressions. Here we examine social power as a variable that may impact the relationship between self-reported affect and facial expressions. Earlier studies addressing this issue were limited by focusing on a single facial expression (smiling) and by using different, less sensitive methods that yielded mostly null results. Sampling, for the first time, self-reported affect repeatedly in response to different negative, neutral and positive stimuli, and measuring concurrent facial muscle activation via electromyography, we found that high power (vs. baseline) increased the correspondence between self-reported positive affect and smiling. There was also an indication that high power (vs. baseline) bolstered the association between self-reported negative affect and frowning, but the effect did not pass the more stringent criterion for significance (p ≤ .005) and was therefore deemed inconclusive. The prediction that low power (vs. baseline) decreases the correspondence between self-reported affect and smiling and frowning facial expressions was not supported. Taken together, it appears that (high) power can impact the relationship between self-reported affect and facial expressions, but it remains to be seen whether this effect extends beyond smiling facial expressions.
143.
Studies examining visual abilities in individuals with early auditory deprivation have reached mixed conclusions: some find that congenital auditory deprivation and/or lifelong use of a visuospatial language improves specific visual skills, while others fail to find substantial differences. A more consistent finding is enhanced peripheral vision and an increased ability to distribute attention efficiently to the visual periphery following auditory deprivation. However, the extent to which this applies to visual skills in general, or to certain conspicuous stimuli such as faces in particular, is unknown. We examined the perceptual resolution of peripheral vision in the deaf, testing various facial attributes typically associated with high-resolution foveal processing. We compared performance in face-identification tasks with performance on non-face control stimuli. Although we found no enhanced perceptual representations in face identification, gender categorization, or eye-gaze-direction recognition tasks, fearful expressions showed greater resilience than happy or neutral ones to increasing eccentricity. In the absence of an alerting sound, the visual system of auditory-deprived individuals may develop greater sensitivity to specific conspicuous stimuli as a compensatory mechanism. The results also suggest neural reorganization in the deaf, reflected in an opposite (right-visual-field) advantage in face identification tasks.
144.
Body movements, as well as faces, communicate emotions. Research in adults has shown that the perception of action kinematics plays a crucial role in understanding others' emotional experiences. Still, little is known about infants' sensitivity to bodily emotional expressions, since most research in infancy has focused on faces. While there is initial evidence that infants can recognize emotions conveyed in whole-body postures, it remains an open question whether they can extract emotional information from action kinematics. We measured electromyographic (EMG) activity over the muscles involved in happy (zygomaticus major, ZM), angry (corrugator supercilii, CS) and fearful (frontalis, F) facial expressions while 11-month-old infants observed the same action performed with either happy or angry kinematics. Results demonstrate that infants responded to angry and happy kinematics with matching facial reactions: ZM activity increased and CS activity decreased in response to happy kinematics, and vice versa for angry kinematics. Our results show for the first time that infants can rely on kinematic information to pick up on the emotional content of an action. Thus, from very early in life, action kinematics represent a fundamental and powerful source of information for revealing others' emotional states.
145.
Open-ended text questions provide a better assessment of a learner's knowledge, but analysing answers to such questions, checking their correctness, and generating detailed formative feedback about errors for the learner are more difficult and complex tasks than for closed-ended questions such as multiple choice.

The analysis of answers to open-ended questions can be performed at different levels. Character-level analysis finds errors in the placement of characters inside a word or token; it is typically used to detect and correct typos, distinguishing typos from actual errors in the learner's answer. Word-level (token-level) analysis finds misplaced, extraneous, or missing words in the sentence. Semantic-level analysis formally captures the meaning of the learner's answer and compares it with the meaning of the correct answer, which can be provided in a natural or formal language. Some systems and approaches combine analysis at several levels.

The variability of answers to open-ended questions significantly increases the complexity of error search and formative feedback generation. Different types of patterns, including regular expressions, and their use in questions with patterned answers are discussed (see the sketch after this abstract), as are the types of formative feedback and the capabilities of modern approaches to generate feedback at different levels.

Statistical approaches and loosely defined template rules are prone to false-positive grading. They generally lower the workload of creating questions but provide limited feedback. Approaches based on strictly defined sets of correct answers perform better at providing hinting and answer-until-correct feedback. They are characterised by a higher question-creation workload, because the teacher must account for every possible correct answer, and by fewer types of detected errors.

The optimal choice for creating automated e-learning courses is template-based open-ended question systems such as OntoPeFeGe, Preg, METEOR, and CorrectWriting, which allow answer-until-correct feedback and are able to find and report various types of errors. This approach requires more time to create questions but less time to manage the learning process once the courses are running.
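A minimal, hypothetical sketch (not code from any of the systems named above) of two of the analysis levels described in the abstract: a character-level check that treats a near-match as a typo rather than an actual error, and a token-level check of a patterned answer against a teacher-supplied regular expression. The question, pattern, answers, and similarity threshold are illustrative assumptions.

```python
# Sketch of character-level typo detection (edit-distance ratio) and
# token-level checking of a patterned answer (regular expression).
import re
from difflib import SequenceMatcher


def is_probable_typo(token: str, expected: str, threshold: float = 0.8) -> bool:
    """Character-level check: treat a near-match as a typo rather than an error."""
    return SequenceMatcher(None, token.lower(), expected.lower()).ratio() >= threshold


def check_patterned_answer(answer: str, pattern: str) -> bool:
    """Token-level check: does the whole answer match the teacher-supplied pattern?"""
    return re.fullmatch(pattern, answer.strip(), flags=re.IGNORECASE) is not None


if __name__ == "__main__":
    # Hypothetical question: "Which SQL keyword removes duplicate rows?"
    correct = "distinct"
    print(is_probable_typo("distnict", correct))   # True  -> report as a typo
    print(is_probable_typo("unique", correct))     # False -> report as an actual error

    # Hypothetical patterned answer: "SELECT DISTINCT <column> FROM <table>"
    pattern = r"select\s+distinct\s+\w+\s+from\s+\w+"
    print(check_patterned_answer("SELECT DISTINCT name FROM users", pattern))  # True
    print(check_patterned_answer("SELECT name FROM users", pattern))           # False
```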
146.
Because of its threatening nature, fear is prioritized for efficient processing. Spatial frequency, a basic component of face-information processing, influences the processing of fearful facial expressions through different neural pathways. The dual-route view holds that low-spatial-frequency fearful expressions are transmitted preferentially along the subcortical route, whereas high spatial frequencies support fine-grained processing of fearful expressions mainly through the cortical route; a multiple-route view, by contrast, accommodates the influence of spatial frequency on emotion processing more flexibly. Future research should clarify the roles of specific brain regions and their subregions across these routes, so as to further verify how visual information affects emotion processing.
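A minimal sketch (not from the studies reviewed above) of the stimulus manipulation the dual-route account presupposes: splitting a face image into its low- and high-spatial-frequency components, with the low-pass part obtained by Gaussian blurring and the high-pass part taken as the residual. The image array and the Gaussian sigma are illustrative assumptions.

```python
# Decompose a (stand-in) grayscale face image into low- and high-spatial-
# frequency components, as is commonly done when contrasting subcortical
# (coarse/LSF) and cortical (fine/HSF) processing of fearful faces.
import numpy as np
from scipy.ndimage import gaussian_filter

rng = np.random.default_rng(0)
face = rng.random((128, 128))            # stand-in for a grayscale face image

low_sf = gaussian_filter(face, sigma=8)  # low-pass: coarse configuration
high_sf = face - low_sf                  # high-pass residual: fine detail

print(low_sf.shape, high_sf.shape)
print(f"variance retained in the LSF component: {low_sf.var() / face.var():.2f}")
```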
147.
Duration perception refers to the perception of intervals ranging from several hundred milliseconds to several hours and underlies many activities of daily life. It is influenced by a considerable number of factors, such as arousal, attention, and motivation. Pain is a multidimensional psychological and physiological phenomenon comprising three components: sensory discrimination, emotional motivation, and cognitive evaluation. Recent research has shown that duration perception changes in the context of pain. Studies of duration perception under pain mainly concern three areas: (1) healthy participants' duration perception of painful faces; (2) the effect of laboratory-induced pain on healthy participants' duration perception; and (3) changes in duration perception in patients with clinical pain. Exploring changes in duration perception under pain offers a new perspective for understanding the mechanisms of pain onset and development as well as the mechanisms of time perception.
148.
Drawing on the framing effect and the empathy–helping hypothesis, an experiment with 82 undergraduates examined how the facial expression of the person seeking help in online charitable fundraising affects willingness to donate, together with the moderating role of goal framing and the mediating role of empathy. The results showed that goal framing moderated the effect of the help-seeker's facial expression on donors' empathy and willingness to donate. Under a positive frame, a negative facial expression had a significantly stronger positive effect on donors' empathy and willingness to donate than a positive expression; under a negative frame, the two expressions did not differ significantly in their effects. The study also supported a mediated-moderation model: empathy mediated the effect of the help-seeker's facial expression on willingness to donate, with goal framing acting as the mediated moderator.
149.
Online charitable crowdfunding is the practice of obtaining financial donations from an online community. Across three experiments, this study examined how beneficiaries' facial expressions and the donor–beneficiary relationship on online social platforms affect donation behaviour in charitable crowdfunding. The results showed that happy facial expressions had an overall stronger effect on donation amount and willingness to share than sad ones; a shared acquaintance relationship between donor and beneficiary had a stronger effect on donation amount and willingness to share than a stranger relationship; and facial expression interacted with the donor–beneficiary relationship in its effect on donation amount, but not significantly on willingness to share. These findings suggest that donation behaviour in charitable crowdfunding on online social platforms favours beneficiaries with happy facial expressions and is more strongly influenced by indirect, shared acquaintance relationships carrying weak expectations of social exchange.
150.
Barenholtz E, Feldman J. Cognition, 2006, 101(3): 530-544
Figure/ground assignment - determining which part of the visual image is foreground and which is background - is a critical step in early visual analysis, upon which much later processing depends. Previous research on the assignment of figure and ground to opposing sides of a contour has almost exclusively involved static geometric factors, such as convexity, symmetry, and size, in non-moving images. Here, we introduce a new class of cue to figural assignment based on the motion of dynamically deforming contours. Subjects viewing an animated, deforming shape tended to assign figure and ground so that articulating curvature extrema - i.e., "hinging" vertices - had negative (concave) contour curvature. This articulating-concavity bias is present when all known static cues to figure/ground are absent or neutral in each individual frame of the animation, and it even seems to override a number of well-known static cues when they are placed in opposition to the motion cue. We propose that the phenomenon reflects the visual system's inbuilt expectations about the way shapes deform - specifically, that deformations tend to involve rigid parts articulating at concavities.
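A minimal geometric sketch (not from the paper) of the concavity criterion mentioned above: when a closed contour is traversed counter-clockwise with the figure on the left, the sign of the cross product of the incoming and outgoing edges at a vertex indicates whether that vertex is convex or concave with respect to the figure. The example polygon and the traversal convention are illustrative assumptions.

```python
# Classify polygon vertices as convex or concave relative to the figure,
# assuming a counter-clockwise traversal with the figure on the left.
from typing import List, Tuple

Point = Tuple[float, float]


def vertex_is_concave(prev: Point, vertex: Point, nxt: Point) -> bool:
    """Return True if the vertex is a concavity (negative curvature) of the figure.

    With a counter-clockwise traversal, a positive z-component of the cross
    product of the incoming and outgoing edges marks a convex vertex; a
    negative one marks a concave (reflex) vertex.
    """
    ax, ay = vertex[0] - prev[0], vertex[1] - prev[1]
    bx, by = nxt[0] - vertex[0], nxt[1] - vertex[1]
    cross_z = ax * by - ay * bx
    return cross_z < 0


if __name__ == "__main__":
    # A square with a notch cut into its top edge: only the two inner
    # notch vertices, (2.5, 3) and (1.5, 3), are concave.
    shape: List[Point] = [(0, 0), (4, 0), (4, 4), (2.5, 4),
                          (2.5, 3), (1.5, 3), (1.5, 4), (0, 4)]
    n = len(shape)
    for i, v in enumerate(shape):
        concave = vertex_is_concave(shape[i - 1], v, shape[(i + 1) % n])
        print(v, "concave" if concave else "convex")
```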