81.
ABSTRACT

Studies examining visual abilities in individuals with early auditory deprivation have reached mixed conclusions, with some finding that congenital auditory deprivation and/or lifelong use of a visuospatial language improves specific visual skills and others failing to find substantial differences. A more consistent finding is enhanced peripheral vision and an increased ability to distribute attention efficiently to the visual periphery following auditory deprivation. However, the extent to which this applies to visual skills in general, or to certain conspicuous stimuli such as faces in particular, is unknown. We examined the perceptual resolution of peripheral vision in the deaf, testing various facial attributes typically associated with high-resolution foveal processing. We compared performance in face-identification tasks with performance on non-face control stimuli. Although we found no enhanced perceptual representations in face identification, gender categorization, or eye-gaze direction recognition tasks, fearful expressions were more resilient than happy or neutral ones to increasing eccentricity. In the absence of an alerting sound, the visual system of auditorily deprived individuals may develop greater sensitivity to specific conspicuous stimuli as a compensatory mechanism. The results also point to neural reorganization in the deaf, reflected in a reversed right-visual-field advantage in face-identification tasks.
82.
Fearful emotion is prioritized for efficient processing because of its threatening nature. Spatial frequency, a basic component of facial information processing, influences the processing of fearful facial expressions via different neural pathways. The dual-route view holds that low-spatial-frequency fearful expressions are transmitted preferentially along the subcortical route, whereas high spatial frequencies support fine-grained processing of fearful expressions mainly via the cortical route; multi-route accounts, by contrast, can handle the influence of spatial frequency on emotion processing more flexibly. Future research should clarify the roles that individual brain regions and their subregions play across these multiple routes, in order to further test how visual information shapes emotional processing.
83.
Duration perception refers to the perception of durations ranging from several hundred milliseconds to several hours and underlies many activities of daily life. It is influenced by a considerable number of factors, such as arousal, attention, and motivation. Pain is a multidimensional psychological and physiological phenomenon comprising sensory-discriminative, affective-motivational, and cognitive-evaluative components. Recent studies have shown that duration perception changes in the context of pain. Research on duration perception under pain has mainly addressed three questions: (1) healthy participants' duration perception of painful faces; (2) the effect of experimentally induced pain on healthy participants' duration perception; and (3) changes in duration perception in patients with clinical pain. Exploring how duration perception changes under pain offers a new perspective on the mechanisms underlying the onset and development of pain as well as on the mechanisms of time perception.
84.
Drawing on the framing effect and the empathy-helping hypothesis, this experiment with 82 undergraduate participants examined how a help-seeker's facial expression in online charitable fundraising affects willingness to donate, and tested the moderating role of goal framing and the mediating role of empathy. The results showed that goal framing moderated the effect of the help-seeker's facial expression on donors' empathy and willingness to donate. Under a positive frame, a negative facial expression had a significantly stronger positive effect on donors' empathy and willingness to donate than a positive facial expression did; under a negative frame, the two facial expressions did not differ significantly in their effects on empathy or willingness to donate. The study also supported a mediated moderation model: empathy mediated the effect of the help-seeker's facial expression on willingness to donate, with goal framing serving as the mediated moderator.
85.
Online charitable crowdfunding is the practice of soliciting financial donations from an online community. In three experiments, this study examined how the beneficiary's facial expression and the donor-beneficiary relationship on online social platforms affect donation behavior in charitable crowdfunding. The results showed that happy facial expressions had, on the whole, a larger effect on donation amount and willingness to share than sad ones; a shared acquaintance relationship between donor and beneficiary had a larger effect on donation amount and willingness to share than a stranger relationship; and facial expression and the donor-beneficiary relationship interacted in their effect on donation amount, but not significantly in their effect on willingness to share. These findings suggest that donation behavior in online charitable crowdfunding favors beneficiaries displaying happy facial expressions and is more strongly influenced by indirect acquaintance ties that carry only weak expectations of social exchange.
86.
The hypotheses of this investigation were derived by conceiving of automatic mimicking as a component of emotional empathy. Differences between subjects high and low in emotional empathy were investigated. The parameters compared were facial mimicry reactions, as represented by electromyographic (EMG) activity when subjects were exposed to pictures of angry or happy faces, and the degree of correspondence between subjects' facial EMG reactions and their self-reported feelings. The comparisons were made at different stimulus exposure times in order to elicit reactions at different levels of information processing. The high-empathy subjects showed a higher degree of mimicking behavior than the low-empathy subjects, a difference that emerged at short exposure times (17-40 ms) representing automatic reactions. Already at short exposure times (17-40 ms), the low-empathy subjects tended to show inverse zygomaticus muscle reactions, that is, "smiling" when exposed to an angry face. The high-empathy group was characterized by a significantly higher correspondence between facial expressions and self-reported feelings. No differences were found between the high- and low-empathy subjects in their verbally reported feelings when presented with a happy or an angry face. Thus, the differences between the groups in emotional empathy appeared to be related to differences in automatic somatic reactions to facial stimuli rather than to differences in their conscious interpretation of the emotional situation.
87.
Facial pain is frequently associated with environmental stress and emotional distress. One hypothetical mechanism by which stress is translated into pain is stress-induced motor activity (e.g., teeth clenching, grinding, nail biting). Existing data partially support these stress-hyperactivity models, although they have also come under theoretical and empirical attack. The purpose of this study was to examine the relationship between oral behaviors and pain in an analog sample of facial pain sufferers and student controls. Subjects engaged in a controlled clenching task and reported subjective facial pain intensity and unpleasantness at 5 specified times over the subsequent 48 hours. A one-way ANCOVA indicated group differences in self-reported oral habits (p < .05), with the facial pain group reporting a greater frequency of oral habits. Two repeated-measures ANCOVAs (i.e., pain intensity and pain unpleasantness), controlling for baseline pain ratings, indicated a between-groups effect, with facial pain sufferers experiencing significantly greater pain over the 48 hours post-experiment (p < .05). This study supports a hyperactivity model of facial pain and provides clues about relevant factors in facial pain development.
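As a rough illustration of the analyses reported above, the sketch below runs a one-way ANCOVA on oral-habit frequency and a repeated-measures comparison of the post-task pain ratings with baseline pain as a covariate. This is a minimal sketch only: the data file, the column names (subject, group, time, habit_freq, baseline_pain, pain_intensity), and the use of a mixed model in place of the original repeated-measures ANCOVA are assumptions, not details taken from the study.

```python
# Minimal sketch of the two analyses described in the abstract above.
# Data layout and column names are hypothetical.
import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols, mixedlm

# df: one row per subject per rating time point, with columns
#   subject, group ('facial_pain' / 'control'), time (1..5),
#   habit_freq, baseline_pain, pain_intensity
df = pd.read_csv("clench_task_ratings.csv")  # hypothetical file

# One-way ANCOVA: group effect on self-reported oral-habit frequency
# (one row per subject), adjusting for baseline pain.
subjects = df.drop_duplicates("subject")
ancova = ols("habit_freq ~ C(group) + baseline_pain", data=subjects).fit()
print(sm.stats.anova_lm(ancova, typ=2))

# Repeated-measures analysis of pain intensity across the five post-task
# ratings, controlling for baseline pain; a mixed model with a random
# subject intercept stands in for the repeated-measures ANCOVA.
rm_model = mixedlm("pain_intensity ~ C(group) * C(time) + baseline_pain",
                   data=df, groups=df["subject"]).fit()
print(rm_model.summary())
```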
88.
The aim was to explore whether people high, as opposed to low, in speech anxiety react with a more pronounced differential facial response when exposed to angry and happy facial stimuli. High- and low-fear participants were selected based on their scores on a fear-of-public-speaking questionnaire. All participants were exposed to pictures of angry and happy faces while facial electromyographic (EMG) activity was recorded from the Corrugator supercilii and Zygomaticus major muscle regions. Skin conductance responses (SCR), heart rate (HR), and ratings were also collected. Participants high in speech anxiety displayed larger differential corrugator responding between angry and happy faces than participants low in speech anxiety, indicating a larger negative emotional reaction. They also showed larger differential zygomatic responding between happy and angry faces, indicating a larger positive emotional reaction. Consistent with these facial reaction patterns, the high-fear group rated angry faces as more unpleasant and as expressing more disgust, and rated happy faces as more pleasant. There were no differences in SCR or HR responding between the high and low speech-anxiety groups. The present results support the hypothesis that people high in speech anxiety are disposed to show an exaggerated sensitivity and facial responsiveness to social stimuli.
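To make the differential-responding measure concrete, here is a minimal sketch of how per-participant difference scores for the two muscle regions might be computed and compared between groups. The DataFrame layout, column names, and the plain independent-samples t-test are illustrative assumptions; the abstract does not specify the exact analysis used.

```python
# Hypothetical sketch: differential EMG scores by speech-anxiety group.
import pandas as pd
from scipy import stats

# emg: one row per participant with mean EMG change scores per muscle
# and face type, plus a 'group' label ('high' or 'low' speech anxiety).
emg = pd.read_csv("emg_means.csv")  # hypothetical file

# Differential responding: corrugator (angry - happy) indexes negative
# reactions; zygomaticus (happy - angry) indexes positive reactions.
emg["corr_diff"] = emg["corrugator_angry"] - emg["corrugator_happy"]
emg["zygo_diff"] = emg["zygomaticus_happy"] - emg["zygomaticus_angry"]

high = emg[emg["group"] == "high"]
low = emg[emg["group"] == "low"]

for score in ["corr_diff", "zygo_diff"]:
    t, p = stats.ttest_ind(high[score], low[score])
    print(f"{score}: t = {t:.2f}, p = {p:.3f}")
```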
89.
Studies using facial emotional expressions as stimuli partially support the assumption of biased processing of social signals in social phobia. This pilot study explored for the first time whether individuals with social phobia display a processing bias towards emotional prosody. Fifteen individuals with generalized social phobia and fifteen healthy controls (HC) matched for gender, age, and education completed a recognition test consisting of meaningless utterances spoken in a neutral, angry, sad, fearful, disgusted, or happy tone of voice. Participants also evaluated the stimuli with regard to valence and arousal. While these ratings did not differ significantly between groups, analysis of the recognition test revealed enhanced identification of sad and fearful voices and decreased identification of happy voices in individuals with social phobia compared with HC. The two groups did not differ in their processing of neutral, disgusted, and angry prosody.
90.
Sensitivity to second-order relational information (i.e., spatial relations among features, such as the distance between the eyes) is a vital part of achieving expertise in face processing. Prior research is unclear on whether infants are sensitive to the second-order differences seen in typical human populations. In the current experiments, we examined whether infants are sensitive to changes in the spacing between the eyes and between the nose and the mouth that fall within the normal range of variability in Caucasian female faces. In Experiment 1, 7-month-olds detected these changes in second-order relational information. Experiment 2 extended this finding to 5-month-olds and also found that infants detect second-order relations in upright faces but not in inverted faces, thereby exhibiting an inversion effect that is considered a hallmark of second-order relational processing in adulthood. These results suggest that infants as young as 5 months are sensitive to second-order relational changes within the normal range of human variability. They also indicate that at least rudimentary aspects of face-processing expertise are available early in life.