251.
When are current generations held accountable for transgressions committed by previous generations? In two studies, we test the prediction that current generations will only be assigned guilt for past atrocities when victim group members perceive high levels of cultural continuity between historical perpetrators and the current generation within the perpetrator group. Japanese participants were presented with information describing the current generation of Americans as either similar or dissimilar in personality to the Americans who were implicated in dropping the atomic bomb on Japan during World War II. The results of both studies revealed that victim group members assigned more guilt to current Americans when they perceived high (compared to low) outgroup continuity, and they did so relatively independently of the transgressor group's guilt expressions.
252.
《Brain and cognition》2014,84(3):252-261
Most clinical research assumes that modulation of facial expressions is lateralized predominantly across the right-left hemiface. However, social psychological research suggests that facial expressions are organized predominantly across the upper-lower face. Because humans learn to cognitively control facial expression for social purposes, the lower face may display a false emotion, typically a smile, to enable approach behavior. In contrast, the upper face may leak a person’s true feeling state by producing a brief facial blend of emotion, i.e. a different emotion on the upper versus lower face. Previous studies from our laboratory have shown that upper facial emotions are processed preferentially by the right hemisphere under conditions of directed attention if facial blends of emotion are presented tachistoscopically to the mid left and right visual fields. This paper explores how facial blends are processed within the four visual quadrants. The results, combined with our previous research, demonstrate that lower more so than upper facial emotions are perceived best when presented to the viewer’s left and right visual fields just above the horizontal axis. Upper facial emotions are perceived best when presented to the viewer’s left visual field just above the horizontal axis under conditions of directed attention. Thus, by gazing at a person’s left ear, which also avoids the social stigma of eye-to-eye contact, one’s ability to decode facial expressions should be enhanced.
253.
The embodied-enactive approach is driving rapid progress in contemporary cognitive science, yet the field has fallen into a "Warring States period" over what "embodied-enactive" actually means, which seriously undermines the approach's theoretical force. An analysis of evidence from related disciplines shows that "embodiment" and "enaction" are related yet distinct, and that the two are mutually supporting. Embodied cognition chiefly examines how the structure, activity, content, and form of a body embedded in social context influence cognitive activity. "Enaction" emphasizes the dynamic mechanisms by which cognitive structure emerges from the structural coupling of brain, body, and environment, especially the role of the perception-action loop. In the future, an embodied-enactive cognitive science can leave this "Warring States period" only by meeting the challenges posed by classical cognitive science.
254.
Whether different emotions correspond to distinct physiological responses has long been debated. Using their self-developed emBODY tool, Nummenmaa et al. (2014) found that each emotion has a unique bodily sensation map (BSM). The present study had Chinese university students use emBODY to draw BSMs for four emotions (happiness, love, fear, anxiety) and then verbally report the bodily sensations the maps reflected. The four emotions produced distinct bodily sensations, evident both in differences between the BSMs and in differences between the verbal reports. Qualitative analysis found that the sensations captured by BSMs include not only physiological responses but also cognitions, feelings, and action tendencies, providing new evidence on the relationship between emotion and the body. Potential limitations of BSMs as a research tool include participants' inconsistent understanding of increased versus decreased activity in a body part and the presentation of only one side of the body; future research should address these.
255.
To test the role of contextual information in the processing and recognition of facial expressions, two experiments examined how the emotional valence and self-relevance of context affect the processing of neutral faces and of fearful faces of varying intensity. Neutral faces were rated higher in valence in positive contexts and higher in arousal in self-relevant contexts, and fearful expressions were detected more readily in negative contexts. Thus, the context effect in facial expression processing consists of inducing and amplifying emotion in neutral faces, and of facilitating judgments of faces of varying emotional intensity when context and expression are congruent.
256.
Expression recognition and behavioural problems in early adolescence
The processing of emotional expressions is fundamental for normal socialisation and social interaction. Fifty-five children (aged 11–14 years) in mainstream education participated in this study. They were presented with a standardised set of pictures of facial expressions and asked to name one of the six emotions illustrated (sadness, happiness, anger, disgust, fear, and surprise). Following experimental testing, their behaviour was rated by two independent teachers on the Psychopathy Screening Device (PSD). The PSD assesses two dimensions of behavioural problems: affective-interpersonal disturbance and impulsive behaviour/conduct problems. The results showed that the ability to recognise sad and fearful expressions (but not happy, angry, disgusted, or surprised expressions) was inversely related to both level of affective-interpersonal disturbance and impulsive/conduct problems. These results are interpreted with reference to current models of empathy and its disorders.
257.
Facial stimuli are widely used in behavioural and brain science research to investigate emotional facial processing. However, some studies have demonstrated that dynamic expressions elicit stronger emotional responses compared to static images. To address the need for more ecologically valid and powerful facial emotional stimuli, we created Dynamic FACES, a database of morphed videos (n = 1026) from younger, middle-aged, and older adults displaying naturalistic emotional facial expressions (neutrality, sadness, disgust, fear, anger, happiness). To assess adult age differences in emotion identification of dynamic stimuli and to provide normative ratings for this modified set of stimuli, healthy adults (n = 1822, age range 18–86 years) categorised for each video the emotional expression displayed, rated the expression distinctiveness, estimated the age of the face model, and rated the naturalness of the expression. We found few age differences in emotion identification when using dynamic stimuli. Only for angry faces did older adults show lower levels of identification accuracy than younger adults. Further, older adults outperformed middle-aged adults in identification of sadness. The use of dynamic facial emotional stimuli has previously been limited, but Dynamic FACES provides a large database of high-resolution naturalistic, dynamic expressions across adulthood. Information on using Dynamic FACES for research purposes can be found at http://faces.mpib-berlin.mpg.de.
258.
Children are often surrounded by other humans and companion animals (e.g., dogs, cats), and understanding facial expressions in all these social partners may be critical to successful social interactions. In an eye-tracking study, we examined how children (4–10 years old) view and label facial expressions in adult humans and dogs. We found that children looked more at dogs than humans, and more at negative than positive or neutral human expressions. Their viewing patterns (Proportion of Viewing Time, PVT) at individual facial regions were also modified by the viewed species and emotion, with the eyes not always being most viewed: this related to positive anticipation when viewing humans, whilst when viewing dogs, the mouth was viewed more than or equally to the eyes for all emotions. We further found that children's labelling (Emotion Categorisation Accuracy, ECA) was better for the perceived valence than for emotion category, with positive human expressions easier than both positive and negative dog expressions. They performed poorly when asked to freely label facial expressions, but performed better for human than dog expressions. Finally, we found some effects of age, sex, and other factors (e.g., experience with dogs) on both PVT and ECA. Our study shows that children have a different gaze pattern and identification accuracy compared to adults when viewing faces of human adults and dogs. We suggest that for recognising human (own-face-type) expressions, familiarity obtained through casual social interactions may be sufficient; but for recognising dog (other-face-type) expressions, explicit training may be required to develop competence.

Highlights

  • We conducted an eye-tracking experiment to investigate how children view and categorise facial expressions in adult humans and dogs
  • Children's viewing patterns were significantly dependent upon the facial region, species, and emotion viewed
  • Children's categorisation also varied with the species and emotion viewed, with better performance for valence than emotion categories
  • Own-face-types (adult humans) are easier than other-face-types (dogs) for children, and casual familiarity (e.g., through family dogs) to the latter is not enough to achieve perceptual competence
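The viewing metric named above, Proportion of Viewing Time (PVT), is simply each facial region's share of total fixation time in a trial. The following is an illustrative sketch only, not the study's analysis code; the region names and durations are hypothetical examples.

```python
def proportion_of_viewing_time(fixations):
    """Compute PVT per facial region from one trial's fixation records.

    fixations: list of (region, duration_ms) tuples, e.g. from an
    eye tracker's area-of-interest output. Returns a dict mapping each
    region to its fraction of total viewing time (fractions sum to 1).
    """
    total = sum(duration for _, duration in fixations)
    per_region = {}
    for region, duration in fixations:
        per_region[region] = per_region.get(region, 0) + duration
    return {region: t / total for region, t in per_region.items()}

# Hypothetical trial: two fixations on the eyes, one each on mouth and nose.
trial = [("eyes", 420), ("mouth", 610), ("eyes", 180), ("nose", 90)]
pvt = proportion_of_viewing_time(trial)
```

With these example durations, the mouth's PVT (610/1300) slightly exceeds the eyes' (600/1300), the kind of pattern the abstract reports for children viewing dog faces.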