191.
Mimicry, the imitation of the nonverbal behaviour of others, serves to establish affiliation and to smooth social interactions. The present research aimed to disentangle rapid facial reactions (RFRs) to affiliative emotions from RFRs to nonaffiliative emotions from a trait perspective. In line with the Mimicry in Social Context Model by Hess and Fischer, we expected that only the former are mimicry responses, indicative of underlying social relating competence and predictive of social satisfaction, whereas the latter only superficially resemble mimicry responses, are driven by social relating incompetence, and have opposite effects on social satisfaction. Further, we assumed that social relating competence would moderate the relationship between individuals' stable tendencies to show (mal)adaptive RFRs and social satisfaction. To test these hypotheses, 108 participants first completed scales measuring social relating competence, then took part in a mimicry laboratory task, and finally evaluated their naturally occurring social interactions for 10 days. Affiliative RFRs to sadness were related to proximal indices of social relating competence and predicted positive social interactions, whereas nonaffiliative RFRs to disgust were related to social relating incompetence and predicted negative social interactions. By contrast, neither affiliative RFRs to happiness nor nonaffiliative RFRs to anger were linked to proximal indices of social relating competence, and both RFRs were (dys)functional for interaction quality only in individuals with lower social relating competence. Copyright © 2015 European Association of Personality Psychology
192.
Facial expressions convey not only emotions but also communicative information. Therefore, facial expressions should be analysed to understand communication. The objective of this study is to develop an automatic facial expression analysis system for extracting nonverbal communicative information. This study focuses on specific communicative information: emotions expressed through facial movements and the direction of the expressions. We propose a multi-tasking deep convolutional network (DCN) to classify facial expressions, detect the facial regions, and estimate face angles. We reformulate facial region detection and face angle estimation as regression problems and add task-specific output layers in the DCN's architecture. Experimental results show that the proposed method performs all tasks accurately. In this study, we show the feasibility of the multi-tasking DCN for extracting nonverbal communicative information from a human face.
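The shared-trunk, task-specific-head design described in this abstract could be sketched roughly as follows. This is a minimal illustrative sketch in PyTorch, not the authors' actual architecture: the trunk depth, layer sizes, seven-class emotion output, and the exact head parameterisations (a 4-value bounding box and 3 rotation angles) are all assumptions.

```python
import torch
import torch.nn as nn

class MultiTaskDCN(nn.Module):
    """Illustrative multi-task network: one shared convolutional trunk,
    plus three task-specific output layers (emotion classification,
    facial-region bounding-box regression, face-angle regression)."""

    def __init__(self, n_emotions: int = 7):
        super().__init__()
        self.trunk = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(4),
            nn.Flatten(),
            nn.Linear(32 * 4 * 4, 128), nn.ReLU(),
        )
        self.emotion_head = nn.Linear(128, n_emotions)  # classification logits
        self.region_head = nn.Linear(128, 4)            # (x, y, w, h) regression
        self.angle_head = nn.Linear(128, 3)             # (yaw, pitch, roll) regression

    def forward(self, x):
        h = self.trunk(x)
        return self.emotion_head(h), self.region_head(h), self.angle_head(h)

model = MultiTaskDCN()
# A batch of two hypothetical 64x64 grayscale face crops.
emotions, region, angles = model(torch.randn(2, 1, 64, 64))
```

Casting region detection and angle estimation as regression heads, as the abstract describes, lets all three tasks be trained jointly with a weighted sum of a classification loss and two regression losses.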
193.
A framework for accounting for emotional phenomena proposed by Sokolov and Boucsein (2000) employs conceptual dimensions that parallel those of hue, brightness, and saturation in color vision. The approach that employs the concepts of emotional quality, intensity, and saturation has been supported by psychophysical emotional scaling data gathered from a few trained observers. We report cortical evoked potential data obtained during the change between different emotions expressed in schematic faces. Twenty-five subjects (13 male, 12 female) were presented with a positive, a negative, and a neutral computer-generated face with random interstimulus intervals in a within-subjects design, together with four meaningful and four meaningless control stimuli made up from the same elements. Frontal, central, parietal, and temporal ERPs were recorded from each hemisphere. Statistically significant outcomes in the P300 and N200 range support the potential fruitfulness of the proposed color-vision-model-based approach to human emotional space.
194.
People tend to mimic the facial expressions of others. It has been suggested that this provides social glue between affiliated people, but it could also aid recognition of emotions through embodied cognition. The degree of facial mimicry, however, varies between individuals and is limited in people with autism spectrum conditions (ASC). The present study investigated the effect of promoting facial mimicry during a facial-emotion-recognition test. In two experiments, participants without an ASC diagnosis had their autism quotient (AQ) measured. Following a baseline test, they completed the emotion-recognition test again, but half of the participants were asked to mimic the target face they saw prior to making their responses. Mimicry improved emotion recognition, and further analysis revealed that the largest improvement was for participants with higher autism-trait scores. In fact, recognition performance was best overall for people who had high AQ scores but also received the instruction to mimic. Implications for people with ASC are explored.
195.
Previous research has highlighted theoretical and empirical links between measures of both personality and trait emotional intelligence (EI) and the ability to decode facial expressions of emotion. Research has also found that the posed, static characteristics of the photographic stimuli used to explore these links affect the decoding process and differentiate the stimuli from the natural expressions they represent. This undermines the ecological validity of established trait-emotion decoding relationships. This study addresses these methodological shortcomings by testing relationships between personality, trait EI, and the reliability of participant ratings of dynamic, spontaneously elicited expressions of emotion. Fifty participants completed personality and self-report EI questionnaires and used a computer-logging program to continuously rate changes in emotional intensity expressed in video clips. Each clip was rated twice to obtain an intra-rater reliability score. The results provide limited support for links between both trait EI and personality variables and how reliably we decode natural expressions of emotion. Limitations and future directions are discussed.
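The intra-rater reliability score from the two rating passes could be computed as in the minimal sketch below. The rating values are invented, and Pearson's r is used as one common choice of reliability index; the abstract does not specify which statistic the authors actually used.

```python
import numpy as np

# Hypothetical continuous intensity ratings: the same clip rated twice
# by the same participant, sampled at six time points.
pass1 = np.array([0.10, 0.30, 0.50, 0.70, 0.60, 0.40])
pass2 = np.array([0.20, 0.35, 0.45, 0.65, 0.60, 0.50])

# Intra-rater reliability as the Pearson correlation between passes:
# values near 1.0 indicate the participant rated the clip consistently.
r = np.corrcoef(pass1, pass2)[0, 1]
print(f"intra-rater reliability r = {r:.3f}")
```

For ratings sampled as time series like these, a per-clip correlation of the two passes yields one reliability score per clip, which can then serve as the dependent measure linked to personality and trait EI.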
196.
The goal of this research was to examine the effects of facial expressions on the speed of sex recognition. Prior research revealed that sex recognition of female angry faces was slower than that of male angry faces, and that female happy faces are recognized faster than male happy faces. We aimed to replicate and extend the previous research by using a different set of facial stimuli and a different methodological approach, and by examining the effects of some previously unexplored expressions (such as crying) on the speed of sex recognition. In the first experiment, we presented facial stimuli of men and women displaying anger, fear, happiness, sadness, and crying, together with three control conditions expressing no emotion. Results showed that sex recognition of angry females was significantly slower than in any other condition, while sad, crying, happy, frightened and neutral expressions did not affect the speed of sex recognition. In the second experiment, we presented angry, neutral and crying expressions in blocks, and again only sex recognition of female angry expressions was slower than that of all other expressions. The results are discussed in the context of the perceptual features of male and female facial configurations, evolutionary theory and social learning.
197.
In this article, we describe a paradigm using text-based vignettes for the study of social and cultural norm violation. Towards this aim, a range of scenarios depicting instances of norm violations was generated and tested with respect to their ability to evoke subjective and physiological responses. In Experiment 1, participants evaluated 29 vignettes on how upsetting, excusable and realistic the described behaviour appeared to be. Based on those ratings, we selected and extended three norm-violation vignettes for Experiment 2, in which participants' physiological responses were obtained in addition to their subjective ratings. In both studies, the vignettes successfully elicited negative responses to norm violations, and these responses were significantly affected by the perceivers' level of ethnocultural empathy. The trait measure of cultural empathy further predicted facial electromyography (EMG) activity at a muscle site associated with disgust (M. levator labii), suggesting a potential moral response to norm-violating scenarios. We discuss the methodological merits and implications of this vignette paradigm for investigating perceived norm transgressions and make recommendations for future work.
198.
The present study examined the relationship between emotional reactivity (self-report and physiological reactivity) to pleasant, unpleasant, and neutral emotion-eliciting stimuli and experiential avoidance (EA). Sixty-two participants were separated into high and low experiential avoiders. Results indicated that high EA participants reported greater emotional experience to both unpleasant and pleasant stimuli compared to low EA participants. In contrast to their heightened reports of emotion, high EA participants displayed attenuated heart rate reactivity to the unpleasant stimuli relative to the low EA participants. These findings are interpreted as reflecting an emotion regulation attempt by high EA participants when confronted with unpleasant emotionally evocative stimuli.
199.
We examined gender adaptation effects for the faces of children and adults and measured the transfer of these effects across age categories. Face gender adaptation is defined by a bias to classify the gender of a gender-neutral face to be opposite to that of an adapting face. An androgynous face, for example, appears male following adaptation to a female face. Participants adapted to male or female faces from the two age categories and classified the gender of morphed adult and child faces from a male–female morph trajectory. Gender adaptation effects were found for children's and adults' faces and for the transfer between the age categories. The size of these effects was comparable when participants adapted to adult faces and identified the gender of either adult or child faces, and when participants adapted to child faces and identified the gender of child faces. A smaller adaptation effect was found when participants adapted to a child's face but identified the gender of an adult's face. The results indicate an interconnected and partially shared representation of the gender information for child and adult faces. The lack of symmetry in adaptation transfer between child and adult faces suggests that adaptation to adult faces is more effective than adaptation to child faces in activating a gender representation that generalizes across age categories.
200.
Recent gaze cueing studies using dynamic cue sequences have reported increased attention orienting by gaze with faces expressing fear, surprise or anger. Here, we investigated whether the type of dynamic cue sequence used impacted the magnitude of this effect. When the emotion was expressed before or concurrently with gaze shift, no modulation of gaze-oriented attention by emotion was seen. In contrast, when the face cue averted gaze before expressing an emotion (as if reacting to the object after first localizing it), the gaze orienting effect was clearly increased for fearful, surprised and angry faces compared to neutral faces. Thus, the type of dynamic sequence used, and in particular the order in which the gaze shift and the facial expression are presented, modulate gaze-oriented attention, with maximal modulation seen when the expression of emotion follows gaze shift.

Copyright © 北京勤云科技发展有限公司 (Beijing Qinyun Technology Development Co., Ltd.)  京ICP备09084417号