161.
This study attempts to replicate and extend the findings from Davis and Stephan’s (2011) article investigating facial electromyographic (EMG) responses to individually directed or group-directed realistic threat. Using news footage from the Columbine school shootings as a third-person threatening stimulus, participants were instructed to view the clips while considering how they felt during the original events (individually primed) or how students felt during the original events (group-primed). EMG analysis of activity levels of the medial frontalis and the corrugator supercilii indicated differential activation based on the instructions: individually primed participants showed more frontalis activity, and group-primed participants showed more corrugator supercilii activity. These findings replicate the Davis and Stephan results and extend them to a third-person-based intergroup threat.
162.
Recent studies measuring the facial expressions of emotion have focused primarily on the perception of frontal face images. As we frequently encounter expressive faces from different viewing angles, having a mechanism which allows invariant expression perception would be advantageous to our social interactions. Although a couple of studies have indicated comparable expression categorization accuracy across viewpoints, it is unknown how perceived expression intensity and associated gaze behaviour change across viewing angles. Differences could arise because diagnostic cues from local facial features for decoding expressions could vary with viewpoints. Here we manipulated the orientation of faces (frontal, mid-profile, and profile view) displaying six common facial expressions of emotion, and measured participants' expression categorization accuracy, perceived expression intensity and associated gaze patterns. In comparison with frontal faces, profile faces slightly reduced identification rates for disgust and sad expressions, but significantly decreased perceived intensity for all tested expressions. Although viewpoint had a quantitative, expression-specific influence on the proportion of fixations directed at local facial features, the qualitative gaze distribution within facial features (e.g., the eyes tended to attract the highest proportion of fixations, followed by the nose and then the mouth region) was independent of viewpoint and expression type. Our results suggest that facial expression processing is largely viewpoint-invariant and categorical, which could be linked to a viewpoint-invariant holistic gaze strategy for extracting expressive facial cues.
163.
Objectives: Two studies examined whether observers’ personality traits contribute to prosocial responses to others’ facial expression of pain. Experiment 1 examined the personality traits that could account for observers’ variability in estimating others’ pain intensity. Experiment 2 questioned to what extent the contribution of personality traits to the inclination to help people in pain depends on observers’ beliefs about the pain’s characteristics. Method: 59 (Experiment 1) and 76 (Experiment 2) participants observed 3-D realistic synthetic face movements mobilizing action units of pain, in order to estimate others’ pain. In Experiment 2, pain locations (e.g., chest, hand) were also manipulated. In each experiment, Big Five personality traits were assessed. Results: Experiment 1 revealed that agreeableness and conscientiousness contributed to observers’ pain estimates across the increase of facial expression intensity. Experiment 2 showed that conscientiousness contributed to observers’ judgments regardless of the pain’s characteristics. Neuroticism was salient only for life-threatening pain. Conclusion: Prosocial response to others’ pain depends on agreeableness, conscientiousness and neuroticism. However, these links are modulated by the elicited pain behavior and observers’ beliefs about the characteristics of the pain.
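The trait-to-judgment relationship described in this abstract is typically quantified by regressing observers' pain estimates on their trait scores. A minimal sketch of that analysis, with all names and data invented for illustration (not the study's data):

```python
# Hypothetical sketch: does a Big Five trait score predict observers'
# pain-intensity estimates? Ordinary least-squares slope, stdlib only.
# All numbers below are illustrative, not the study's data.

def ols_slope(x, y):
    """Slope of the least-squares line y ~ a + b*x."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    var = sum((xi - mx) ** 2 for xi in x)
    return cov / var

# Invented data: conscientiousness scores and mean pain ratings (0-10).
conscientiousness = [2.1, 3.4, 3.9, 4.5, 2.8, 4.1]
pain_estimate = [3.0, 4.2, 4.8, 5.5, 3.6, 5.1]

b = ols_slope(conscientiousness, pain_estimate)
print(f"estimated slope: {b:.2f}")  # a positive slope: higher trait, higher estimate
```

A positive slope would correspond to the reported contribution of conscientiousness to pain estimates; the actual study would use a multiple regression over all five traits.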
164.
When we observe someone shift their gaze to a peripheral event or object, a corresponding shift in our own attention often follows. This social orienting response, joint attention, has been studied in the laboratory using the gaze cueing paradigm. Here, we investigate the combined influence of the emotional content displayed in two critical components of a joint attention episode: The facial expression of the cue face, and the affective nature of the to-be-localized target object. Hence, we presented participants with happy and disgusted faces as cueing stimuli, and neutral (Experiment 1), pleasant and unpleasant (Experiment 2) pictures as target stimuli. The findings demonstrate an effect of ‘emotional context’ confined to participants viewing pleasant pictures. Specifically, gaze cueing was boosted when the emotion of the gazing face (i.e., happy) matched that of the targets (pleasant). Demonstrating modulation by emotional context highlights the vital flexibility that a successful joint attention system requires in order to assist our navigation of the social world.
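The gaze-cueing effect reported above is conventionally scored as the mean reaction-time difference between invalid (target opposite the gaze) and valid (gazed-at target) trials, computed per condition. A minimal sketch with invented trial data (the condition labels and RTs are illustrative only):

```python
# Sketch of scoring a gaze-cueing effect per emotional-context condition:
# cueing effect = mean RT (invalid trials) - mean RT (valid trials).
# Trial data are invented for illustration.
from statistics import mean

# Each trial: (cue_emotion, target_valence, validity, reaction_time_ms)
trials = [
    ("happy", "pleasant", "valid", 412), ("happy", "pleasant", "invalid", 455),
    ("happy", "pleasant", "valid", 398), ("happy", "pleasant", "invalid", 447),
    ("disgust", "pleasant", "valid", 430), ("disgust", "pleasant", "invalid", 441),
    ("disgust", "pleasant", "valid", 425), ("disgust", "pleasant", "invalid", 438),
]

def cueing_effect(trials, cue_emotion, target_valence):
    def rt(validity):
        return mean(t[3] for t in trials
                    if t[0] == cue_emotion and t[1] == target_valence
                    and t[2] == validity)
    return rt("invalid") - rt("valid")

# A larger effect for the matched pairing (happy cue, pleasant target)
# would mirror the boosted cueing the abstract reports.
print(cueing_effect(trials, "happy", "pleasant"))    # 46.0 ms
print(cueing_effect(trials, "disgust", "pleasant"))  # 12.0 ms
```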
165.
We used visual search to explore whether the preattentive mechanisms that enable rapid detection of facial expressions are driven by visual information from the displacement of features in expressions, or by other factors such as affect. We measured search slopes for luminance- and contrast-equated images of facial expressions and anti-expressions of six emotions (anger, fear, disgust, surprise, happiness, and sadness). Anti-expressions have an equivalent magnitude of facial feature displacements to their corresponding expressions, but different affective content. There was a strong correlation between these search slopes and the magnitude of feature displacements in expressions and anti-expressions, indicating feature displacement had an effect on search performance. There were significant differences between search slopes for expressions and anti-expressions of happiness, sadness, anger, and surprise, which could not be explained in terms of feature differences, suggesting preattentive mechanisms were sensitive to other factors. A categorization task confirmed that the affective content of expressions and anti-expressions of each of these emotions differed, suggesting signals of affect might well have been influencing attention and search performance. Our results support a picture in which preattentive mechanisms may be driven by factors at a number of levels, including affect and the magnitude of feature displacement. We note that indirect effects of feature displacement, such as changes in local contrast, may well affect preattentive processing. These are most likely to be nonlinearly related to feature displacement and are, we argue, an important consideration for any study using images of expression to explore how affect guides attention.
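A search slope of the kind measured above is simply the least-squares slope of reaction time against display set size, in ms per item. A minimal sketch with invented data (the set sizes and RTs are illustrative, not the study's):

```python
# Sketch of estimating a visual-search slope: regress reaction time on
# set size; the slope (ms per item) indexes search efficiency.
# Data below are invented for illustration.

def search_slope(set_sizes, rts):
    """Least-squares slope of RT (ms) against display set size."""
    n = len(set_sizes)
    mx, my = sum(set_sizes) / n, sum(rts) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(set_sizes, rts))
    var = sum((x - mx) ** 2 for x in set_sizes)
    return cov / var

# A near-flat slope suggests efficient ("pop-out") search;
# a steep slope suggests serial, item-by-item search.
print(search_slope([4, 8, 12], [520, 540, 560]))   # 5.0 ms/item
print(search_slope([4, 8, 12], [520, 680, 840]))   # 40.0 ms/item
```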
166.
Facial expression and gaze perception are thought to share brain mechanisms but behavioural interactions, especially from gaze-cueing paradigms, are inconsistent. We conducted a series of gaze-cueing studies using dynamic facial cues to examine orienting across different emotional expression and task conditions, including face inversion. Across experiments, at a short stimulus–onset asynchrony (SOA) we observed both an expression effect (i.e., faster responses when the face was emotional versus neutral) and a cue validity effect (i.e., faster responses when the target was gazed-at), but no interaction between validity and emotion. Results from face inversion suggest that the emotion effect may have been due to both facial expression and stimulus motion. At longer SOAs, validity and emotion interacted such that cueing by emotional faces, fearful faces in particular, was enhanced relative to neutral faces. These results converge with a growing body of evidence that suggests that gaze and expression are initially processed independently and interact at later stages to direct attentional orienting.
167.
According to the influential model of Bruce and Young (1986), socially relevant facial information is processed separately from facial information leading to individual face recognition. In recent years functional imaging has identified a network of distinct occipitotemporal cortex areas for the processing of these two kinds of information. Functionally, it is not clear at which processing level the “social” and the “recognition” pathways diverge. The study of subjects with a profound face recognition and learning deficit (congenital prosopagnosia, cPA) promises a better understanding of this issue. We therefore tested the perception of attractiveness (a cue of prime social importance) and distinctiveness (a facial feature related to recognition) in 14 people with cPA. Although attractiveness ratings were highly consistent with controls, cPA subjects' distinctiveness ratings showed random patterns. This dissociation of normal attractiveness processing and impaired distinctiveness processing in cPA helps to specify the nature of the impairment in this condition while shedding light on the functional architecture of normal face processing.
168.
Facial expressions of familiar faces have been found to influence identification. In this study, we hypothesize that faces with emotional expression are encoded for both structural and variant information, resulting in more robust identification. Eighty-eight participants were presented with faces at repetition priming frequencies of 2, 5, 10, and 20 (learning stage) and then judged the faces in terms of familiarity (testing stage). Participants were randomized into one of the following conditions: the facial expression between learning and testing stage remained the same (F-F), faces with facial expression shown in the learning stage were neutralized in the testing stage (F-N), or faces with neutral expressions shown in the learning stage had emotional expressions in the testing stage (N-F). Results confirmed our hypothesis; implications are discussed.
169.
The present study examined the acquisition of social referencing skills in infants of mothers with symptoms of depression (n = 44). We aimed to determine if a short discrimination training could facilitate infants’ social referencing. Mothers were instructed to pose either joyful or fearful facial expressions to cue infants’ approach/avoidance responses toward an ambiguous object. Maternal expressions were correlated with pleasant or unpleasant events occurring after the infant's response. The results showed that after the intervention, infants looked at their mothers more frequently and reached for or avoided the ambiguous object based on the preceding maternal expression. The results suggest that discrimination training procedures can establish social referencing in infants of mothers with symptoms of depression.
170.
Threat estimation is crucial for the adaptation of behavior to a dangerous situation. In anxiety, a bias toward threat has been described as a core feature. Therefore, the sensitivity to threatening information in anxious individuals may have consequences for danger estimation. In this study, we used the affective priming paradigm to test the assumption that fearful expressions would facilitate danger detection in natural scenes in anxious individuals. Twenty-three high trait anxious individuals and 22 low trait anxious individuals participated in the study. They had to detect the potential threat of a target scene (neutral or threatening) following neutral or fearful face primes. High trait anxious participants detected threat more rapidly than low trait anxious participants, consistent with previous reports of emotional hypervigilance in anxiety. Furthermore, this effect was enhanced when the target scene followed a fearful expression: only in anxious participants were reaction times shorter to detect danger following a fearful prime than a neutral prime. Our results suggest that in anxiety, hypervigilance to threat may serve an adaptive function by enhancing the detection of subsequent potential danger. The implications for attentional processes and attentional control are discussed.