71.
Speech and song are universal forms of vocalization that may share aspects of emotional expression. Research has focused on parallels in acoustic features, overlooking facial cues to emotion. In three experiments, we compared moving facial expressions in speech and song. In Experiment 1, vocalists spoke and sang statements, each with five emotions. Vocalists exhibited emotion-dependent movements of the eyebrows and lip corners that transcended speech–song differences. Vocalists' jaw movements were coupled to their acoustic intensity, exhibiting differences across emotions and between speech and song. Vocalists' emotional movements extended beyond vocal sound to include large sustained expressions, suggesting a communicative function. In Experiment 2, viewers judged silent videos of vocalists' facial expressions prior to, during, and following vocalization. Emotional intentions were identified accurately for movements during and after vocalization, suggesting that these movements support the acoustic message. Experiment 3 compared emotion identification in voice-only, face-only, and face-and-voice recordings. Emotions in voice-only singing were identified poorly but accurately in all other conditions, confirming that facial expressions conveyed emotion more accurately than the voice in song, whereas the two were equivalent in speech. Collectively, these findings highlight broad commonalities in the facial cues to emotion in speech and song, as well as differences in perception and acoustic-motor production.
72.
The Dangerous Decisions Theory (DDT; Porter & ten Brinke, 2009) posits that instantaneous perceptions of trustworthiness based on a stranger's face influence the manner in which ensuing information about the target is processed. This study tested a bi-directional DDT model, proposing that information concerning a target's moral behavior could distort eyewitness memory for the individual's facial trustworthiness. Participants (N = 141) viewed a target individual's face (previously rated as appearing "neutral" on trustworthiness) and then were exposed to one of three vignettes describing the target's behavior (immoral, morally neutral, or altruistic). Following a delay, observers were asked to identify the target individual on a facial morph video (continuously ranging in levels of perceived trustworthiness). Results indicated that behavioral information varying in morality influenced facial recognition memory; specifically, faces were recalled as having less trustworthy features following a disclosure of immoral/criminal behavior.
73.
This study examined whether subcortical stroke was associated with impaired facial emotion recognition. The lateralization of the impairment and the differential profiles of facial emotion recognition deficits following localized thalamic or basal ganglia damage were also studied. Thirty-eight patients with subcortical strokes and 19 matched normal controls volunteered to participate. The participants were individually presented with morphed photographs of facial emotion expressions over multiple trials and asked to classify each photograph according to Ekman's six basic emotion categories. The findings indicated that the clinical participants had impaired facial emotion recognition, though no clear lateralization pattern of impairment was observed. The patients with localized thalamic damage performed significantly worse than controls in recognizing sadness. Longitudinal studies of patients with subcortical brain damage should be conducted to examine how cognitive reorganization post-stroke affects emotion recognition.
74.
Several convergent lines of evidence have suggested that the presence of an emotion signal in a visual stimulus can influence processing of that stimulus. In the current study, we picked up on this idea and explored the hypothesis that the presence of an emotional facial expression (happiness) would facilitate the identification of familiar faces. We studied two groups of normal participants (overall N=54), and neurological patients with either left (n=8) or right (n=10) temporal lobectomies. Reaction times were measured while participants named familiar famous faces that had happy expressions or neutral expressions. In support of the hypothesis, naming was significantly faster for the happy faces, and this effect obtained in the normal participants and in both patient groups. In the patients with left temporal lobectomies, the effect size for this facilitation was large (d=0.87), suggesting that this manipulation might have practical implications for helping such patients compensate for the types of naming defects that often accompany their brain damage. Consistent with other recent work, our findings indicate that emotion can facilitate visual identification, perhaps via a modulatory influence of the amygdala on extrastriate cortex.
75.
The hypotheses of this investigation were based on attachment theory and Bowlby's conception of "internal working models", supposed to consist of one mainly emotional structure (model-of-self) and one more conscious cognitive structure (model-of-others), which are assumed to operate at different temporal stages of information processing. Facial muscle reactions in individuals with positive versus negative internal working models were compared at different stages of information processing. The Relationship Scale Questionnaire (RSQ) was used to categorize subjects into positive or negative model-of-self and model-of-others, and the State-Trait Anxiety Inventory (STAI-T) was used to measure trait anxiety. Pictures of happy and angry faces followed by backward-masking stimuli were shown to 61 subjects at three exposure times (17 ms, 56 ms, 2,350 ms) in order to elicit reactions first at an automatic level and then at successively more cognitively elaborated levels. Facial muscle reactions were recorded by electromyography (EMG), with higher corrugator activity representing more negative emotions and higher zygomaticus activity more positive emotions. In line with the hypothesis, subjects with a negative model-of-self scored significantly higher on the STAI-T than subjects with a positive model-of-self. They also showed overall stronger corrugator than zygomatic activity, giving further evidence of a negative tonic affective state. At the longest exposure time (2,350 ms), representing emotionally regulated responses, negative model-of-self subjects showed a significantly stronger corrugator response and reported more negative feelings than subjects with a positive model-of-self. These results supported the hypothesis that subjects with a negative model-of-self would show difficulties in self-regulation of negative affect.
In line with expectations, model-of-others, assumed to represent mainly knowledge structures, did not interact with the physiological emotional measures employed: facial muscle reactions or tonic affective state.
76.
Since blushing is difficult to detect in people with dark skin, their experience of blushing may differ fundamentally from that of people with fair skin. To investigate this issue, cheek temperature and forehead blood flow were measured in 16 Caucasians and 16 Indians during mental arithmetic and singing. Caucasians (particularly females) thought that they blushed more intensely than Indians, and also reported greater self-consciousness when singing. Vascular responses did not differ between groups. However, skin tone moderated the association between vascular responses and ratings of self-consciousness, blushing intensity, blushing propensity and fear of negative evaluation. These findings support the notion that the visibility of blushing influences the nature of emotions experienced in embarrassing social encounters.
77.
Theorists have long postulated that facial properties such as emotion and sex are potent social stimuli that influence how individuals act. Yet extant findings derive mainly from investigations of the prompt motor response to affective stimuli, mostly delivered as pictures, videos, or text. A theoretical question remains unaddressed: how does the perception of emotion and sex modulate the dynamics of a continuous coordinated behaviour? Conceived within the dynamical approach to interpersonal motor coordination, the present study addressed this question by adopting the coupled-oscillators paradigm. Twenty-one participants performed in-phase and anti-phase coordination with two avatars (male and female) displaying three emotional expressions (neutral, happy, and angry) at different frequencies (100% and 150% of the participant's preferred frequency) by executing horizontal rhythmic left-right oscillatory movements. Time to initiate movement (TIM), mean relative phase error (MnRP), and standard deviation of relative phase (SDRP) were calculated as indices of reaction time, deviation from the intended pattern of coordination, and coordination stability, respectively. Results showed that in the anti-phase condition at 150% frequency, MnRP was lower with the angry and the female avatars. In addition, coordination was more stable with the male avatar than with the female one when both displayed a neutral expression, but the happy female avatar elicited more stable coordination than the neutral female avatar. These results imply that individuals are more relaxed when coordinating with a female than with a male, and that the sensorimotor system becomes more flexible when coordinating with an angry person. They also suggest that social roles influence how people coordinate and that individuals are more attentive when interacting with a happy female.
In sum, the present study provides evidence that social perception is embodied in interactive behaviour during social interaction.
78.
Several systems for measuring pain behaviour have been developed for clinical settings. The present study reports on a real-time system for coding five categories of pain behaviour in low-back pain patients: guarding, touching, sounds, words, and facial expression. Unique features of the system are its refined measures of facial expression and the integration of the measurements with a standardized physical examination. A total of 176 sub-acute and chronic low-back pain patients underwent a physical examination while their pain behaviour was coded. Concurrent measures of subjective pain, medically incongruent signs, and independent global ratings of pain behaviour were taken. Analyses indicated that the pain behaviours, particularly guarding and facial expression, varied systematically with the alternative measures, supporting the concurrent validity of the behaviour observation system. While pain behaviours, especially use of words and facial expressions, were significantly associated with the examiners' independent ratings, the strength of the associations suggested that, in the absence of direct training, examiners' performance was relatively poor. Implications for training clinicians to detect pain behaviour are discussed.
79.
The experiment tested whether patients with social phobia direct their attention to or away from faces with a range of emotional expressions. A modified dot probe paradigm (J. Abnorm. Psychol. 95 (1986) 15) measured whether participants attended more to faces or to household objects. Twenty patients with social phobia were faster in identifying the probe when it occurred in the location of the household objects, regardless of whether the facial expressions were positive, neutral, or negative. In contrast, controls did not exhibit an attentional preference. The results are in line with recent theories of social phobia that emphasize the role of reduced processing of external social cues in maintaining social anxiety.
80.
The aim was to explore whether people high, as opposed to low, in speech anxiety react with a more pronounced differential facial response when exposed to angry and happy facial stimuli. High and low fear participants were selected based on their scores on a fear of public speaking questionnaire. All participants were exposed to pictures of angry and happy faces while facial electromyographic (EMG) activity from the Corrugator supercilii and Zygomaticus major muscle regions was recorded. Skin conductance responses (SCR), heart rate (HR) and ratings were also collected. Participants high in speech anxiety displayed larger differential corrugator responding between angry and happy faces, indicating a larger negative emotional reaction. They also reacted with larger differential zygomatic responding between happy and angry faces, indicating a larger positive emotional reaction. Consistent with these facial reaction patterns, the high fear group rated angry faces as more unpleasant and as expressing more disgust, and rated happy faces as more pleasant. There were no differences in SCR or HR responding between the high and low speech anxiety groups. The present results support the hypothesis that people high in speech anxiety are disposed to show exaggerated sensitivity and facial responsiveness to social stimuli.