262.
Little is known about the test–retest reliability of emotional cognitive tasks, or about the impact of using different tasks that employ similar emotional stimuli within a single battery. We investigated this in healthy subjects. We found improved overall performance on an emotional attentional blink task (EABT) with repeat testing at one hour and one week compared to baseline, but the impact of an emotional stimulus on performance was unchanged. Similarly, performance on a facial expression recognition task (FERT) was better one week after a baseline test, though the relative effect of specific emotions was unaltered. There was no effect of repeat testing on an emotional word categorisation, recall and recognition task. We found no difference in performance on the FERT and EABT irrespective of task order. We concluded that it is possible to use emotional cognitive tasks in longitudinal studies and to combine tasks using emotional facial stimuli in a single battery.
263.
Facial EMG activity was measured from the Corrugator supercilii and Zygomatic major muscle regions while 48 subjects were exposed to pictures of angry and happy facial expressions, snakes and flowers, as well as low- and high-preference nature scenes. The valency perspective predicted that facial reactions should be related to the intensity of the positive and negative valency of the stimuli. The mimicking-behavior approach predicted that facial reactions should appear only as a mimicking response to the facial stimuli, whereas the evolutionary-biological perspective predicted that the most clear-cut positive and negative facial reactions should be evoked by facial stimuli and by snakes. In support of the latter perspective, the present results showed that angry faces and snakes evoked the most distinct Corrugator supercilii response, whereas happy faces evoked the largest Zygomatic major response.
264.
This review focuses on facial asymmetries during emotional expression. Facial asymmetry is defined as the expression intensity or muscular involvement on one side of the face ("hemiface") relative to the other side, and has been used as a behavioral index of hemispheric specialization for facial emotional expression. This paper presents a history of the neuropsychological study of facial asymmetry, originating with Darwin. Both quantitative and qualitative aspects of asymmetry are addressed. Next, the neuroanatomical bases for facial expression are elucidated, separately for posed/voluntary and spontaneous/involuntary elicitation conditions. This is followed by a comprehensive review of 49 experiments on facial asymmetry in the adult literature, organized around emotional valence (pleasantness/unpleasantness), elicitation condition, facial part, social display rules, and demographic factors. Results of this review indicate that the left hemiface is more involved than the right hemiface in the expression of facial emotion. From a neuropsychological perspective, these findings implicate the right cerebral hemisphere as dominant for the facial expression of emotion. In spite of the compelling evidence for right-hemispheric specialization, some data point to the possibility of differential hemispheric involvement as a function of emotional valence. An earlier version of this paper by the first author was presented at the XV Annual Symposium of the Society of Craniofacial Genetics, July 12, 1992, Stanford University, Palo Alto, CA.
265.
Using a judgment and component analysis of facial actions, 14 muscle-contraction headache (MCH) patients were videotaped in headache and nonheadache states. Patients were also required to undergo a resting physiological assessment (frontalis electromyography, temporal blood volume pulse, and heart rate) and a reaction-time task, and to complete self-report measures of pain state and mood. The headache and nonheadache states of MCH patients were reliably identified by 20 observers. Characteristics of facial expression that occurred most frequently in the headache state included furrowed eyebrows, closed eyes, slow eye blinks, lip pursing, facial grimacing, and flat facial affect. The headache state was also associated with increased latency to respond to an auditory tone and with mood disturbances, but no differences in baseline physiological activity were observed. Our findings provide support for the utility and clinical relevance of judgment and component analysis of facial actions in MCH patients.
266.
This study investigated whether subjects high and low in public-speaking fear react with different facial electromyographic (EMG) activity when exposed to negative and positive social stimuli. High-fear and Low-fear groups were selected with the help of a questionnaire and were exposed to slides of angry and happy faces while facial EMG from the corrugator and zygomatic muscle regions was measured. The subjects also rated the stimuli on different emotional dimensions. Consistent with earlier research, Low-fear subjects reacted with increased corrugator activity to angry faces and increased zygomatic activity to happy faces. The High-fear group, on the other hand, did not distinguish between angry and happy faces. Rating data indicated that the High-fear group perceived angry faces as emotionally more negative. The present results are consistent with earlier studies, indicating that the facial EMG technique is sensitive enough to detect differential responding in clinically interesting groups, such as people suffering from social fears.
267.
Multiple facial cues, such as facial expression and face gender, simultaneously influence facial trustworthiness judgement in adults. The current work examined the effect of multiple facial cues on trustworthiness judgement across age groups. In Experiment 1, 8- and 10-year-olds and adults judged trustworthiness from happy and neutral adult faces (female and male). Experiment 2 included both adult and child faces wearing happy, angry, and neutral expressions; 9-, 11-, and 13-year-olds and adults rated facial trustworthiness on a 7-point Likert scale. The results of Experiments 1 and 2 revealed that facial expression and face gender affected facial trustworthiness judgement independently in children aged 10 and below, but simultaneously in children aged 11 and above, adolescents, and adults. There was no own-age bias in children or adults. The results showed that children younger than 10 do not process multiple facial cues in the same manner as older children and adults when judging trustworthiness. The current findings provide evidence for the stable-feature account, but not for the own-age bias account or the expertise account.
268.
Social-rank cues communicate social status or social power within and between groups. Information about social rank is fluently processed in both the visual and auditory modalities. So far, investigation of the processing of social-rank cues has been limited to studies in which information from a single modality was assessed or manipulated. Yet in everyday communication, multiple information channels are used to express and understand social rank. We sought to examine the (in)voluntary nature of the processing of facial and vocal signals of social rank using a cross-modal Stroop task. In two experiments, participants were presented with face-voice pairs that were either congruent or incongruent in social rank (i.e. social dominance). Participants' task was to label the social dominance of the face while ignoring the voice, or to label the social dominance of the voice while ignoring the face. In both experiments, face-voice incongruent stimuli were processed more slowly and less accurately than congruent stimuli in both the face-attend and voice-attend tasks, exhibiting classical Stroop-like effects. These findings are consistent with the functioning of a social-rank bio-behavioural system that consistently and automatically monitors one's social standing in relation to others and uses that information to guide behaviour.
269.
Psychological factors are known to play an important part in the origin of many medical conditions, including hypertension. Recent studies have reported that elevated blood pressure (even within the normal range of variation) is associated with reduced responsiveness to emotions, or 'emotional dampening'. Our aim was to assess emotional dampening in individuals with more extreme blood pressure levels, including prehypertensives (N = 58) and hypertensives (N = 60), by comparing their emotion-recognition ability with that of normotensives (N = 57). Participants completed novel facial emotion matching and facial emotion labelling tasks following blood pressure measurement, and their accuracy of emotion recognition and average response times were compared. The normotensives demonstrated significantly higher accuracy of emotion recognition than the prehypertensives and hypertensives in labelling facial emotions. This difference generalised to the task in which two facial halves (upper and lower) had to be matched on the basis of emotion. The groups did not differ in speed of emotion processing in either the labelling or the matching condition. The findings of the present study extend reports of 'emotional dampening' to hypertensives as well as to those at risk of developing hypertension (i.e. prehypertensives), and have important implications for understanding the psychological component of medical conditions such as hypertension.
270.
Abstract: The multichannel effect in facial emotion recognition refers to the combined influence that the various channels involved in stimulus processing exert on the perception of facial expressions. Using behavioural experiments, event-related potentials, and brain-imaging techniques, researchers have examined the multiple channels through which such information is acquired: body expressions, emotional voices, and specific odours can systematically influence the emotion recognized in a facial expression. A series of studies has explored the time course of the multichannel effect, its underlying mechanisms, and the brain regions it activates. Future research could integrate brain-network techniques, together with new techniques from other disciplines, to examine in finer detail the role played by the physical properties of the information carried by these channels. Keywords: facial expression; multichannel effect; facial emotion recognition; body expression; emotional voice; olfactory signal