181.
The present study investigated whether the processing characteristics of categorizing emotional facial expressions are different from those of categorizing facial age and sex information. Given that emotions change rapidly, it was hypothesized that processing facial expressions involves a more flexible task set that causes less between-task interference than the task sets involved in processing age or sex of a face. Participants switched between three tasks: categorizing a face as looking happy or angry (emotion task), young or old (age task), and male or female (sex task). Interference between tasks was measured by global interference and response interference. Both measures revealed patterns of asymmetric interference. Global between-task interference was reduced when a task was mixed with the emotion task. Response interference, as measured by congruency effects, was larger for the emotion task than for the nonemotional tasks. The results support the idea that processing emotional facial expression constitutes a more flexible task set that causes less interference (i.e., task-set "inertia") than processing the age or sex of a face.
182.
Speech and song are universal forms of vocalization that may share aspects of emotional expression. Research has focused on parallels in acoustic features, overlooking facial cues to emotion. In three experiments, we compared moving facial expressions in speech and song. In Experiment 1, vocalists spoke and sang statements, each with five emotions. Vocalists exhibited emotion-dependent movements of the eyebrows and lip corners that transcended speech–song differences. Vocalists' jaw movements were coupled to their acoustic intensity, exhibiting differences across emotion and speech–song. Vocalists' emotional movements extended beyond vocal sound to include large sustained expressions, suggesting a communicative function. In Experiment 2, viewers judged silent videos of vocalists' facial expressions prior to, during, and following vocalization. Emotional intentions were identified accurately for movements during and after vocalization, suggesting that these movements support the acoustic message. Experiment 3 compared emotional identification in voice-only, face-only, and face-and-voice recordings. Emotions were identified poorly in voice-only singing yet accurately in all other conditions, confirming that facial expressions conveyed emotion more accurately than the voice in song but were equivalent to it in speech. Collectively, these findings highlight broad commonalities in the facial cues to emotion in speech and song, alongside differences in perception and acoustic-motor production.
183.
An important determinant of picture and word naming speed is the age at which the names were learned (age of acquisition). Two related interpretations of these effects are that they reflect differences between words in their cumulative frequency of use, or that they reflect differences in the amount of time early- and late-acquired words have spent in lexical memory. Both theories predict that differences between early- and late-acquired words will be less apparent in older than in younger adults. Two experiments are reported in which younger and older adults read words varying in age of acquisition or frequency, or named objects varying in age of acquisition. An effect of word frequency was observed only for young adults' word naming. In contrast, strong age of acquisition effects were found for both the young and the old participants. The implications of these results for theories of how age of acquisition might affect lexical processing are discussed.
184.
This study examined whether subcortical stroke was associated with impaired facial emotion recognition. Furthermore, the lateralization of the impairment and the differential profiles of facial emotion recognition deficits with localized thalamic or basal ganglia damage were studied. Thirty-eight patients with subcortical strokes and 19 matched normal controls volunteered to participate. The participants were individually presented with morphed photographs of facial emotion expressions over multiple trials. They were requested to classify each of these morphed photographs according to Ekman's six basic emotion categories. The findings indicated that the clinical participants had impaired facial emotion recognition, though no clear lateralization pattern of impairment was observed. The patients with localized thalamic damage performed significantly worse in recognizing sadness than the controls. Longitudinal studies on patients with subcortical brain damage should be conducted to examine how cognitive reorganization post-stroke affects emotion recognition.
185.
Previous research has produced conflicting findings on whether patients with subclinical or manifest obsessive-compulsive disorder (OCD) show an attentional bias for anxiety-related material. In the present study, 35 OCD patients were compared with 20 healthy controls on their performance in an emotional Stroop paradigm. Nine different stimulus conditions were compiled, including sets of depression-related and anxiety-related words as well as stimuli from two constructs with potential relevance for the pathogenesis and maintenance of OCD symptomatology: responsibility and conscientiousness. Patients did not show enhanced interference for any of the conditions. Syndrome subtype and severity, avoidance, and speed of information processing did not moderate the results. The present study concurs with most prior research that OCD patients display no interference effect for general threat words. It deserves further consideration that emotional interference effects in OCD, as seen in other anxiety disorders, may occur when idiosyncratic word material with a direct relation to the individual's primary concerns is used.
186.
Several convergent lines of evidence have suggested that the presence of an emotion signal in a visual stimulus can influence processing of that stimulus. In the current study, we picked up on this idea and explored the hypothesis that the presence of an emotional facial expression (happiness) would facilitate the identification of familiar faces. We studied two groups of normal participants (overall N=54), and neurological patients with either left (n=8) or right (n=10) temporal lobectomies. Reaction times were measured while participants named familiar famous faces that had happy expressions or neutral expressions. In support of the hypothesis, naming was significantly faster for the happy faces, and this effect obtained in the normal participants and in both patient groups. In the patients with left temporal lobectomies, the effect size for this facilitation was large (d=0.87), suggesting that this manipulation might have practical implications for helping such patients compensate for the types of naming defects that often accompany their brain damage. Consistent with other recent work, our findings indicate that emotion can facilitate visual identification, perhaps via a modulatory influence of the amygdala on extrastriate cortex.
187.
The present experiment was designed to better understand the impact of positive and negative emotional processing among low- and high-hostile individuals. Based on previous research which found increased sympathovagal balance among low-hostiles to the negative version of the Affective Auditory Verbal Learning Test (AAVL), it was hypothesized that low-hostiles would experience increased cortical arousal to this stimulus whereas their high-hostile counterparts would not. As expected, low-hostiles experienced significantly reduced low-alpha power (7.5–9.5 Hz) relative to high-hostiles during the presentation of the negative AAVL. In a replication of prior research, significant primacy and recency effects were noted for the negative and positive word lists, respectively. Results are discussed in terms of cerebral activation theory and the potential impact of emotional processing among high-hostile individuals and their likelihood to develop coronary heart disease.
188.
189.
Processing Faces and Facial Expressions
This paper reviews processing of facial identity and expressions. The issue of the independence of these two systems has been addressed from different approaches over the past 25 years. More recently, neuroimaging techniques have provided researchers with new tools to investigate how facial information is processed in the brain. First, findings from traditional approaches to identity and expression processing are summarized. The review then covers findings from neuroimaging studies on face perception, recognition, and encoding. Processing of the basic facial expressions is detailed in light of behavioral and neuroimaging data. Whereas data from experimental and neuropsychological studies support the existence of two systems, the neuroimaging literature yields a less clear picture because it shows considerable overlap in activation patterns in response to the different face-processing tasks. Further, activation patterns in response to facial expressions support the notion that distinct neural substrates are involved in processing different facial expressions.
190.
The hypotheses of this investigation were based on attachment theory and Bowlby's conception of "internal working models", supposed to consist of one mainly emotional structure (model-of-self) and one more conscious cognitive structure (model-of-others), which are assumed to operate at different temporal stages of information processing. Facial muscle reactions in individuals with positive versus negative internal working models were compared at different stages of information processing. The Relationship Scale Questionnaire (RSQ) was used to categorize subjects into positive or negative model-of-self and model-of-others, and the State-Trait Anxiety Inventory was used to measure trait anxiety (STAI-T). Pictures of happy and angry faces followed by backward masking stimuli were presented to 61 subjects at three different exposure times (17 ms, 56 ms, 2,350 ms) in order to elicit reactions first at an automatic level and then consecutively at more cognitively elaborated levels. Facial muscle reactions were recorded by electromyography (EMG), with higher corrugator activity representing more negative emotions and higher zygomaticus activity more positive emotions. In line with the hypothesis, subjects with a negative model-of-self scored significantly higher on the STAI-T than subjects with a positive model-of-self. They also showed an overall stronger corrugator than zygomatic activity, giving further evidence of a negative tonic affective state. At the longest exposure time (2,350 ms), representing emotionally regulated responses, negative model-of-self subjects showed a significantly stronger corrugator response and reported more negative feelings than subjects with a positive model-of-self. These results supported the hypothesis that subjects with a negative model-of-self would show difficulties in self-regulation of negative affect.
In line with expectations, model-of-others, assumed to represent mainly knowledge structures, did not interact with the physiological emotional measures employed, facial muscle reactions, or tonic affective state.
Copyright © Beijing Qinyun Technology Development Co., Ltd. (北京勤云科技发展有限公司)  京ICP备09084417号