Paid full text: 270 papers
Free: 8 papers
Free (domestic): 4 papers
2024: 1 paper
2023: 1 paper
2022: 2 papers
2021: 4 papers
2020: 12 papers
2019: 11 papers
2018: 13 papers
2017: 12 papers
2016: 10 papers
2015: 6 papers
2014: 7 papers
2013: 93 papers
2012: 7 papers
2011: 9 papers
2010: 11 papers
2009: 24 papers
2008: 7 papers
2007: 15 papers
2006: 5 papers
2005: 7 papers
2004: 6 papers
2003: 4 papers
2002: 3 papers
2001: 3 papers
2000: 2 papers
1999: 1 paper
1998: 1 paper
1997: 1 paper
1995: 1 paper
1993: 1 paper
1990: 2 papers
A total of 282 results found (search time: 31 ms).
31.
Faces provide identity- and emotion-related information—basic cues for mastering social interactions. Traditional models of face recognition suggest that, after an initial common stage, the processing streams for facial identity and expression diverge. In the present study we extended our previous multivariate investigations of face identity processing abilities to the speed of recognising facially expressed emotions. Analyses are based on a sample of N = 151 young adults. First, we established a measurement model with a higher-order factor for the speed of recognising facially expressed emotions (SRE). This model has acceptable fit without specifying emotion-specific relations between indicators. Next, we assessed whether SRE can be reliably distinguished from the speed of recognising facial identity (SRI) and found the latent factors for SRE and SRI to be perfectly correlated. In contrast, SRE and SRI were both only moderately related to a latent factor for the speed of recognising non-face stimuli (SRNF). We conclude that the processing of facial stimuli—and not the processing of facially expressed basic emotions—is the critical component of SRE. These findings are at variance with proposals of separate routes for processing facial identity and emotional facial expressions, and suggest far more commonality between these streams, at least as far as processing speed is concerned.
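A perfect latent correlation alongside lower observed correlations is exactly what the classical correction for attenuation predicts. Below is a minimal numeric sketch (not the authors' actual SEM; the noise level and simulated scores are hypothetical) showing how Spearman's disattenuation formula recovers a latent correlation of 1.0 from noisy speed scores:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 151                      # sample size as in the study

# One common latent speed factor drives both task families
latent = rng.normal(size=n)

# Observed composite scores = latent factor + task-specific measurement noise
sre_obs = latent + rng.normal(scale=0.6, size=n)   # emotion-recognition speed
sri_obs = latent + rng.normal(scale=0.6, size=n)   # identity-recognition speed

r_obs = np.corrcoef(sre_obs, sri_obs)[0, 1]

# Reliability of each observed score (true variance / total variance)
rel = 1.0 / (1.0 + 0.6**2)

# Spearman's correction for attenuation: r_latent = r_obs / sqrt(rel_x * rel_y)
r_latent = r_obs / np.sqrt(rel * rel)
print(f"observed r = {r_obs:.2f}, disattenuated r = {r_latent:.2f} (true value: 1.0)")
```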
32.
We report data from an experiment that investigated the influence of gaze direction and facial expression on face memory. Participants were shown a set of unfamiliar faces with either happy or angry facial expressions, which were either gazing straight ahead or had their gaze averted to one side. Memory for faces that were initially shown with angry expressions was found to be poorer when these faces had averted as opposed to direct gaze, whereas memory for individuals shown with happy faces was unaffected by gaze direction. We suggest that memory for another individual's face partly depends on an evaluation of the behavioural intention of that individual.
33.
The embodied cognition model states that a “simulation process” is necessary for recognising the emotional significance of a face. The present research explored the contribution of frontal motor components (i.e., mainly the premotor area) to embodied cognition by using rTMS to produce a temporary disruption of this specific cortical site. Secondly, short and long stimulus-duration conditions were included to assess the contribution of the simulation process to overt and covert comprehension of emotional stimuli. Nineteen subjects were asked to detect emotion/no emotion (anger, fear, happiness, neutral) in these two conditions, using a backward-masking procedure. Five seconds of rTMS (1 Hz) were delivered before stimulus onset. False alarms (FAs) and RTs increased and hits decreased when frontal premotor activity was disrupted, specifically in response to anger and fear, in both the long- and short-duration conditions. Thus, the present results highlight the central role of the frontal motor system in the processing of emotional facial expressions.
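Since hits and false alarms are reported separately, a standard way to combine them into a single sensitivity index is signal-detection d′. The sketch below is illustrative only; the rates and trial counts are hypothetical, not the study's data:

```python
from scipy.stats import norm

def d_prime(hit_rate: float, fa_rate: float, n_trials: int) -> float:
    """Signal-detection sensitivity d' = z(H) - z(F).

    A log-linear correction keeps z() finite when a rate hits 0 or 1.
    """
    h = (hit_rate * n_trials + 0.5) / (n_trials + 1)
    f = (fa_rate * n_trials + 0.5) / (n_trials + 1)
    return norm.ppf(h) - norm.ppf(f)

# Hypothetical pattern: disrupting premotor activity raises FAs and lowers hits
print(d_prime(0.90, 0.10, 40))  # intact:    d' ~ 2.45
print(d_prime(0.75, 0.25, 40))  # disrupted: d' ~ 1.31
```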
34.
Decision-making consists of several stages of information processing, including an anticipation stage and an outcome evaluation stage. Previous studies showed that the ventral striatum (VS) is pivotal to both stages, bridging motivation and action, and that it works in concert with the ventromedial prefrontal cortex (vmPFC) and the amygdala. However, evidence concerning how the VS works together with the vmPFC and the amygdala came mainly from neuropathology and animal studies; little is known about the dynamics of this network in the functioning human brain. Here we used fMRI combined with dynamic causal modeling (DCM) to investigate the information flow along the amygdalostriatal and corticostriatal pathways in a facial attractiveness guessing task. Specifically, we asked participants to guess whether a blurred photo of a female face was attractive and to wait for a few seconds (“anticipation stage”) until an unblurred feedback photo of the face, either attractive or unattractive, was presented (“outcome evaluation stage”). At the anticipation stage, the bilateral amygdala and VS showed higher activation for the “attractive” than for the “unattractive” guess. At the outcome evaluation stage, the vmPFC and the bilateral VS were more activated by feedback faces whose attractiveness was congruent with the initial guess than by incongruent faces; however, this effect was significant only for attractive faces, not for unattractive ones. DCM showed that at the anticipation stage, choice-related information entered the amygdalostriatal pathway through the amygdala and was projected to the VS. At the evaluation stage, outcome-related information entered the corticostriatal pathway through the vmPFC. Bidirectional connectivity existed between the vmPFC and the VS, with the VS-to-vmPFC connection weakened by unattractive faces. These findings advance our understanding of the reward circuitry by demonstrating the pattern of information flow along the amygdalostriatal and corticostriatal pathways at different stages of decision-making.
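DCM models hidden neural dynamics with the standard bilinear state equation, dx/dt = (A + Σⱼ uⱼ B⁽ʲ⁾)x + Cu, where A holds fixed connections, B modulatory effects, and C driving inputs. A toy two-region version of the amygdala→VS pathway can be integrated numerically; every connection strength and input timing below is a hypothetical illustration, not a fitted model:

```python
import numpy as np

# Toy two-region DCM neural model: x = [amygdala, VS]
# dx/dt = (A + u_mod * B) @ x + C @ u_drive
A = np.array([[-1.0,  0.0],    # intrinsic connections; self-decay on the diagonal
              [ 0.6, -1.0]])   # amygdala -> VS forward connection
B = np.array([[0.0, 0.0],
              [0.4, 0.0]])     # modulation of amygdala -> VS by the "attractive guess" context
C = np.array([[1.0],
              [0.0]])          # driving input (face cue) enters via the amygdala

dt, T = 0.01, 4.0
x = np.zeros(2)
trace = []
for k in range(int(T / dt)):
    t = k * dt
    u_drive = np.array([1.0 if t < 0.5 else 0.0])  # brief stimulus
    u_mod = 1.0 if t < 2.0 else 0.0                # modulatory context on/off
    dx = (A + u_mod * B) @ x + C @ u_drive
    x = x + dt * dx                                # Euler integration step
    trace.append(x.copy())

print(np.array(trace).max(axis=0))  # peak response in each region
```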
35.
The ability of high-functioning individuals with autism to perceive facial expressions categorically was studied using eight facial expression continua created via morphing software. Participants completed a delayed matching task and an identification task. As for male undergraduate participants (N = 12), identification-task performance of participants with autism (N = 15) was predicted by delayed-matching performance for the angry–afraid, happy–sad, and happy–surprised continua. This result indicates a clear category boundary and suggests that individuals with autism do perceive at least some facial expressions categorically. As this result is inconsistent with findings from other studies of categorical perception in individuals with autism, possible explanations for these findings are discussed.
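A category boundary on a morph continuum is typically located by fitting a logistic function to identification responses. Here is a minimal sketch; the response proportions along a happy–sad continuum are invented for illustration:

```python
import numpy as np
from scipy.optimize import curve_fit

# Hypothetical identification data: proportion of "sad" responses at each
# morph level (0 = fully happy, 1 = fully sad)
morph = np.linspace(0, 1, 8)
p_sad = np.array([0.02, 0.05, 0.10, 0.30, 0.75, 0.92, 0.97, 0.99])

def logistic(x, boundary, slope):
    """Sigmoid identification curve; 'boundary' is the 50% crossover point."""
    return 1.0 / (1.0 + np.exp(-slope * (x - boundary)))

(boundary, slope), _ = curve_fit(logistic, morph, p_sad, p0=[0.5, 10.0])
print(f"category boundary at morph level {boundary:.2f}, slope {slope:.1f}")
```

A steep slope with an abrupt crossover, as fitted here, is the signature of a clear category boundary.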
36.
Eyewitnesses often construct a “composite” face of a person they saw commit a crime, a picture that police use to identify suspects. We previously described a technique (Frowd, Bruce, Ross, McIntyre, & Hancock, 2007) based on facial caricature that facilitates recognition of these images: correct naming improves substantially when composites are seen first with progressive positive caricature, in which distinctive information is enhanced, and then with progressive negative caricature, in which it is attenuated. Over the course of four experiments, the underpinnings of this mechanism were explored. Positive-caricature levels were found to be largely responsible for improving the naming of composites, with some additional benefit from negative-caricature levels. Also, different frame-presentation orders (forward, reverse, random, repeated) produced equivalent naming benefits relative to static composites. Overall, the data indicate that composites are usually constructed as negative caricatures.
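Geometrically, caricaturing scales a face's deviation from an average face, so a progressive positive-to-negative sweep is just a sequence of such scalings. The sketch below assumes faces are represented as 2-D landmark coordinates; the arrays, point count, and levels are hypothetical, not the authors' pipeline:

```python
import numpy as np

def caricature(landmarks: np.ndarray, average: np.ndarray, level: float) -> np.ndarray:
    """Shift a face's landmark geometry away from (or toward) the average face.

    level > 0 : positive caricature (distinctive information exaggerated)
    level < 0 : negative caricature (face pulled toward the average)
    level = 0 : the veridical face
    """
    return average + (1.0 + level) * (landmarks - average)

# Hypothetical landmark arrays of shape (n_points, 2)
rng = np.random.default_rng(1)
avg_face = rng.normal(size=(68, 2))
composite = avg_face + rng.normal(scale=0.1, size=(68, 2))

# Animation sweep: progressive positive caricature down to negative caricature
frames = [caricature(composite, avg_face, lvl)
          for lvl in np.linspace(0.5, -0.5, 21)]
```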
37.
The ability to recognize mental states from facial expressions is essential for effective social interaction. However, previous investigations of mental state recognition have used only static faces, so the benefit of dynamic information for recognizing mental states remains to be determined. Experiment 1 found that dynamic faces produced higher recognition accuracy than static faces, suggesting that the additional information contained within dynamic faces can facilitate mental state recognition. Experiment 2 explored the facial regions that are important for providing dynamic information in mental state displays. This involved a new technique for freezing motion in a particular facial region (eyes, nose, mouth) so that this region remained static while the remainder of the face moved naturally. Findings showed that dynamic information in the eyes and the mouth was important, and that the influential region depended on the mental state. Processes involved in mental state recognition are discussed.
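The freezing technique can be sketched as overwriting one region of every frame with its appearance in the first frame, so that region stays static while the rest of the face moves. The function name, clip dimensions, and pixel bounds below are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

def freeze_region(frames: np.ndarray, region: tuple) -> np.ndarray:
    """Hold one facial region static while the rest of the face keeps moving.

    frames : array of shape (n_frames, height, width), a dynamic face clip
    region : (top, bottom, left, right) pixel bounds of e.g. the eye region
    """
    top, bottom, left, right = region
    frozen = frames.copy()
    # Replace the region in every frame with its appearance in frame 0
    frozen[:, top:bottom, left:right] = frames[0, top:bottom, left:right]
    return frozen

# Hypothetical 30-frame grayscale clip with the eye region frozen
clip = np.random.rand(30, 240, 180)
static_eyes = freeze_region(clip, region=(60, 100, 40, 140))
```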
38.
Previous studies have shown that spatial attention can be “captured” by irrelevant events, but only if the eliciting stimulus matches top-down attentional control settings. Here we explore whether similar principles hold for nonspatial attentional selection. Subjects searched for a coloured target letter embedded in an RSVP stream of letters inside a box centred on fixation. On critical trials, a distractor, consisting of a brief change in the colour of the box, occurred at various temporal lags prior to the target. In Experiment 1, the distractor produced a decrement in target detection, but only when it matched the target colour. Experiments 2 and 3 provide evidence that this effect does not reflect masking or the dispersion of spatial attention. The results establish that (1) nonspatial selection is subject to “capture”, (2) such capture is contingent on top-down attentional control settings, and (3) control settings for nonspatial capture can vary in specificity.
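The key manipulation is the temporal lag between the box-colour distractor and the coloured target within the letter stream. A hedged sketch of how one such trial schedule might be generated; the frame count, positions, and colours are hypothetical, not the published parameters:

```python
import random

def rsvp_trial(n_frames: int = 20, target_pos: int = 14, lag: int = 3,
               target_colour: str = "red", distractor_colour: str = "red"):
    """Build one RSVP trial: a letter stream with a coloured target letter
    and a box-colour-change distractor `lag` frames before the target.
    """
    letters = random.sample("BCDFGHJKLMNPQRSTVWXZ", n_frames)
    frames = [{"letter": ch, "letter_colour": "black", "box_colour": "white"}
              for ch in letters]
    frames[target_pos]["letter_colour"] = target_colour        # target frame
    frames[target_pos - lag]["box_colour"] = distractor_colour  # distractor frame
    return frames

trial = rsvp_trial(lag=2)  # distractor appears two frames before the target
```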
39.
Two experiments investigated the role that different face regions play in a variety of social judgements that are commonly made from facial appearance (sex, age, distinctiveness, attractiveness, approachability, trustworthiness, and intelligence). These judgements lie along a continuum from those with a clear physical basis and high consequent accuracy (sex, age) to judgements that can achieve a degree of consensus between observers despite having little known validity (intelligence, trustworthiness). Results from Experiment 1 indicated that the face's internal features (eyes, nose, and mouth) provide information that is more useful for social inferences than the external features (hair, face shape, ears, and chin), especially when judging traits such as approachability and trustworthiness. Experiment 2 investigated how judgement agreement was affected when the upper head, eye, nose, or mouth regions were presented in isolation or when these regions were obscured. A different pattern of results emerged for different characteristics, indicating that different types of facial information are used in the various judgements. Moreover, the informativeness of a particular region/feature depends on whether it is presented alone or in the context of the whole face. These findings provide evidence for the importance of holistic processing in making social attributions from facial appearance.
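Judgement agreement of this kind is often quantified as consensus across observers, for instance with Cronbach's alpha computed over raters. A sketch with simulated ratings; the trait structure, rater count, and noise level are invented and this is not the authors' analysis:

```python
import numpy as np

def cronbach_alpha(ratings: np.ndarray) -> float:
    """Consensus across observers via Cronbach's alpha.

    ratings : shape (n_faces, n_raters), each column one observer's judgements
    """
    k = ratings.shape[1]
    item_vars = ratings.var(axis=0, ddof=1)        # variance of each rater
    total_var = ratings.sum(axis=1).var(ddof=1)    # variance of summed ratings
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Hypothetical trustworthiness ratings of 40 faces by 10 observers
rng = np.random.default_rng(2)
true_trait = rng.normal(size=(40, 1))
ratings = true_trait + rng.normal(scale=1.0, size=(40, 10))
print(f"alpha = {cronbach_alpha(ratings):.2f}")
```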
40.
Several studies have investigated the role of featural and configural information in the processing of facial identity; much less is known about their contribution to emotion recognition. In this study, we addressed this issue by inducing either a featural or a configural processing strategy (Experiment 1) and by investigating attentional strategies in response to emotional expressions (Experiment 2). In Experiment 1, participants identified emotional expressions in faces presented in three versions (intact, blurred, and scrambled) and two orientations (upright and inverted). Blurred faces contain mainly configural information, scrambled faces contain mainly featural information, and inversion is known to selectively hinder configural processing. Analyses of the discriminability measure (A′) and response times (RTs) revealed that configural processing plays a more prominent role in expression recognition than featural processing, but their relative contribution varies with the emotion. In Experiment 2, we qualified these differences between emotions by investigating the relative importance of specific features by means of eye movements. Participants had to match intact expressions with the emotional cues that preceded the stimulus. The analysis of eye movements confirmed that the recognition of different emotions relies on different types of information: while the mouth is important for detecting happiness and fear, the eyes are more relevant for anger, fear, and sadness.
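A′ is the standard non-parametric sensitivity index computed from hit and false-alarm rates (Grier, 1971), useful when the Gaussian assumptions behind d′ are in doubt. A short sketch; the example rates are hypothetical, not the study's data:

```python
def a_prime(h: float, f: float) -> float:
    """Non-parametric sensitivity A' (Grier, 1971) from hit rate h and
    false-alarm rate f; 0.5 = chance, 1.0 = perfect discrimination."""
    if h >= f:
        return 0.5 + ((h - f) * (1 + h - f)) / (4 * h * (1 - f))
    return 0.5 - ((f - h) * (1 + f - h)) / (4 * f * (1 - h))

# Hypothetical rates for an intact vs. a scrambled (featural-only) condition
print(a_prime(0.88, 0.12))  # intact    -> ~ 0.93
print(a_prime(0.70, 0.30))  # scrambled -> ~ 0.79
```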