71.
Variations in the serotonin transporter gene (5HTTLPR) and biased processing of face-emotion displays have both been implicated in the transmission of depression risk, but little is known about developmental influences on these relationships. Within a community sample of adolescents, we examined whether 5HTTLPR genotype moderates the link between maternal depressive history and errors in face-emotion labeling. When controlling for current levels of depression and anxiety among youth, a two-way interaction between maternal depressive history and 5HTTLPR genotype was detected. Specifically, adolescents whose mothers reported a depressive history and who had a low expressing genotype made more errors in classifying emotional faces than adolescents with an intermediate or high expressing genotype, with or without a maternal depression history. These findings highlight the complex manner in which maternal depression and genetic risk may interact to predict individual differences in social information processing.
72.
A highly familiar type of movement occurs whenever a person walks towards you. In the present study, we investigated whether this type of motion has an effect on face processing. We took a range of different 3D head models and placed them on a single, identical 3D body model. The resulting figures were animated to approach the observer. In a first series of experiments, we used a sequential matching task to investigate how the motion of an approaching person affects immediate responses to faces. We compared observers’ responses following approach sequences to their performance with figures walking backwards (receding motion) or remaining still. Observers were significantly faster in responding to a target face that followed an approach sequence, compared to both receding and static primes. In a second series of experiments, we investigated long-term effects of motion using a delayed visual search paradigm. After studying moving or static avatars, observers searched for target faces in static arrays of varying set sizes. Again, observers were faster at responding to faces that had been learned in the context of an approach sequence. Together these results suggest that the context of a moving body influences face processing, and support the hypothesis that our visual system has mechanisms that aid the encoding of behaviourally relevant and familiar dynamic events.
73.
Little AC, DeBruine LM, Jones BC. Cognition, 2011, 118(1): 116-122.
A face appears normal when it approximates the average of a population. Consequently, exposure to faces biases perceptions of subsequently viewed faces such that faces similar to those recently seen are perceived as more normal. Simultaneously inducing such aftereffects in opposite directions for two groups of faces indicates somewhat discrete representations for those groups. Here we examine how labelling influences the perception of category in faces differing in colour. We show category-contingent aftereffects following exposure to faces differing in eye spacing (wide versus narrow) for blue versus red faces when such groups are consistently labelled with socially meaningful labels (Extravert versus Introvert; Soldier versus Builder). Category-contingent aftereffects were not seen using identical methodology when labels were not meaningful or were absent. These data suggest that human representations of faces can be rapidly tuned to code for meaningful social categories and that such tuning requires both a label and an associated visual difference. Results highlight the flexibility of the cognitive visual system to discriminate categories even in adulthood.
74.
When the bottom halves of two faces differ, people’s behavioral judgment of the identical top halves of those faces is impaired: they report that the top halves are different, and/or take more time than usual to provide a response. This behavioral measure is known as the composite face effect (CFE) and has traditionally been taken as evidence that faces are perceived holistically. Recently, however, it has been claimed that this effect is driven almost entirely by decisional, rather than perceptual, factors (Richler, Gauthier, Wenger, & Palmeri, 2008). To disentangle the contribution of perceptual and decisional brain processes, we aimed to obtain an event-related potential (ERP) measure of the CFE at a stage of face encoding (Jacques & Rossion, 2009) in the absence of a behavioral CFE. Sixteen participants performed a go/no-go task in an oddball paradigm, lifting a finger of their right or left hand when the top half of a face changed identity. This change of identity of the top of the face was associated with an increased ERP signal on occipito-temporal electrode sites at the N170 face-sensitive component (∼160 ms), the later decisional P3b component, and the lateralized readiness potential (LRP) starting at ∼350 ms. The N170 effect was observed equally early when only the unattended bottom part of the face changed, indicating that an identity change was perceived across the whole face in this condition. Importantly, there was no behavioral response bias for the bottom change trials, and no evidence of decisional biases from electrophysiological data (no P3b and LRP deflection in no-go trials). These data show that an early CFE can be measured in ERPs in the absence of any decisional response bias, indicating that the CFE reflects primarily the visual perception of the whole face.
75.
Hughes and Nicholson (2010) suggest that recognizing oneself is easier from face vs. voice stimuli, that a combined presentation of face and voice actually inhibits self-recognition relative to presentation of face or voice alone, that the left hemisphere is superior in self-recognition to the right hemisphere, and that recognizing self requires more effort than recognizing others. A re-examination of their method, data, and analyses unfortunately shows important ceiling effects that cast doubts on these conclusions.
76.
Normal observers demonstrate a bias to process the left sides of faces during perceptual judgments about identity or emotion. This effect suggests a right cerebral hemisphere processing bias. To test the role of the right hemisphere and the involvement of configural processing underlying this effect, young and older control observers and patients with right hemisphere damage completed two chimeric faces tasks (emotion judgment and face identity matching) with both upright and inverted faces. For control observers, the emotion judgment task elicited a strong left-sided perceptual bias that was reduced in young controls and eliminated in older controls by face inversion. Right hemisphere damage reversed the bias, suggesting that the right hemisphere was dominant for this task but that the left hemisphere could be flexibly recruited when right hemisphere mechanisms are not available or dominant. In contrast, face identity judgments were associated most clearly with a vertical bias favouring the uppermost stimuli, a bias that was eliminated by face inversion and right hemisphere lesions. The results suggest these tasks involve different neurocognitive mechanisms. The roles of the right hemisphere and of ventral-stream configural processes in face processing are discussed.
77.
Facial expression and direction of gaze are two important sources of social information, and what message each conveys may ultimately depend on how the respective information interacts in the eye of the perceiver. Direct gaze signals an interaction with the observer but averted gaze amounts to "pointing with the eyes", and in combination with a fearful facial expression may signal the presence of environmental danger. We used fMRI to examine how gaze direction influences brain processing of facial expression of fear. The combination of fearful faces and averted gazes activated areas related to gaze shifting (STS, IPS) and fear-processing (amygdala, hypothalamus, pallidum). Additional modulation of activation was observed in motion detection areas, in premotor areas and in the somatosensory cortex, bilaterally. Our results indicate that the direction of gaze prompts a process whereby the brain combines the meaning of the facial expression with the information provided by gaze direction, and in the process computes the behavioral implications for the observer.
78.
Atypical processing of eye contact is one of the significant characteristics of individuals with autism, but the mechanism underlying atypical direct gaze processing is still unclear. This study used a visual search paradigm to examine whether facial context affects direct gaze detection in children with autism. Participants were asked to detect target gazes presented among distracters with different gaze directions. The target gazes were either direct or averted, and were presented either alone (Experiment 1) or within a facial context (Experiment 2). As with typically developing children, children with autism were faster and more efficient at detecting direct gaze than averted gaze, whether the eyes were presented alone or within faces. In addition, face inversion disrupted efficient direct gaze detection in typically developing children, but not in children with autism. These results suggest that children with autism use featural information to detect direct gaze, whereas typically developing children use configural information to detect direct gaze.
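In visual search paradigms like the one above, search efficiency is conventionally summarized as the slope of mean response time against display set size (ms per item): a shallow slope indicates efficient, near-parallel search. The sketch below illustrates the computation with purely hypothetical reaction times and set sizes (none of these numbers come from the study):

```python
import numpy as np

# Hypothetical mean reaction times (ms) at each display set size.
set_sizes = np.array([4, 8, 12])
rt_direct = np.array([520, 540, 560])   # direct-gaze targets: shallow slope
rt_averted = np.array([560, 680, 800])  # averted-gaze targets: steep slope

# Least-squares line fit; the first coefficient is the slope in ms/item.
slope_direct = np.polyfit(set_sizes, rt_direct, 1)[0]
slope_averted = np.polyfit(set_sizes, rt_averted, 1)[0]

print(f"direct gaze:  {slope_direct:.1f} ms/item")
print(f"averted gaze: {slope_averted:.1f} ms/item")
```

With these illustrative numbers the direct-gaze slope (5 ms/item) is far shallower than the averted-gaze slope (30 ms/item), the pattern described as "faster and more efficient" detection of direct gaze.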
79.
A critical question in Cognitive Science concerns how knowledge of specific domains emerges during development. Here we examined how limitations of the visual system during the first days of life may shape subsequent development of face processing abilities. By manipulating the spatial frequency bands of face images, we investigated the nature of the visual information that newborn infants rely on to recognize faces. Newborns were able to extract from a face the visual information in the 0-1 cycles-per-degree (cpd) range (Experiment 1), but only the narrower 0-0.5 cpd range supported face recognition (Experiment 2). These results provide the first empirical support for a low spatial frequency advantage in individual face recognition at birth and suggest that early in life low-level, non-specific perceptual constraints affect the development of the face processing system.
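Restricting a face image to a low spatial-frequency band, as in the experiments above, amounts to low-pass filtering in cycles per degree of visual angle. The following is a minimal sketch (not the authors' stimulus-generation code) using an ideal FFT mask; the image size and viewing angle are hypothetical:

```python
import numpy as np

def lowpass_filter(image, cutoff_cpd, deg_per_image):
    """Keep only spatial frequencies below cutoff_cpd (cycles per degree).

    image: 2D grayscale array; deg_per_image: visual angle the image subtends.
    """
    h, w = image.shape
    # fftfreq gives cycles/pixel; scale to cycles/image, then cycles/degree.
    fy = np.fft.fftfreq(h) * h / deg_per_image
    fx = np.fft.fftfreq(w) * w / deg_per_image
    radius = np.sqrt(fy[:, None] ** 2 + fx[None, :] ** 2)
    mask = radius <= cutoff_cpd            # ideal (hard-edged) low-pass mask
    filtered = np.fft.ifft2(np.fft.fft2(image) * mask)
    return np.real(filtered)

# Example: a 128x128 image assumed to span 10 degrees of visual angle,
# filtered to the 0-0.5 cpd band of Experiment 2.
img = np.random.rand(128, 128)
low = lowpass_filter(img, cutoff_cpd=0.5, deg_per_image=10.0)
```

Because the 0-0.5 cpd band keeps only a handful of Fourier coefficients, the filtered image retains just the coarse light-dark layout of the face, which is exactly the information the newborns were shown to use.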
80.
Wong JH, Peterson MS, Thompson JC. Cognition, 2008, 108(3): 719-731.
The capacity of visual working memory was examined when complex objects from different categories were remembered. Previous studies have not examined how visual similarity affects object memory, though it has long been known that similar-sounding phonological information interferes with rehearsal in auditory working memory. Here, experiments required memory for two or four objects. Memory capacity was compared between remembering four objects from a single object category and remembering four objects from two different categories. Two-category sets led to increased memory capacity only when upright faces were included. Capacity for face-only sets never exceeded that of their nonface counterparts, and the advantage for two-category sets with faces as one of the categories disappeared when inverted faces were used. These results suggest that two-category sets which include faces are advantaged in working memory but that faces alone do not lead to a memory capacity advantage.
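The abstract does not state which capacity formula was used; a common estimate in change-detection studies of this kind is Cowan's K, which scales set size by the difference between hit and false-alarm rates. A sketch with hypothetical rates (the specific values below are illustrative, not the study's data):

```python
def cowan_k(set_size, hit_rate, false_alarm_rate):
    """Cowan's K capacity estimate for a change-detection task:
    K = N * (hit rate - false-alarm rate)."""
    return set_size * (hit_rate - false_alarm_rate)

# Hypothetical four-object displays: a two-category set (with faces)
# yielding better change detection than a single-category set.
k_single = cowan_k(4, hit_rate=0.70, false_alarm_rate=0.10)   # 2.4 items
k_two_cat = cowan_k(4, hit_rate=0.85, false_alarm_rate=0.10)  # 3.0 items
```

Under these made-up rates, the two-category advantage appears as roughly half an item of extra capacity, the kind of difference such comparisons are designed to detect.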