121.
Carmel D, Bentin S. Cognition, 2002, 83(1): 1-29
To explore face specificity in visual processing, we compared the role of task-associated strategies and expertise on the N170 event-related potential (ERP) component elicited by human faces with the ERPs elicited by cars, birds, items of furniture, and ape faces. In Experiment 1, participants performed a car-monitoring task and an animacy-decision task. In Experiment 2, participants monitored human faces while faces of apes were the distracters. Faces elicited an equally conspicuous N170, significantly larger than the ERPs elicited by non-face categories, regardless of whether they were ignored or had equal status with other categories (Experiment 1) or were the targets (Experiment 2). In contrast, the negative component elicited by cars during the same time range was larger if they were targets than if they were not. Furthermore, unlike the posterior-temporal distribution of the N170, the negative component elicited by cars and its modulation by task were more conspicuous at occipital sites. Faces of apes elicited an N170 similar in amplitude to that elicited by the human face targets, albeit peaking 10 ms later. As our participants were not ape experts, this pattern indicates that the N170 is face-specific but not species-specific, i.e. it is elicited by particular face features regardless of expertise. Overall, these results demonstrate the domain specificity of the visual mechanism implicated in processing faces, a mechanism which is not influenced by either task or expertise. The processing of other objects is probably accomplished by a more general visual processor, which is sensitive to strategic manipulations and attention.
122.
This research investigated infants' scanning of a talking, socially engaging face. Three- to four-month-olds looked equally at the mouth and eyes, whereas 9-month-olds attended more to the eyes than the mouth. These findings shed light on the information infants seek from dynamic face stimuli.
123.
An experimental manipulation was conducted to test the hypothesis that monitoring for sleep-related threat during the day triggers a cycle of cognitive processes that includes increased negative thinking, increased use of safety behaviours, increased perceived impairment in functioning, and increased self-reported sleepiness. Forty-seven individuals with primary insomnia were randomly assigned to a monitoring group (instructed to monitor their body sensations), a no-monitoring group (instructed to distract from their body sensations), or a no-instruction group. The manipulations to monitor or not monitor were administered immediately on waking, and participants were asked to continue the manipulation throughout the experimental day. The monitoring group reported more negative thoughts, more use of safety behaviours, and more sleepiness during the day relative to the no-instruction group. These findings offer support for a recent cognitive model, which identifies daytime monitoring for sleep-related threat as a key factor in maintaining the daytime distress and impaired functioning seen in chronic insomnia.
124.
We used the Remember–Know procedure (Tulving, 1985) to test the behavioural expression of memory following indirect and direct forms of emotional processing at encoding. Participants (N=32) viewed a series of facial expressions (happy, fearful, angry, and neutral) while performing tasks involving either indirect (gender discrimination) or direct (emotion discrimination) emotion processing. After a delay, participants completed a surprise recognition memory test. Our results revealed that indirect encoding of emotion produced enhanced memory for fearful faces whereas direct encoding of emotion produced enhanced memory for angry faces. In contrast, happy faces were better remembered than neutral faces after both indirect and direct encoding tasks. These findings suggest that fearful and angry faces benefit from a recollective advantage when they are encoded in a way that is consistent with the predictive nature of their threat. We propose that the broad memory advantage for happy faces may reflect a form of cognitive flexibility that is specific to positive emotions.
125.
It is well known that we utilize internalized representations (or schemas) to direct our eyes when exploring visual stimuli. Interestingly, our schemas for human faces are known to reflect systematic differences that are consistent with one's level of racial prejudice. However, whether one's level or type of racial prejudice can differentially regulate how we visually explore faces that are the target of prejudice is currently unknown. Here, White participants varying in their level of implicit or explicit prejudice viewed Black faces and White faces (with the latter serving as a control) while having their gaze behaviour recorded with an eye-tracker. The results show that, regardless of prejudice type (i.e., implicit or explicit), participants high in racial prejudice examine faces differently than those low in racial prejudice. Specifically, individuals high in explicit racial prejudice were more likely to fixate on the mouth region of Black faces when compared to individuals low in explicit prejudice, and exhibited less consistency in their scanning of faces irrespective of race. On the other hand, individuals high in implicit racial prejudice tended to focus on the region between the eyes, regardless of face race. It therefore seems that racial prejudice guides target-race specific patterns of looking behaviour, and may also contribute to general patterns of looking behaviour when visually exploring human faces.
126.
Studies have shown that emotion elicited after learning enhances memory consolidation; however, no prior studies have used facial photographs as stimuli. This study examined the effect of post-learning positive emotion on the consolidation of memory for faces. During learning, participants viewed neutral, positive, or negative faces. They were then assigned to a condition in which they watched either a 9-minute positive video clip or a 9-minute neutral video. Thirty minutes after learning, participants took a surprise memory test in which they made "remember", "know", and "new" judgements. The findings are: (1) positive emotion enhanced consolidation of recognition for negative male faces but impaired consolidation of recognition for negative female faces; (2) for males, recognition of negative faces was equivalent to that of positive faces, whereas for females, recognition of negative faces was better than that of positive faces. Our study provides important evidence that the effect of post-learning emotion on memory consolidation extends to facial stimuli and that this effect can be modulated by facial valence and facial gender. The findings may shed light on establishing models of the influence of emotion on memory consolidation.
127.
People rapidly infer personality traits from a target person's facial cues and thereby form a first impression of the target's personality. In this face-based personality perception process, the perceptual outcome is influenced by the perceived target, the perceiver, and the interaction between the two. This paper reviews the factors influencing face-based personality perception from these three perspectives and offers an outlook for future research, with the aim of supporting more systematic and scientific study of face-based personality perception and providing a theoretical reference for the design and management of interpersonal impressions.
128.
Facial stimuli are widely used in behavioural and brain science research to investigate emotional facial processing. However, some studies have demonstrated that dynamic expressions elicit stronger emotional responses compared to static images. To address the need for more ecologically valid and powerful facial emotional stimuli, we created Dynamic FACES, a database of morphed videos (n = 1026) from younger, middle-aged, and older adults displaying naturalistic emotional facial expressions (neutrality, sadness, disgust, fear, anger, happiness). To assess adult age differences in emotion identification of dynamic stimuli and to provide normative ratings for this modified set of stimuli, healthy adults (n = 1822, age range 18–86 years) categorised for each video the emotional expression displayed, rated the expression distinctiveness, estimated the age of the face model, and rated the naturalness of the expression. We found few age differences in emotion identification when using dynamic stimuli. Only for angry faces did older adults show lower levels of identification accuracy than younger adults. Further, older adults outperformed middle-aged adults in identification of sadness. The use of dynamic facial emotional stimuli has previously been limited, but Dynamic FACES provides a large database of high-resolution, naturalistic, dynamic expressions across adulthood. Information on using Dynamic FACES for research purposes can be found at http://faces.mpib-berlin.mpg.de.
129.
Healthy aging is associated with impairments in face recognition. While earlier research suggests that these impairments arise during memory retrieval, more recent findings suggest that earlier mechanisms, at the perceptual stage, may also be at play. However, results are often inconsistent and very few studies have included a non-face control stimulus to facilitate interpretation of results with respect to the implication of specialized face mechanisms vs. general cognitive factors. To address these issues, P100, N170 and P200 event-related potentials (ERPs) were measured during processing of faces and watches. For faces, age-related differences were found for P100, N170 and P200 ERPs. For watches, age-related differences were found for N170 and P200 ERPs. Older adults showed less selective and less lateralized N170 responses to faces, suggesting that ERPs can detect age-related de-differentiation of specialized face networks. We conclude that age-related impairments in face recognition arise in part from difficulties in the earliest perceptual stages of visual information processing. A working model is presented based on coarse-to-fine analysis of visually similar exemplars.