41.
Historically, it was believed that the perceptual mechanisms involved in individuating faces developed only very slowly over the course of childhood, and that adult levels of expertise were not reached until well into adolescence. Over the last 10 years, this view has been somewhat eroded by demonstrations that all adult-like behavioural properties are qualitatively present in young children and infants. Determining the age of maturity, however, requires quantitative comparison across age groups, a task made difficult by the need to disentangle development in face perception from development in all the other cognitive factors that affect task performance. Here, we argue that full quantitative maturity is reached early, by 5-7 years at the latest and possibly earlier. This argument is based on a comprehensive literature review of results in the 5-years-to-adult age range, with particular focus on the few previous studies that are methodologically suitable for quantitative comparison of face effects across age, plus three new experiments testing the development of holistic/configural processing (faces versus objects, disproportionate inversion effect), the ability to encode novel faces (assessed via implicit memory) and face-space (own-age bias).
42.
In this study, matched positive and neutral faces were presented under a short-duration (60 ms) condition and a long-duration (unlimited viewing time) condition, and participants high and low in social anxiety were asked to choose the more threatening face, in order to test whether the two groups show a subjective interpretation bias towards positive stimuli and to examine at which stage of cognitive processing such a bias arises. Under the long-duration condition, the high-anxiety group chose the positive face significantly more often than the low-anxiety group; there was no significant difference under the short-duration condition. These results suggest that individuals with high social anxiety show a subjective, explicit interpretation bias towards positive stimuli, tending to interpret them negatively, and that this bias arises at a late stage of cognitive processing.
43.
44.
45.
Research has shown that anger faces represent a potent motivational incentive for individuals with a high implicit power motive (nPower). However, it is well known that anger expressions can vary in intensity, ranging from mild anger to rage. To examine nPower-relevant processing of emotional intensity in anger faces, an ERP oddball task with facial stimuli was used, with neutral expressions as the standard and targets varying in anger intensity (50%, 100%, or 150% emotive). Thirty-one college students participated in the experiment (15 low and 16 high nPower persons, as determined by the Picture Story Exercise, PSE). A higher percentage of correct responses was observed for high nPower persons than for low nPower persons when discriminating low-intensity (50%) anger faces from neutral faces. ERPs to 100% and 150% anger expressions revealed that high-intensity (150%) expressions elicited larger P3a and late positive potential (LPP) amplitudes than prototypical (100% intensity) expressions in power-motivated individuals. Conversely, low nPower participants showed no differences at either the P3a or the LPP component. These findings demonstrate that persons with high nPower are sensitive to intensity changes in anger faces and that their sensitivity increases with the intensity of the anger faces.
46.
Identity perception often takes place in multimodal settings, where perceivers have access to both visual (face) and auditory (voice) information. Despite this, identity perception is usually studied in unimodal contexts, where face and voice identity perception are modelled independently of one another. In this study, we asked whether and how much auditory and visual information contribute to audiovisual identity perception from naturally-varying stimuli. In a between-subjects design, participants completed an identity sorting task with either dynamic video-only, audio-only or dynamic audiovisual stimuli. In this task, participants were asked to sort multiple, naturally-varying stimuli from three different people by perceived identity. We found that identity perception was more accurate for video-only and audiovisual stimuli than for audio-only stimuli. Interestingly, there was no difference in accuracy between video-only and audiovisual stimuli. Auditory information nonetheless played a role alongside visual information, as audiovisual identity judgements per stimulus could be predicted from both auditory and visual identity judgements. Although the relationship between visual and audiovisual judgements was stronger, auditory information still uniquely explained a significant portion of the variance in audiovisual identity judgements. Our findings thus align with previous theoretical and empirical work proposing that, compared with faces, voices are an important but less salient and weaker cue to identity perception. We expand on this work to show that, at least in the context of this study, having access to voices in addition to faces does not improve identity perception accuracy.
47.
Background and objectives: Studies suggest that the right hemisphere is dominant for emotional facial recognition. In addition, whereas some studies suggest the right hemisphere mediates the processing of all emotions (dominance hypothesis), other studies suggest that the left hemisphere mediates positive emotions and the right hemisphere mediates negative emotions (valence hypothesis). Since each hemisphere primarily attends to contralateral space, the goals of this study were to learn whether emotional faces would induce a leftward deviation of attention and whether the valence of facial emotional stimuli can influence the normal viewer's spatial direction of attention. Methods: Seventeen normal right-handed participants were asked to bisect horizontal lines that had all combinations of sad, happy or neutral faces at the ends of these lines. During this task the subjects were never requested to look at these faces and there were no task demands that depended on viewing them. Results: Presentation of emotional faces induced a greater leftward deviation than neutral faces, independent of where (spatial position) these faces were presented. However, faces portraying negative emotions tended to induce a greater leftward bias than those portraying positive emotions. Conclusions: Independent of location, the presence of emotional faces influenced the spatial allocation of attention, such that normal subjects shifted the direction of their attention toward left hemispace, and this attentional shift appears to be greater for negative (sad) than for positive (happy) faces.
48.
Young and older adults searched for a unique face in a set of three schematic faces and identified a secondary feature of the target. The faces could be negative, positive, or neutral. Young adults were slower and less accurate in searching for a negative face among neutral faces when they had previewed a display of negative faces than when they had previewed neutral faces, indicating an emotional distractor previewing effect (DPE), but this effect was eliminated with inverted faces. The DPE is an index of inter-trial inhibition to keep attention away from previewed, non-target information. Older adults also showed such an emotional DPE, but it was present with both upright and inverted faces. These results show that, in general, both young and old participants are sensitive to trial history, yet the different patterns of results suggest that these two groups remember and use different types of perceptual information when searching through emotional faces.
49.
We compared the speed of discrimination for emotional and neutral faces in four experiments, using forced-choice saccadic and manual reaction time tasks. Unmasked, brief (20 ms) bilateral presentation of schematic (Exp. 1) or naturalistic (Exp. 2) emotional/neutral face pairs led to shorter discrimination latencies for emotional stimuli in the saccadic localisation task. When interference from emotional stimuli was ruled out by pairing the emotional or neutral face with an outline face, faster saccadic discrimination was obtained for fearful facial expressions only (Exp. 3). Manual discrimination reaction times did not differ significantly between emotional and neutral stimuli. To explore the absence of a manual RT effect, we manipulated the stimulus duration (20 ms vs. 500 ms; Exp. 4). Faster saccadic discrimination of emotional stimuli was observed at both durations. For manual responses, an emotional bias was observed only at the longer duration (500 ms). Overall, the comparison of saccadic and manual responses shows that faster discrimination of emotional from neutral stimuli can be carried out within the oculomotor system. In addition, emotional stimuli are processed preferentially to neutral face stimuli.
50.
This study identified components of attentional bias (e.g. attentional vigilance, attentional avoidance and difficulty with disengagement) that are critical characteristics of survivors of dating violence (DV). Eye movements were recorded to obtain accurate and continuous information regarding attention. DV survivors with high post-traumatic stress symptoms (DV-High PTSS group; n = 20) and low post-traumatic stress symptoms (DV-Low PTSS group; n = 22) and participants who had never experienced DV (NDV group; n = 21) were shown screens displaying emotional (angry, fearful and happy) faces paired with neutral faces and negative (angry and fearful) faces paired with happy faces for 10 s. The results indicate that the DV-High PTSS group spent longer dwelling on angry faces over time compared with the DV-Low PTSS and NDV groups. This result implies that the DV-High PTSS group focused on specific trauma-related stimuli but does not provide evidence of an attentional bias towards threatening stimuli in general.