Similar Literature
20 similar documents found
1.
Past research suggests an aging-related positivity effect in orienting to faces. However, these studies have eschewed direct comparison of orienting when positive and negative faces are presented simultaneously, thereby potentially underestimating the degree to which emotional valence influences such effects. In the current study, younger and older adults viewed face pairs for 1000 ms and, upon face-pair offset, indicated the location of a dot that appeared in the former location of one of the faces, to assess attentional orienting. When shown negative–neutral pairs, both age groups were biased to attend to negative faces, but when shown positive–negative pairs only younger adults showed a bias toward the negative faces; older adults showed no orienting bias toward either emotional face. Results suggest younger adults have a negativity bias in attentional orienting regardless of the valence of nearby stimuli, whereas older adults show an absence of this bias when positive information is present.
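For orientation, the sketch below shows how a dot-probe orienting bias of this kind is commonly quantified (Python). The function name and the example reaction times are illustrative assumptions, not data or code from the study; a positive score means faster responses when the dot replaces the emotional face, i.e., attention was oriented toward it.

    # Illustrative dot-probe bias score: mean RT when the probe replaces the
    # neutral face minus mean RT when it replaces the emotional face.
    # Positive values indicate orienting toward the emotional face.
    from statistics import mean

    def attentional_bias_ms(rt_probe_at_neutral, rt_probe_at_emotional):
        """Both arguments are lists of correct-trial reaction times in ms."""
        return mean(rt_probe_at_neutral) - mean(rt_probe_at_emotional)

    # Hypothetical trials: faster responses at the negative face's location
    # yield a positive (negative-face) orienting bias of about 30 ms.
    print(attentional_bias_ms([520, 540, 530], [495, 500, 505]))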

2.
Gaze direction plays a central role in face recognition. Previous research suggests that faces with direct gaze are better remembered than faces with averted gaze. We compared recognition of faces with direct versus averted gaze in male versus female participants. A total of 52 adults (23 females, 29 males) and 46 children (25 females, 21 males) completed a computerised task that assessed their recognition of faces with direct gaze and faces with averted gaze. Adult male participants showed superior recognition of faces with direct gaze compared to faces with averted gaze. There was no difference between recognition of direct and averted gaze faces for the adult female participants. Children did not demonstrate this sex difference; rather, both male and female youth participants showed better recognition of faces with direct gaze compared to averted gaze. A large body of previous research has revealed superior recognition of faces with direct, compared to averted gaze. However, relatively few studies have examined sex differences. Our findings suggest that gaze direction has differential effects on face recognition for adult males and females, but not for children. These findings have implications for previous explanations of better recognition for direct versus averted gaze.

3.
A signal-detection task was used to assess sex differences in emotional face recognition under conditions of uncertainty. Computer images of Ekman faces showing sad, angry, happy, and fearful emotional states were presented for 50 ms to thirty-six men and thirty-seven women. All participants monitored for presentation of either happy, angry, or sad emotional expressions in three separate blocks. Happy faces were the most easily discriminated. Sad and angry expressions were most often mistaken for each other. Analyses of d' values, hit rates, and reaction times all yielded similar results, with no sex differences for any of the measures.
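As a point of reference for the signal-detection analysis mentioned above, the sketch below shows the standard computation of d' from hit and false-alarm rates (Python). It is a generic illustration, not the authors' analysis code, and the correction applied to rates of exactly 0 or 1 is a common convention assumed here.

    # Generic sensitivity index: d' = z(hit rate) - z(false-alarm rate).
    from scipy.stats import norm

    def d_prime(hits, misses, false_alarms, correct_rejections):
        # Mild correction so hit/false-alarm rates of exactly 0 or 1 stay finite.
        hit_rate = (hits + 0.5) / (hits + misses + 1)
        fa_rate = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1)
        return norm.ppf(hit_rate) - norm.ppf(fa_rate)

    # Hypothetical counts for one observer monitoring for happy faces.
    print(d_prime(hits=40, misses=10, false_alarms=5, correct_rejections=45))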

4.
In Experiment 1 uncued recognition of single letters presented in left or right visual fields showed no hemispheric asymmetry, but cuing by alternatives produced a left-hemisphere advantage. Uncued recognition of words was better in the right visual field (left hemisphere), and this advantage was unchanged by cuing by alternatives or cuing by class. In Experiment 2 a mixed series of words, digits, and dots was presented. Uncued trials showed no asymmetry, but when a precue indicated which type of stimulus would appear next, a left-hemisphere advantage for words was evident. Cuing also produced a nonsignificant shift toward a left-hemisphere advantage for digits and a right-hemisphere advantage for dots. The asymmetrical effects of cuing can be explained by Kinsbourne's attentional model of lateralization, which suggests that cuing may selectively activate one hemisphere and so bias attention toward the contralateral visual field. Repetition effects within and between visual fields were analyzed, but no asymmetries were found.

5.
冉光明, 李睿, 张琪. 《心理科学进展》 (Advances in Psychological Science), 2020, 28(12): 1979-1988
In recent years, a large body of research has examined emotional face processing in individuals with high social anxiety and interventions for social anxiety, yielding rich results, but the following limitations remain: (1) in existing Chinese dynamic emotional face databases, the range of emotion categories, video dimensions, and video durations of the stimulus materials is limited; (2) the neural mechanisms by which individuals with high social anxiety recognize dynamic emotional faces have not been systematically explored; and (3) the effectiveness of attention bias training is disputed, with some researchers finding that attention bias training clearly alleviates social anxiety and others finding no such effect. To address these limitations, the Chinese dynamic emotional face database to be built in the current project will expand the emotion categories, video dimensions, and video durations of the stimulus materials; neuroscientific techniques will be used to systematically investigate how individuals with high social anxiety recognize dynamic emotional faces; and, finally, working memory training will be used to reduce the attentional bias of individuals with high social anxiety when recognizing dynamic angry faces. The research team proposes a model of the neural mechanisms by which individuals with high social anxiety recognize dynamic emotional faces; the model comprises two parts, mechanism and intervention. The project not only provides a new perspective for research on dynamic emotional face processing and social anxiety, but also moves beyond a single research method by combining behavioral, electrophysiological, and brain-imaging levels of analysis. The findings will advance social anxiety intervention work, thereby alleviating the mental health problems of socially anxious individuals, and are of substantial value for improving their well-being and quality of life.

6.
The negative compatibility effect (NCE) is the surprising result that low-visibility prime arrows facilitate responses to opposite-direction target arrows. Here we compare the priming obtained with simple arrows to the priming of emotions when categorizing human faces, which represents a more naturalistic set of stimuli and for which there are no preexisting response biases. When inverted faces with neutral expressions were presented alongside emotional prime and target faces, only strong positive priming occurred. However, when the neutral faces were made to resemble the target faces in geometry (upright orientation), time (flashing briefly), and space (appearing in the same location), positive priming gradually weakened and became negative priming. Implications for theories of the NCE are discussed.

7.
8.
Recent studies have linked happy faces to global, distributed attention and sad faces to local, focused attention. Emotions have been shown to affect global-local processing. Given that studies on emotion-cognition interactions have not explored the effect of perceptual processing at different spatial scales on processing stimuli with emotional content, the present study investigated the link between perceptual focus and emotional processing, specifically the effects of global-local processing on the recognition of distractor faces with emotional expressions. Participants performed a digit discrimination task with digits at either the global level or the local level presented against a distractor face (happy or sad) as background. The results showed that global processing, associated with a broad scope of attention, facilitates recognition of happy faces, and local processing, associated with a narrow scope of attention, facilitates recognition of sad faces. These novel results provide conclusive evidence for emotion-cognition interactions by demonstrating the effect of perceptual processing on emotional faces. Together with earlier complementary results on the effect of emotion on global-local processing, they support a reciprocal relationship between emotional processing and global-local processing. Distractor processing with emotional information also has implications for theories of selective attention.

9.
Face recognition is an important mnemonic ability for infants when navigating the social world. While age-related changes in face processing abilities are relatively well documented, less is known about short-term intra-individual fluctuations in this ability. Given that sleep deprivation in adults leads to impairments in information processing, we assessed the role of prior sleep on 6-month-old infants’ (N = 17) visual recognition of faces showing three emotional expressions (neutral, sad, angry). Visual recognition was inferred by assessing novelty preferences for unfamiliar relative to familiarized faces in a visual recognition memory paradigm. In a within-subject design, infants participated once after they had recently woken up from a nap (nap condition) and once after they had been awake for an extended period of time (awake condition). Infants failed to show visual recognition for the neutral faces in either condition. Infants showed recognition for the sad and angry faces when tested in the awake condition, but not in the nap condition. This suggests that timing of prior sleep shapes how effectively infants process emotionally relevant information in their environment.

10.
The ability to rapidly detect facial expressions of anger and threat over other salient expressions has adaptive value across the lifespan. Although studies have demonstrated this threat superiority effect in adults, surprisingly little research has examined the development of this process over the childhood period. In this study, we examined the efficiency of children's facial processing in visual search tasks. In Experiment 1, children (N=49) aged 8 to 11 years were faster and more accurate in detecting angry target faces embedded in neutral backgrounds than vice versa, and they were slower in detecting the absence of a discrepant face among angry than among neutral faces. This search pattern was unaffected by an increase in matrix size. Faster detection of angry than neutral deviants may reflect that angry faces stand out more among neutral faces than vice versa, or that detection of neutral faces is slowed by the presence of surrounding angry distracters. When keeping the background constant in Experiment 2, children (N=35) aged 8 to 11 years were faster and more accurate in detecting angry than sad or happy target faces among neutral background faces. Moreover, children with higher levels of anxiety were quicker to find both angry and sad faces whereas low anxious children showed an advantage for angry faces only. Results suggest a threat superiority effect in processing facial expressions in young children as in adults and that increased sensitivity for negative faces may be characteristic of children with anxiety problems.
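Search efficiency in tasks like this is usually summarized as the slope of reaction time over set size (ms per item); a flat slope indicates efficient, "pop-out"-like detection. The sketch below is a generic illustration of that computation with made-up numbers, not data from the study.

    # Generic search-efficiency estimate: slope (ms/item) of mean correct RT
    # regressed on display set size.
    import numpy as np

    set_sizes = np.array([4, 9, 16])             # hypothetical matrix sizes
    mean_rts = np.array([820.0, 845.0, 880.0])   # hypothetical mean RTs in ms

    slope_ms_per_item, intercept_ms = np.polyfit(set_sizes, mean_rts, 1)
    print(round(slope_ms_per_item, 1), "ms/item")  # shallow slopes = efficient search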

11.
The study aimed to determine if the memory bias for negative faces previously demonstrated in depression and dysphoria generalises from long- to short-term memory. A total of 29 dysphoric (DP) and 22 non-dysphoric (ND) participants were presented with a series of faces and asked to identify the emotion portrayed (happiness, sadness, anger, or neutral affect). Following a delay, four faces were presented (the original plus three distractors) and participants were asked to identify the target face. Half of the trials assessed memory for facial emotion, and the remaining trials examined memory for facial identity. At encoding, no group differences were apparent. At memory testing, relative to ND participants, DP participants exhibited impaired memory for all types of facial emotion and for facial identity when the faces featured happiness, anger, or neutral affect, but not sadness. DP participants exhibited impaired identity memory for happy faces relative to angry, sad, and neutral, whereas ND participants exhibited enhanced facial identity memory when faces were angry. In general, memory for faces was not related to performance at encoding. However, in DP participants only, memory for sad faces was related to sadness recognition at encoding. The results suggest that the negative memory bias for faces in dysphoria does not generalise from long- to short-term memory.

12.
The authors examined preadolescents' ability to recognize faces of unfamiliar peers according to their attractiveness. They hypothesized that highly attractive faces would be less accurately recognized than moderately attractive faces because the former are more typical. In Experiment 1, 106 participants (M age = 10 years) were asked to recognize faces of unknown peers who varied in gender and attractiveness (high- vs. medium-attractiveness). Results showed that attractiveness enhanced the accuracy of recognition for boys' faces and impaired recognition of girls' faces. The same interaction was found in Experiment 2, in which 92 participants (M age = 12 years) were tested for their recognition of another set of faces of unfamiliar peers. The authors conducted Experiment 3 to examine whether the reason for that interaction is that high- and medium-attractive girls' faces differ more in typicality than do boys' faces. The effect size of attractiveness on typicality was similar for boys' and girls' faces. The overall results are discussed with reference to the development of face encoding and biological gender differences with respect to the typicality of faces during preadolescence.

13.
In the present study, we investigated the effect of participants’ mood on true and false memories of emotional word lists in the Deese–Roediger–McDermott (DRM) paradigm. In Experiment 1, we constructed DRM word lists in which all the studied words and corresponding critical lures reflected a specified emotional valence. In Experiment 2, we used these lists to assess mood-congruent true and false memory. Participants were randomly assigned to one of three induced-mood conditions (positive, negative, or neutral) and were presented with word lists comprised of positive, negative, or neutral words. For both true and false memory, there was a mood-congruent effect in the negative mood condition; this effect was due to a decrease in true and false recognition of the positive and neutral words. These findings are consistent with both spreading-activation and fuzzy-trace theories of DRM performance and have practical implications for our understanding of the effect of mood on memory.

14.
In a rapid serial visual presentation (RSVP) paradigm, response times to a previously ignored item occurring after a target were measured. This procedure made it possible to plot the time course of inhibition following target selection. Results showed that post-target distractors produce negative priming for at least 270 ms after target presentation. It is suggested that stimuli presented immediately after a target may be inhibited in order to prevent temporal binding errors. The results are discussed in relation to two selective attention paradigms: negative priming and the attentional blink.

15.
J. B. Halberstadt and P. M. Niedenthal (2001) reported that explanations of target individuals' emotional states biased memory for their facial expressions in the direction of the explanation. The researchers argued for, but did not test, a 2-stage model of the explanation effect, such that verbal explanation increases attention to facial features at the expense of higher level featural configuration, making the faces vulnerable to conceptual reintegration in terms of available emotion categories. The current 4 experiments provided convergent evidence for the "featural shift" hypothesis by examining memory for both faces and facial features following verbal explanation. Featural attention was evidenced by verbalizers' better memory for features relative to control participants, and reintegration by a weaker explanation bias for features and configurally altered faces than for whole, unaltered faces. The results have implications for emotion, attribution, language, and the interaction of implicit and explicit processing.

16.
Infants' recognition memory for faces

17.
The hemispheric functional lateralization of components of mental rotation performance was investigated. Twenty right-handed males were presented with rotated alphanumerics and unfamiliar characters in the left or right visual field. Subjects decided if the laterally presented stimulus was identical to or a mirror image of a center standard stimulus. Reaction time and errors were measured. Previous mental rotation findings were replicated, and the visual field variable produced significant effects for both dependent measures. An overall right visual field advantage was observed in the latency data, suggesting a left hemisphere superiority for at least one component process of the task. A significant interaction in the error data showed that alphanumerics produced fewer errors in the right visual field than in the left visual field, consistent with a left hemisphere superiority for processing verbal symbolic material. No such hemispheric difference in accuracy was found for unfamiliar characters.

18.
The goal of this review is to critically examine contradictory findings in the study of visual search for emotionally expressive faces. Several key issues are addressed: Can emotional faces be processed preattentively and guide attention? What properties of these faces influence search efficiency? Is search moderated by the emotional state of the observer? The authors argue that the evidence is consistent with claims that (a) preattentive search processes are sensitive to and influenced by facial expressions of emotion, (b) attention guidance is influenced by a dynamic interplay of emotional and perceptual factors, and (c) visual search for emotional faces is influenced by the emotional state of the observer to some extent. The authors also argue that the way in which contextual factors interact to determine search performance needs to be explored further to draw sound conclusions about the precise influence of emotional expressions on search efficiency. Methodological considerations (e.g., set size, distractor background, task set) and ecological limitations of the visual search task are discussed. Finally, specific recommendations are made for future research directions.

19.
Aging and attentional biases for emotional faces
We examined age differences in attention to and memory for faces expressing sadness, anger, and happiness. Participants saw a pair of faces, one emotional and one neutral, and then a dot probe that appeared in the location of one of the faces. In two experiments, older adults responded faster to the dot if it was presented on the same side as a neutral face than if it was presented on the same side as a negative face. Younger adults did not exhibit this attentional bias. Interactions of age and valence were also found for memory for the faces, with older adults remembering positive better than negative faces. These findings reveal that in their initial attention, older adults avoid negative information. This attentional bias is consistent with older adults' generally better emotional well-being and their tendency to remember negative less well than positive information.

20.
Past literature has indicated that face inversion either attenuates emotion detection advantages in visual search, implying that detection of emotional expressions requires holistic face processing, or has no effect, implying that expression detection is feature based. Across six experiments that utilised different task designs, ranging from simple (single poser, single set size) to complex (multiple posers, multiple set sizes), and stimuli drawn from different databases, significant emotion detection advantages were found for both upright and inverted faces. Consistent with past research, the nature of the expression detection advantage, anger superiority (Experiments 1, 2 and 6) or happiness superiority (Experiments 3, 4 and 5), differed across stimulus sets. However, both patterns were evident for upright and inverted faces. These results indicate that face inversion does not interfere with visual search for emotional expressions, and suggest that expression detection in visual search may rely on feature-based mechanisms.
