Similar articles
20 similar articles found (search time: 0 ms)
1.
In two experiments, we examined the effects of Stroop interference on the categorical perception (CP; better cross-category than within-category discrimination) of color. Using a successive two-alternative forced choice recognition paradigm (deciding which of two stimuli was identical to a previously presented target), which combined to-be-remembered colors with congruent and incongruent Stroop words, we found that congruent color words facilitated CP, whereas incongruent color words reduced CP. However, this was the case only when Stroop interference was presented together with the target color, but not when Stroop stimuli were introduced at the test stage. This suggests that target name generation, but not test name generation, affects CP. Target name generation may be important for CP because it acts as a category prime, which, in turn, facilitates cross-category discrimination.
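The CP contrast described in this abstract, better discrimination across a category boundary than within a category for equal physical steps, can be sketched as a simple accuracy difference. This is a hypothetical illustration with invented accuracy values, not the authors' analysis code:

```python
# Categorical perception (CP) effect as a discrimination-accuracy contrast:
# cross-category pairs should be discriminated better than within-category
# pairs separated by the same physical distance.

def cp_effect(within_acc, cross_acc):
    """Mean cross-category minus mean within-category accuracy."""
    mean = lambda xs: sum(xs) / len(xs)
    return mean(cross_acc) - mean(within_acc)

# Hypothetical 2AFC accuracies for equally spaced color pairs.
within = [0.71, 0.68, 0.73]   # both colors fall in the same category
cross  = [0.88, 0.91, 0.86]   # pair straddles the category boundary

print(round(cp_effect(within, cross), 3))
```

A positive value indicates a CP effect; the congruent/incongruent Stroop manipulations in the study would be expected to enlarge or shrink this difference.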

2.
Categorical perception of facial expressions is studied in high-functioning adolescents with autism, using three continua of facial expressions obtained by morphing. In contrast to the results of normal adults, performance on the identification task in autistic subjects did not predict performance on the discrimination task, an indication that autistic individuals do not perceive facial expressions categorically. Performance of autistic subjects with low social intelligence was more impaired than that of subjects with higher social IQ scores on the expression recognition of unmanipulated photographs. It is suggested that autistic subjects with higher social intelligence may use compensatory strategies that they have acquired in social training programs. This may camouflage the deficits of this subgroup in the perception of facial expressions.

3.
Previous research has found that the perception of eye-gaze direction is influenced by facial expression: angry faces are more likely than fearful faces to be judged as looking at the observer. Although researchers have proposed different explanations, it remains unclear whether this differential effect of angry and fearful expressions on gaze-direction perception arises from the configural information of the face or from its physical featural information. The present study used a gaze-direction discrimination task with the cone of direct gaze (CoDG) as the dependent variable, presenting upright, inverted, and blurred face images in order to separate configural from featural information. The results showed that when all facial information was preserved (Experiment 1), the CoDG was wider for angry than for fearful faces; when configural processing was disrupted and only featural processing remained (Experiment 2), the difference between angry and fearful expressions in the CoDG disappeared; and when featural processing was weakened while configural processing was preserved (Experiment 3), the difference reappeared. These results indicate that the differential influence of threatening facial expressions on gaze perception stems mainly from differences in the processing of emotion-relevant configural information rather than from low-level physical differences, supporting the theoretical basis of the shared-signal hypothesis and the emotional-appraisal hypothesis for the integrated processing of threatening expressions and gaze direction.

4.
N. L. Etcoff & J. J. Magee, Cognition, 1992, 44(3), 227-240
People universally recognize facial expressions of happiness, sadness, fear, anger, disgust, and perhaps, surprise, suggesting a perceptual mechanism tuned to the facial configuration displaying each emotion. Sets of drawings were generated by computer, each consisting of a series of faces differing by constant physical amounts, running from one emotional expression to another (or from one emotional expression to a neutral face). Subjects discriminated pairs of faces, then, in a separate task, categorized the emotion displayed by each. Faces within a category were discriminated more poorly than faces in different categories that differed by an equal physical amount. Thus emotional expressions, like colors and speech sounds, are perceived categorically, not as a direct reflection of their continuous physical properties.

5.
The present study explored the influence of facial emotional expressions on preschoolers' identity recognition, using a two-alternative forced-choice matching task. A decrement was observed in children's performance with emotional faces compared with neutral faces, both when a happy emotional expression remained unchanged between the target face and the test faces and when the expression changed from happy to neutral or from neutral to happy between the target and the test faces (Experiment 1). Negative emotional expressions (i.e. fear and anger) also interfered with children's identity recognition (Experiment 2). Obtained evidence suggests that in preschool-age children, facial emotional expressions are processed in interaction with, rather than independently from, the encoding of facial identity information. The results are discussed in relationship with relevant research conducted with adults and children.

6.
In the face literature, it is debated whether the identification of facial expressions requires holistic (i.e., whole face) or analytic (i.e., parts-based) information. In this study, happy and angry composite expressions were created in which the top and bottom face halves formed either an incongruent (e.g., angry top + happy bottom) or congruent composite expression (e.g., happy top + happy bottom). Participants reported the expression in the target top or bottom half of the face. In Experiment 1, the target half in the incongruent condition was identified less accurately and more slowly relative to the baseline isolated expression or neutral face conditions. In contrast, no differences were found between congruent and the baseline conditions. In Experiment 2, the effects of exposure duration were tested by presenting faces for 20, 60, 100 and 120 ms. Interference effects for the incongruent faces appeared at the earliest 20 ms interval and persisted for the 60, 100 and 120 ms intervals. In contrast, no differences were found between the congruent and baseline face conditions at any exposure interval. In Experiment 3, it was found that spatial alignment impaired the recognition of incongruent expressions, but had no effect on congruent expressions. These results are discussed in terms of holistic and analytic processing of facial expressions.

7.
In the face literature, it is debated whether the identification of facial expressions requires holistic (i.e., whole face) or analytic (i.e., parts-based) information. In this study, happy and angry composite expressions were created in which the top and bottom face halves formed either an incongruent (e.g., angry top + happy bottom) or congruent composite expression (e.g., happy top + happy bottom). Participants reported the expression in the target top or bottom half of the face. In Experiment 1, the target half in the incongruent condition was identified less accurately and more slowly relative to the baseline isolated expression or neutral face conditions. In contrast, no differences were found between congruent and the baseline conditions. In Experiment 2, the effects of exposure duration were tested by presenting faces for 20, 60, 100 and 120 ms. Interference effects for the incongruent faces appeared at the earliest 20 ms interval and persisted for the 60, 100 and 120 ms intervals. In contrast, no differences were found between the congruent and baseline face conditions at any exposure interval. In Experiment 3, it was found that spatial alignment impaired the recognition of incongruent expressions, but had no effect on congruent expressions. These results are discussed in terms of holistic and analytic processing of facial expressions.

8.
Two experiments investigated categorical perception (CP) effects for affective facial expressions and linguistic facial expressions from American Sign Language (ASL) for Deaf native signers and hearing non-signers. Facial expressions were presented in isolation (Experiment 1) or in an ASL verb context (Experiment 2). Participants performed ABX discrimination and identification tasks on morphed affective and linguistic facial expression continua. The continua were created by morphing end-point photo exemplars into 11 images, changing linearly from one expression to another in equal steps. For both affective and linguistic expressions, hearing non-signers exhibited better discrimination across category boundaries than within categories for both experiments, thus replicating previous results with affective expressions and demonstrating CP effects for non-canonical facial expressions. Deaf signers, however, showed significant CP effects only for linguistic facial expressions. Subsequent analyses indicated that order of presentation influenced signers’ response time performance for affective facial expressions: viewing linguistic facial expressions first slowed response time for affective facial expressions. We conclude that CP effects for affective facial expressions can be influenced by language experience.
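The 11-step continuum described here, morphing linearly between two end-point exemplars in equal steps, can be sketched numerically. This toy illustration interpolates pixel-intensity vectors; real face morphing also warps facial geometry, which is omitted here:

```python
# Generate an 11-image continuum by linear interpolation between two
# end-point exemplars, in equal steps (weights 0.0, 0.1, ..., 1.0).

def morph_continuum(a, b, n_steps=11):
    """Return n_steps vectors blending exemplar a into exemplar b."""
    frames = []
    for i in range(n_steps):
        w = i / (n_steps - 1)   # 0.0 at exemplar a, 1.0 at exemplar b
        frames.append([(1 - w) * x + w * y for x, y in zip(a, b)])
    return frames

# Toy "images": 4-pixel intensity vectors for two expression exemplars.
neutral = [0.0, 0.0, 0.0, 0.0]
happy   = [1.0, 0.5, 0.2, 0.8]

continuum = morph_continuum(neutral, happy)
print(len(continuum))   # 11 frames
print(continuum[5])     # midpoint blend
```

Identification and ABX discrimination judgments are then collected over such equally spaced steps to locate the category boundary.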

9.
The view that certain facial expressions of emotion are universally agreed on has been challenged by studies showing that the forced-choice paradigm may have artificially forced agreement. This article addressed this methodological criticism by offering participants the opportunity to select a "none of these terms are correct" option from a list of emotion labels in a modified forced-choice paradigm. The results show that agreement on the emotion label for particular facial expressions is still greater than chance, that artifactual agreement on incorrect emotion labels is obviated, that participants select the "none" option when asked to judge a novel expression, and that adding 4 more emotion labels does not change the pattern of agreement reported in universality studies. Although the original forced-choice format may have been prone to artifactual agreement, the modified forced-choice format appears to remedy that problem.

10.
11.
We previously hypothesized that pubertal development shapes the emergence of new components of face processing (Scherf et al., 2012; Garcia & Scherf, 2015). Here, we evaluate this hypothesis by investigating emerging perceptual sensitivity to complex versus basic facial expressions across pubertal development. We tested pre-pubescent children (6-8 years), age- and sex-matched adolescents in early and later stages of pubertal development (11-14 years), and sexually mature adults (18-24 years). Using a perceptual staircase procedure, participants made visual discriminations of both socially complex expressions (sexual interest, contempt) that are arguably relevant to emerging peer-oriented relationships of adolescence, and basic (happy, anger) expressions that are important even in early infancy. Only sensitivity to detect complex expressions improved as a function of pubertal development. The ability to perceive these expressions is adult-like by late puberty when adolescents become sexually mature. This pattern of results provides the first evidence that pubertal development specifically influences emerging affective components of face perception in adolescence.
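The perceptual staircase procedure mentioned above adapts stimulus intensity trial by trial to the observer's responses. A minimal one-up/two-down sketch, using a hypothetical deterministic observer and arbitrary intensity units (real staircases also track reversals to estimate a threshold), not the study's actual procedure:

```python
# One-up/two-down adaptive staircase: after two consecutive correct
# responses the level steps down (harder); after any error it steps up
# (easier). This transformed up-down rule converges near the
# 70.7%-correct point of the psychometric function.

def run_staircase(respond, start=10, step=1, n_trials=40, floor=0):
    """Run n_trials and return the level presented on each trial."""
    level, streak, history = start, 0, []
    for _ in range(n_trials):
        history.append(level)
        if respond(level):          # correct response
            streak += 1
            if streak == 2:         # two-down: make the task harder
                level = max(floor, level - step)
                streak = 0
        else:                       # one-up: make the task easier
            level += step
            streak = 0
    return history

# Hypothetical observer: always correct above level 5, wrong at or below.
levels = run_staircase(lambda lvl: lvl > 5)
print(levels[-1])   # the staircase ends oscillating around the threshold
```

With this observer the track descends from the easy starting level and then oscillates between 5 and 6, bracketing the simulated threshold.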

12.
Adults perceive emotional facial expressions categorically. In this study, we explored categorical perception in 3.5-year-olds by creating a morphed continuum of emotional faces and tested preschoolers’ discrimination and identification of them. In the discrimination task, participants indicated whether two examples from the continuum “felt the same” or “felt different.” In the identification task, images were presented individually and participants were asked to label the emotion displayed on the face (e.g., “Does she look happy or sad?”). Results suggest that 3.5-year-olds have the same category boundary as adults. They were more likely to report that the image pairs felt “different” at the image pair that crossed the category boundary. These results suggest that 3.5-year-olds perceive happy and sad emotional facial expressions categorically as adults do. Categorizing emotional expressions is advantageous for children if it allows them to use social information faster and more efficiently.

13.
Dynamic properties influence the perception of facial expressions
Two experiments were conducted to investigate the role played by dynamic information in identifying facial expressions of emotion. Dynamic expression sequences were created by generating and displaying morph sequences which changed the face from neutral to a peak expression in different numbers of intervening intermediate stages, to create fast (6 frames), medium (26 frames), and slow (101 frames) sequences. In experiment 1, participants were asked to describe what the person shown in each sequence was feeling. Sadness was more accurately identified when slow sequences were shown. Happiness, and to some extent surprise, was identified more accurately from faster sequences, while anger was most accurately detected from the sequences of medium pace. In experiment 2 we used an intensity-rating task and static images as well as dynamic ones to examine whether effects were due to total time of the displays or to the speed of sequence. Accuracies of expression judgments were derived from the rated intensities and the results were similar to those of experiment 1 for angry and sad expressions (surprised and happy were close to ceiling). Moreover, the effect of display time was found only for dynamic expressions and not for static ones, suggesting that it was speed, not time, which was responsible for these effects. These results suggest that representations of basic expressions of emotion encode information about dynamic as well as static properties.

14.
Three experiments were conducted to examine how verbal context and sensory stimulation interact to influence odor hedonic perception. Eight common odors were presented in their natural and synthetic forms, and verbal labels designating name and source (natural, synthetic) information were either explicitly given, self-generated, falsely provided, or not provided. Results revealed that verbal information about source influenced hedonic ratings whether or not the odorant itself was also present. When odorants were presented without verbal labels, olfactory evaluations were based in sensation. Name and source information contributed different levels of meaning and influence to perceptual evaluations. The findings are discussed with reference to an experiential-collocation model for odor-label interactions and a dual-coding hypothesis for olfactory perception.

15.
There remains conflict in the literature about the lateralisation of affective face perception. Some studies have reported a right hemisphere advantage irrespective of valence, whereas others have found a left hemisphere advantage for positive, and a right hemisphere advantage for negative, emotion. Differences in injury aetiology and chronicity, proportion of male participants, participant age, and the number of emotions used within a perception task may contribute to these contradictory findings. The present study therefore controlled and/or directly examined the influence of these possible moderators. Right brain-damaged (RBD; n = 17), left brain-damaged (LBD; n = 17), and healthy control (HC; n = 34) participants completed two face perception tasks (identification and discrimination). No group differences in facial expression perception according to valence were found. Across emotions, the RBD group was less accurate than the HC group; however, RBD and LBD group performance did not differ. The lack of difference between RBD and LBD groups indicates that both hemispheres are involved in positive and negative expression perception. The inclusion of older adults and the well-defined chronicity range of the brain-damaged participants may have moderated these findings. Participant sex and general face perception ability did not influence performance. Furthermore, while the RBD group was less accurate than the LBD group when the identification task tested two emotions, performance of the two groups was indistinguishable when the number of emotions increased (four or six). This suggests that task demand moderates a study’s ability to find hemispheric differences in the perception of facial emotion.

16.
Studies of speech perception first revealed a surprising discontinuity in the way in which stimulus values on a physical continuum are perceived. Data which demonstrate the effect in nonspeech modes have challenged the contention that categorical perception is a hallmark of the speech mode, but the psychophysical models that have been proposed have not resolved the issues raised by empirical findings. This study provides data from judgments of four sensory continua, two visual and two tactual-kinesthetic, which show that the adaptation level for a set of stimuli serves as a category boundary whether stimuli on the continuum differ by linear or logarithmic increments. For all sensory continua studied, discrimination of stimuli belonging to different perceptual categories was more accurate than discrimination of stimuli belonging to the same perceptual category. Moreover, shifts in the adaptation level produced shifts in the location of the category boundary. The concept of Adaptation-level Based Categorization (ABC) provides a unified account of judgmental processes in categorical perception without recourse to post hoc constructs such as implicit anchors or external referents.
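The adaptation-level rule described in this abstract can be sketched schematically: treat the adaptation level of the stimulus set as the category boundary, computed as an arithmetic mean for linearly spaced continua and a geometric mean for logarithmically spaced ones. This is an illustrative reading of the ABC idea, not the paper's procedure:

```python
import math

# Adaptation-level Based Categorization (ABC), schematically: the
# adaptation level (AL) of the stimulus set serves as the category
# boundary. For linearly spaced stimuli the AL is near the arithmetic
# mean; for logarithmically spaced stimuli, near the geometric mean.

def adaptation_level(stimuli, spacing="linear"):
    if spacing == "log":
        # geometric mean = exp(mean of the logs)
        return math.exp(sum(math.log(s) for s in stimuli) / len(stimuli))
    return sum(stimuli) / len(stimuli)

def categorize(stimulus, boundary):
    return "above" if stimulus > boundary else "below"

linear_set = [2, 4, 6, 8, 10]   # equal linear increments
log_set = [1, 2, 4, 8, 16]      # equal logarithmic increments

al_lin = adaptation_level(linear_set)        # arithmetic mean: 6.0
al_log = adaptation_level(log_set, "log")    # geometric mean: ~4.0
print(categorize(5, al_lin), categorize(5, al_log))
```

The same stimulus can fall on different sides of the boundary depending on the set it is judged within, which is how shifts in the adaptation level shift the category boundary.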

17.
We examined interference effects of emotionally associated background colours during fast valence categorisations of negative, neutral and positive expressions. According to implicitly learned colour–emotion associations, facial expressions were presented with colours that either matched the valence of these expressions or not. Experiment 1 included infrequent non-matching trials and Experiment 2 a balanced ratio of matching and non-matching trials. Besides general modulatory effects of contextual features on the processing of facial expressions, we found differential effects depending on the valence of target facial expressions. Whereas performance accuracy was mainly affected for neutral expressions, performance speed was specifically modulated by emotional expressions indicating some susceptibility of emotional expressions to contextual features. Experiment 3 used two further colour–emotion combinations, but revealed only marginal interference effects most likely due to missing colour–emotion associations. The results are discussed with respect to inherent processing demands of emotional and neutral expressions and their susceptibility to contextual interference.

18.
We examined interference effects of emotionally associated background colours during fast valence categorisations of negative, neutral and positive expressions. According to implicitly learned colour-emotion associations, facial expressions were presented with colours that either matched the valence of these expressions or not. Experiment 1 included infrequent non-matching trials and Experiment 2 a balanced ratio of matching and non-matching trials. Besides general modulatory effects of contextual features on the processing of facial expressions, we found differential effects depending on the valence of target facial expressions. Whereas performance accuracy was mainly affected for neutral expressions, performance speed was specifically modulated by emotional expressions indicating some susceptibility of emotional expressions to contextual features. Experiment 3 used two further colour-emotion combinations, but revealed only marginal interference effects most likely due to missing colour-emotion associations. The results are discussed with respect to inherent processing demands of emotional and neutral expressions and their susceptibility to contextual interference.

19.
Recent studies have shown that the perception of facial expressions of emotion fits the criteria of categorical perception (CP). The present paper tests whether a pattern of categories emerges when facial expressions are examined within the framework of multidimensional scaling. Blends of five “pure” expressions (Angry, Sad, Surprised, Happy, Neutral) were created using computerised “morphing”, providing the stimuli for four experiments. Instead of attempting to identify these stimuli, subjects described the proximities between them, using two quite different forms of data: similarity comparisons, and sorting partitions. Multidimensional scaling techniques were applied to integrate the resulting ordinal-level data into models which represent the interstimulus similarities at ratio level. All four experiments yielded strong evidence that the expressions were perceived in distinct categories. Adjacent pairs in the models were not spaced at equal intervals, but were clustered together as if drawn towards a “perceptual magnet” within each category. We argue that spatial representations are compatible with CP effects, and indeed are a useful tool for investigating them.
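Sorting partitions like those collected in this study are commonly converted into a similarity matrix by counting how often two stimuli land in the same pile across participants; that matrix then feeds the multidimensional scaling step. A minimal sketch of the co-occurrence step, with invented data (the MDS fit itself is omitted):

```python
# Build a similarity matrix from sorting partitions: similarity(i, j) =
# number of participants who placed stimuli i and j in the same pile.

def cooccurrence(partitions, n_items):
    sim = [[0] * n_items for _ in range(n_items)]
    for piles in partitions:        # one participant's sort
        for pile in piles:
            for i in pile:
                for j in pile:
                    sim[i][j] += 1
    return sim

# Three hypothetical participants sorting 4 morphed faces (items 0..3).
partitions = [
    [[0, 1], [2, 3]],
    [[0, 1], [2], [3]],
    [[0], [1, 2, 3]],
]

sim = cooccurrence(partitions, 4)
print(sim[0][1])   # items 0 and 1 co-sorted by 2 of 3 participants
print(sim[2][3])   # items 2 and 3 co-sorted by 2 of 3 participants
```

High within-category and low cross-category counts in such a matrix are what produce the tight clusters, the "perceptual magnet" pattern, in the resulting MDS solution.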

20.
A rapid response to a threatening face in a crowd is important to successfully interact in social environments. Visual search tasks have been employed to determine whether there is a processing advantage for detecting an angry face in a crowd, compared to a happy face. The empirical findings supporting the “anger superiority effect” (ASE), however, have been criticized on the basis of possible low-level visual confounds and because of the limited ecological validity of the stimuli. Moreover, a “happiness superiority effect” is usually found with more realistic stimuli. In the present study, we tested the ASE by using dynamic (and static) images of realistic human faces, with validated emotional expressions having similar intensities, after controlling the bottom-up visual saliency and the amount of image motion. In five experiments, we found strong evidence for an ASE when using dynamic displays of facial expressions, but not when the emotions were expressed by static face images.


Copyright © 北京勤云科技发展有限公司 (Beijing Qinyun Technology Development Co., Ltd.)  京ICP备09084417号