Similar Documents
20 similar documents found (search time: 15 ms)
1.
N. L. Etcoff & J. J. Magee. Cognition, 1992, 44(3), 227–240
People universally recognize facial expressions of happiness, sadness, fear, anger, disgust, and perhaps, surprise, suggesting a perceptual mechanism tuned to the facial configuration displaying each emotion. Sets of drawings were generated by computer, each consisting of a series of faces differing by constant physical amounts, running from one emotional expression to another (or from one emotional expression to a neutral face). Subjects discriminated pairs of faces, then, in a separate task, categorized the emotion displayed by each. Faces within a category were discriminated more poorly than faces in different categories that differed by an equal physical amount. Thus emotional expressions, like colors and speech sounds, are perceived categorically, not as a direct reflection of their continuous physical properties.

2.
A series of five experiments examined the categorical perception previously found for color and facial expressions. Using a two-alternative forced-choice recognition memory paradigm, it was found that verbal interference selectively removed the defining feature of categorical perception. Under verbal interference, there was no longer the greater accuracy normally observed for cross-category judgments relative to within-category judgments. The advantage for cross-category comparisons in memory appeared to derive from verbal coding both at encoding and at storage. It thus appears that while both visual and verbal codes may be employed in the recognition memory for colors and facial expressions, subjects only made use of verbal coding when demonstrating categorical perception.

3.
Enhanced pitch perception and memory have been cited as evidence of a local processing bias in autism spectrum disorders (ASD). This bias is argued to account for enhanced perceptual functioning (Mottron & Burack, 2001; Mottron, Dawson, Soulières, Hubert, & Burack, 2006) and central coherence theories of ASD (Frith, 1989; Happé & Frith, 2006). A local processing bias confers a different cognitive style on individuals with ASD (Happé, 1999), which accounts in part for their good visuospatial and visuoconstructive skills. Here, we present analogues in the auditory domain, audiotemporal or audioconstructive processing, which we assess using a novel experimental task: a musical puzzle. This task evaluates the ability of individuals with ASD to process temporal sequences of musical events as well as various elements of musical structure, and thus indexes their ability to employ a global processing style. Musical structures created and replicated by children and adolescents with ASD (10–19 years old) and typically developing children and adolescents (7–17 years old) were found to be similar in global coherence. Presenting a musical template for reference increased accuracy equally for both groups, with performance associated with performance IQ and short-term auditory memory. The overall pattern of performance was also similar for both groups: some puzzles were easier than others for both. Task performance was further found to be correlated with the ability to perceive musical emotions, more so for typically developing participants. Findings are discussed in light of the empathizing-systemizing theory of ASD (Baron-Cohen, 2009) and the importance of describing the strengths of individuals with ASD (Happé, 1999; Heaton, 2009).

4.
C. Jiang, J. P. Hamm, V. K. Lim, I. J. Kirk & Y. Yang. Memory & Cognition, 2012, 40(7), 1109–1121
The degree to which cognitive resources are shared in the processing of musical pitch and lexical tones remains uncertain. Testing Mandarin amusics on their categorical perception of Mandarin lexical tones may provide insight into this issue. In the present study, a group of 15 amusic Mandarin speakers identified and discriminated Mandarin tones presented as continua in separate blocks. The tonal continua employed were from a high-level tone to a mid-rising tone and from a high-level tone to a high-falling tone. The two tonal continua were made in the contexts of natural speech and of nonlinguistic analogues. In contrast to the controls, the participants with amusia showed no improvement for discrimination pairs that crossed the classification boundary for either speech or nonlinguistic analogues, indicating a lack of categorical perception. The lack of categorical perception of Mandarin tones in the amusic group shows that the pitch deficits in amusics may be domain-general, and this suggests that the processing of musical pitch and lexical tones may share certain cognitive resources and/or processes (Patel 2003, 2008, 2012).

5.
Adults perceive emotional facial expressions categorically. In this study, we explored categorical perception in 3.5-year-olds by creating a morphed continuum of emotional faces and tested preschoolers’ discrimination and identification of them. In the discrimination task, participants indicated whether two examples from the continuum “felt the same” or “felt different.” In the identification task, images were presented individually and participants were asked to label the emotion displayed on the face (e.g., “Does she look happy or sad?”). Results suggest that 3.5-year-olds have the same category boundary as adults. They were more likely to report that the image pairs felt “different” at the image pair that crossed the category boundary. These results suggest that 3.5-year-olds perceive happy and sad emotional facial expressions categorically as adults do. Categorizing emotional expressions is advantageous for children if it allows them to use social information faster and more efficiently.

6.
E. Kotsoni, M. de Haan & M. H. Johnson. Perception, 2001, 30(9), 1115–1125
Recent research indicates that adults show categorical perception of facial expressions of emotion. It is not known whether this is a basic characteristic of perception that is present from the earliest weeks of life, or whether it is one that emerges more gradually with experience in perceiving and interpreting expressions. We report two experiments designed to investigate whether young infants, like adults, show categorical perception of facial expressions. 7-month-old infants were shown photographic quality continua of interpolated (morphed) facial expressions derived from two prototypes of fear and happiness. In the first experiment, we used a visual-preference technique to identify the infants' category boundary between happiness and fear. In the second experiment, we used a combined familiarisation-visual-preference technique to compare infants' discrimination of pairs of expressions that were equally physically different but that did or did not cross the emotion-category boundary. The results suggest that 7-month-old infants (i) show evidence of categorical perception of facial expressions of emotion, and (ii) show persistent interest in looking at fearful expressions.

7.
We previously hypothesized that pubertal development shapes the emergence of new components of face processing (Scherf et al., 2012; Garcia & Scherf, 2015). Here, we evaluate this hypothesis by investigating emerging perceptual sensitivity to complex versus basic facial expressions across pubertal development. We tested pre-pubescent children (6–8 years), age- and sex-matched adolescents in early and later stages of pubertal development (11–14 years), and sexually mature adults (18–24 years). Using a perceptual staircase procedure, participants made visual discriminations of both socially complex expressions (sexual interest, contempt) that are arguably relevant to emerging peer-oriented relationships of adolescence, and basic (happy, anger) expressions that are important even in early infancy. Only sensitivity to detect complex expressions improved as a function of pubertal development. The ability to perceive these expressions is adult-like by late puberty when adolescents become sexually mature. This pattern of results provides the first evidence that pubertal development specifically influences emerging affective components of face perception in adolescence.

8.
Dynamic properties influence the perception of facial expressions
Two experiments were conducted to investigate the role played by dynamic information in identifying facial expressions of emotion. Dynamic expression sequences were created by generating and displaying morph sequences which changed the face from neutral to a peak expression in different numbers of intervening intermediate stages, to create fast (6 frames), medium (26 frames), and slow (101 frames) sequences. In experiment 1, participants were asked to describe what the person shown in each sequence was feeling. Sadness was more accurately identified when slow sequences were shown. Happiness, and to some extent surprise, was better from faster sequences, while anger was most accurately detected from the sequences of medium pace. In experiment 2 we used an intensity-rating task and static images as well as dynamic ones to examine whether effects were due to total time of the displays or to the speed of sequence. Accuracies of expression judgments were derived from the rated intensities and the results were similar to those of experiment 1 for angry and sad expressions (surprised and happy were close to ceiling). Moreover, the effect of display time was found only for dynamic expressions and not for static ones, suggesting that it was speed, not time, which was responsible for these effects. These results suggest that representations of basic expressions of emotion encode information about dynamic as well as static properties.

9.
Two experiments investigated categorical perception (CP) effects for affective facial expressions and linguistic facial expressions from American Sign Language (ASL) for Deaf native signers and hearing non-signers. Facial expressions were presented in isolation (Experiment 1) or in an ASL verb context (Experiment 2). Participants performed ABX discrimination and identification tasks on morphed affective and linguistic facial expression continua. The continua were created by morphing end-point photo exemplars into 11 images, changing linearly from one expression to another in equal steps. For both affective and linguistic expressions, hearing non-signers exhibited better discrimination across category boundaries than within categories for both experiments, thus replicating previous results with affective expressions and demonstrating CP effects for non-canonical facial expressions. Deaf signers, however, showed significant CP effects only for linguistic facial expressions. Subsequent analyses indicated that order of presentation influenced signers’ response time performance for affective facial expressions: viewing linguistic facial expressions first slowed response time for affective facial expressions. We conclude that CP effects for affective facial expressions can be influenced by language experience.

10.
The current study compared high-functioning children with autism (HFA) and a peer control group on an immediate arousal task measuring response inhibition. In one condition go stimuli were presented whereas in another condition a tone preceded the go stimulus. The tone caused an immediate arousal effect, which resulted in a reaction time decrease and an error rate increase. It was expected that children with HFA would produce a higher error rate in comparison with normal peers, since they might be less able to suppress immediate arousal. However, the HFA group outperformed the control group, indicating neither arousal regulation deficit nor response inhibition deficit.

11.
Typical adults mimic facial expressions within 1000 ms, but adults with autism spectrum disorder (ASD) do not. These rapid facial reactions (RFRs) are associated with the development of social-emotional abilities. Such interpersonal matching may be caused by motor mirroring or emotional responses. Using facial electromyography (EMG), this study evaluated mechanisms underlying RFRs during childhood and examined possible impairment in children with ASD. Experiment 1 found RFRs to happy and angry faces (not fear faces) in 15 typically developing children from 7 to 12 years of age. RFRs of fear (not anger) in response to angry faces indicated an emotional mechanism. In 11 children (8-13 years of age) with ASD, Experiment 2 found undifferentiated RFRs to fear expressions and no consistent RFRs to happy or angry faces. However, as children with ASD aged, matching RFRs to happy faces increased significantly, suggesting the development of processes underlying matching RFRs during this period in ASD.

12.
Children with autism spectrum disorder process many perceptual and social events differently from typically developing children, suggesting that they may also form and recognize categories differently. We used a dot pattern categorization task and prototype comparison modeling to compare categorical processing in children with high-functioning autism spectrum disorder and matched typical controls. We were interested in whether there were differences in how children with autism use average similarity information about a category to make decisions. During testing, the group with autism spectrum disorder endorsed prototypes less and was seemingly less sensitive to differences between to-be-categorized items and the prototype. The findings suggest that individuals with high-functioning autism spectrum disorder are less likely to use overall average similarity when forming categories or making categorical decisions. Such differences in category formation and use may negatively impact processing of socially relevant information, such as facial expressions. A supplemental appendix for this article may be downloaded from http://pbr.psychonomic-journals.org/content/supplemental.

13.
Of the neurobiological models of children's and adolescents' depression, the neuropsychological one is considered here. Experimental and clinical evidence has allowed us to identify a lateralization of emotional functions from the very beginning of development, and a right hemisphere dominance for emotions is by now well-known. Many studies have also correlated depression with a right hemisphere dysfunction in patients of different ages. The aim of our study was to analyze recognition of different facial emotions by a group of depressed children and adolescents. Patients affected by Major Depressive Disorder recognized less fear in six fundamental emotions than a group of healthy controls, and Dysthymic subjects recognized less anger. The group of patients' failure to recognize negative-aroused facial expressions could indicate a subtle right hemisphere dysfunction in depressed children and adolescents.

14.
In the face literature, it is debated whether the identification of facial expressions requires holistic (i.e., whole face) or analytic (i.e., parts-based) information. In this study, happy and angry composite expressions were created in which the top and bottom face halves formed either an incongruent (e.g., angry top + happy bottom) or congruent composite expression (e.g., happy top + happy bottom). Participants reported the expression in the target top or bottom half of the face. In Experiment 1, the target half in the incongruent condition was identified less accurately and more slowly relative to the baseline isolated expression or neutral face conditions. In contrast, no differences were found between congruent and the baseline conditions. In Experiment 2, the effects of exposure duration were tested by presenting faces for 20, 60, 100 and 120 ms. Interference effects for the incongruent faces appeared at the earliest 20 ms interval and persisted for the 60, 100 and 120 ms intervals. In contrast, no differences were found between the congruent and baseline face conditions at any exposure interval. In Experiment 3, it was found that spatial alignment impaired the recognition of incongruent expressions, but had no effect on congruent expressions. These results are discussed in terms of holistic and analytic processing of facial expressions.

15.
16.

17.
The current study examined differences in emotion expression identification between adolescents characterised with behavioural inhibition (BI) in childhood with and without a lifetime history of anxiety disorder. Participants were originally assessed for BI during toddlerhood and for social reticence during childhood. During adolescence, participants returned to the laboratory and completed a facial emotion identification task and a clinical psychiatric interview. Results revealed that behaviorally inhibited adolescents with a lifetime history of anxiety disorder displayed a lower threshold for identifying fear relative to anger emotion expressions compared to non-anxious behaviorally inhibited adolescents and non-inhibited adolescents with or without anxiety. These findings were specific to behaviorally inhibited adolescents with a lifetime history of social anxiety disorder. Thus, adolescents with a history of both BI and anxiety, specifically social anxiety, are more likely to differ from other adolescents in their identification of fearful facial expressions. This offers further evidence that perturbations in the processing of emotional stimuli may underlie the aetiology of anxiety disorders.

18.
The view that certain facial expressions of emotion are universally agreed on has been challenged by studies showing that the forced-choice paradigm may have artificially forced agreement. This article addressed this methodological criticism by offering participants the opportunity to select a "none of these terms are correct" option from a list of emotion labels in a modified forced-choice paradigm. The results show that agreement on the emotion label for particular facial expressions is still greater than chance, that artifactual agreement on incorrect emotion labels is obviated, that participants select the "none" option when asked to judge a novel expression, and that adding 4 more emotion labels does not change the pattern of agreement reported in universality studies. Although the original forced-choice format may have been prone to artifactual agreement, the modified forced-choice format appears to remedy that problem.

19.
It is generally thought that individuals with Asperger's syndrome and high-functioning autism (AS/HFA) have deficits in theory of mind. These deficits have been previously linked to problems with social cognition. However, we reasoned that AS/HFA individuals' theory of mind deficits also might lead to problems with emotion regulation. To assess emotional functioning in AS/HFA, 27 AS/HFA adults (16 women) and 27 age-, gender-, and education-matched typically developing (TD) participants completed a battery of measures of emotion experience, labeling, and regulation. With respect to emotion experience, individuals with AS/HFA reported higher levels of negative emotions, but similar levels of positive emotions, compared with TD individuals. With respect to emotion labeling, individuals with AS/HFA had greater difficulties identifying and describing their emotions, with approximately two-thirds exceeding the cutoff for alexithymia. With respect to emotion regulation, individuals with AS/HFA used reappraisal less frequently than TD individuals and reported lower levels of reappraisal self-efficacy. Although AS/HFA individuals used suppression more frequently than TD individuals, no difference in suppression self-efficacy was found. It is important to note that these differences in emotion regulation were evident even when controlling for emotion experience and labeling. Implications of these deficits are discussed, and future research directions are proposed.

20.
Does our perception of others' emotional signals depend on the language we speak or is our perception the same regardless of language and culture? It is well established that human emotional facial expressions are perceived categorically by viewers, but whether this is driven by perceptual or linguistic mechanisms is debated. We report an investigation into the perception of emotional facial expressions, comparing German speakers to native speakers of Yucatec Maya, a language with no lexical labels that distinguish disgust from anger. In a free naming task, speakers of German, but not Yucatec Maya, made lexical distinctions between disgust and anger. However, in a delayed match-to-sample task, both groups perceived emotional facial expressions of these and other emotions categorically. The magnitude of this effect was equivalent across the language groups, as well as across emotion continua with and without lexical distinctions. Our results show that the perception of affective signals is not driven by lexical labels, instead lending support to accounts of emotions as a set of biologically evolved mechanisms.


Copyright © Beijing Qinyun Technology Development Co., Ltd. (北京勤云科技发展有限公司)  京ICP备09084417号