Similar Documents
 20 similar documents retrieved.
1.
Nonverbal learning disability is a childhood disorder with basic neuropsychological deficits in visuospatial processing and psychomotor coordination, and secondary impairments in academic and social-emotional functioning. This study examines emotion recognition, understanding, and regulation in a clinic-referred group of young children with nonverbal learning disabilities (NLD). These processes have been shown to be related to social competence and psychological adjustment in typically developing (TD) children. Psychosocial adjustment and social skills are also examined for this young group, and for a clinic-referred group of older children with NLD. The young children with NLD scored lower than the TD comparison group on tasks assessing recognition of happy and sad facial expressions and tasks assessing understanding of how emotions work. Children with NLD were also rated as having less adaptive regulation of their emotions. For both young and older children with NLD, internalizing and externalizing problem scales were rated higher than for the TD comparison groups, and the means of the internalizing, attention, and social problem scales were found to fall within clinically concerning ranges. Measures of attention and nonverbal intelligence did not account for the relationship between NLD and Social Problems. Social skills and NLD membership share mostly overlapping variance in accounting for internalizing problems across the sample. The results are discussed within a framework wherein social cognitive deficits, including emotion processes, have a negative impact on social competence, leading to clinically concerning levels of depression and withdrawal in this population.

2.
In daily experience, children have access to a variety of cues to others’ emotions, including face, voice, and body posture. Determining which cues they use at which ages will help to reveal how the ability to recognize emotions develops. For happiness, sadness, anger, and fear, preschoolers (3-5 years, N = 144) were asked to label the emotion conveyed by dynamic cues in four cue conditions. The Face-only, Body Posture-only, and Multi-cue (face, body, and voice) conditions all were well recognized (M > 70%). In the Voice-only condition, recognition of sadness was high (72%), but recognition of the three other emotions was significantly lower (34%).

3.
The voice is a marker of a person's identity which allows individual recognition even if the person is not in sight. Listening to a voice also affords inferences about the speaker's emotional state. Both these types of personal information are encoded in characteristic acoustic feature patterns analyzed within the auditory cortex. In the present study, 16 volunteers listened to pairs of non-verbal voice stimuli with happy or sad valence in two different task conditions while event-related brain potentials (ERPs) were recorded. In an emotion matching task, participants indicated whether the expressed emotion of a target voice was congruent or incongruent with that of a (preceding) prime voice. In an identity matching task, participants indicated whether or not the prime and target voice belonged to the same person. Effects based on the emotion expressed occurred earlier than those based on voice identity. Specifically, P2 amplitudes (at approximately 200 ms) were reduced for happy voices when primed by happy voices. Identity match effects, by contrast, did not start until around 300 ms. These results show a task-specific, emotion-based influence on early stages of auditory sensory processing.

4.
Speech and song are universal forms of vocalization that may share aspects of emotional expression. Research has focused on parallels in acoustic features, overlooking facial cues to emotion. In three experiments, we compared moving facial expressions in speech and song. In Experiment 1, vocalists spoke and sang statements, each conveyed with five emotions. Vocalists exhibited emotion-dependent movements of the eyebrows and lip corners that transcended speech–song differences. Vocalists’ jaw movements were coupled to their acoustic intensity, exhibiting differences across emotion and speech–song. Vocalists’ emotional movements extended beyond vocal sound to include large sustained expressions, suggesting a communicative function. In Experiment 2, viewers judged silent videos of vocalists’ facial expressions prior to, during, and following vocalization. Emotional intentions were identified accurately for movements during and after vocalization, suggesting that these movements support the acoustic message. Experiment 3 compared emotion identification in voice-only, face-only, and face-and-voice recordings. Emotions were poorly identified from voice-only singing but were accurately identified in all other conditions, confirming that facial expressions conveyed emotion more accurately than the voice in song, while the two channels were equivalent in speech. Collectively, these findings highlight broad commonalities in the facial cues to emotion in speech and song, alongside differences in perception and acoustic-motor production.

5.
Subjects ("senders") encoded six emotions twice, first via facial expressions and second via tone of voice. These expressions were recorded and presented for decoding to the senders and an additional group of judges. Results were as follows: (a) the ability to encode and the ability to decode both visual and auditory cues were significantly related; (b) the relationship between encoding and decoding cues of the same emotion appeared low or negative; (c) the ability to decode visual cues was significantly related to the ability to decode auditory cues, but the correlations among encoding (and decoding) scores on different emotions were low; (d) females were slightly better encoders, and significantly better decoders, than males; (e) acquaintance between sender and judge improved decoding scores among males but not among females; (f) auditory decoding scores were higher than visual decoding scores, particularly among males; (g) auditory decoding scores were relatively high if sender and judge were of the same sex, while visual decoding scores were relatively high if sender and judge were of opposite sexes; (h) decoding scores varied according to channel of communication and type of emotion transmitted.  相似文献   

6.
Continua of vocal emotion expressions, ranging from one expression to another, were created using speech synthesis. Each emotion continuum consisted of expressions differing by equal physical amounts. In 2 experiments, subjects identified the emotion of each expression and discriminated between pairs of expressions. Identification results show that the continua were perceived as 2 distinct sections separated by a sudden category boundary. Also, discrimination accuracy was generally higher for pairs of stimuli falling across category boundaries than for pairs belonging to the same category. These results suggest that vocal expressions are perceived categorically. Results are interpreted from an evolutionary perspective on the function of vocal expression.
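A minimal sketch of the two analyses this kind of categorical-perception study rests on, assuming a seven-step synthesized continuum: fitting a logistic identification curve to locate the category boundary, then comparing discrimination accuracy for pairs that straddle the boundary versus pairs within a category. The identification proportions and pair accuracies below are invented for illustration and are not the study's data.

```python
import numpy as np
from scipy.optimize import curve_fit

# Hypothetical proportion of "anger" responses at 7 equally spaced morph steps.
steps = np.arange(1, 8)
p_anger = np.array([0.03, 0.05, 0.10, 0.25, 0.80, 0.93, 0.97])

def logistic(x, x0, k):
    """Two-parameter logistic identification curve; x0 is the category boundary."""
    return 1.0 / (1.0 + np.exp(-k * (x - x0)))

(x0, k), _ = curve_fit(logistic, steps, p_anger, p0=[4.5, 1.0])
print(f"Estimated category boundary at step {x0:.2f} (slope {k:.2f})")

# Hypothetical discrimination accuracy for adjacent pairs (1,2) ... (6,7).
pair_mid = np.arange(1.5, 7.0)                                   # midpoints of the six pairs
pair_accuracy = np.array([0.55, 0.58, 0.62, 0.85, 0.60, 0.57])
across = pair_accuracy[np.abs(pair_mid - x0) < 0.5]              # pair(s) straddling the boundary
within = pair_accuracy[np.abs(pair_mid - x0) >= 0.5]             # pairs within a category
print(f"Across-boundary accuracy: {across.mean():.2f}; within-category: {within.mean():.2f}")
```

If perception is categorical, the pair straddling the fitted boundary should show markedly higher discrimination accuracy than within-category pairs, which is the pattern the invented numbers above mimic.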

7.
The goal of this research was to investigate the impact of nonverbal expressive cues on the attribution of the Big Five personality traits. Expressive cues of fear, disgust, happiness, and sadness were elicited from a sample of 22 encoders while they watched films, narrated, and posed. Encoders’ personalities were rated by themselves and by unacquainted raters who watched the encoders, and blind judges rated the traits of a typical student. Expressive cues influenced the raters’ attribution of personality, but this influence was weakest when the encoders expressed happiness (vs. negative emotions) and when they were narrating an emotional experience (when the cues were least potent). Negative and strong expressive cues interfered with the application of a normative, and more accurate, judgment strategy.

8.
Normal observers demonstrate a bias to process the left sides of faces during perceptual judgments about identity or emotion. This effect suggests a right cerebral hemisphere processing bias. To test the role of the right hemisphere and the involvement of configural processing underlying this effect, young and older control observers and patients with right hemisphere damage completed two chimeric faces tasks (emotion judgment and face identity matching) with both upright and inverted faces. For control observers, the emotion judgment task elicited a strong left-sided perceptual bias that was reduced in young controls and eliminated in older controls by face inversion. Right hemisphere damage reversed the bias, suggesting that the right hemisphere was dominant for this task but that the left hemisphere could be flexibly recruited when right hemisphere mechanisms were not available or dominant. In contrast, face identity judgments were associated most clearly with a vertical bias favouring the uppermost stimuli, which was eliminated by face inversion and right hemisphere lesions. The results suggest these tasks involve different neurocognitive mechanisms. The roles of the right hemisphere and of ventral cortical stream involvement in configural face processing are discussed.

9.
There is strong evidence of shared acoustic profiles common to the expression of emotions in music and speech, yet relatively limited understanding of the specific psychoacoustic features involved. This study combined a controlled experiment and computational modelling to investigate the perceptual codes associated with the expression of emotion in the acoustic domain. The empirical stage of the study provided continuous human ratings of emotions perceived in excerpts of film music and natural speech samples. The computational stage created a computer model that retrieves the relevant information from the acoustic stimuli and makes predictions about the emotional expressiveness of speech and music that closely match the responses of human subjects. We show that a significant part of listeners’ second-by-second emotion reports for music and speech prosody can be predicted from a set of seven psychoacoustic features: loudness, tempo/speech rate, melody/prosody contour, spectral centroid, spectral flux, sharpness, and roughness. The implications of these results are discussed in the context of cross-modal similarities in the communication of emotion in the acoustic domain.
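The modelling pipeline is described only at a high level above. The following is a hedged sketch, not the authors' code, of how frame-level proxies for a few of the listed features (loudness, spectral centroid, spectral flux) could be extracted and regressed onto second-by-second emotion ratings. The file names and the ratings file are placeholders, and sharpness, roughness, tempo/speech rate, and melodic/prosodic contour are omitted because they require more specialised estimators than this sketch includes.

```python
import librosa
import numpy as np
from sklearn.linear_model import LinearRegression

# Placeholder audio file (hypothetical); the study used film-music and speech excerpts.
y, sr = librosa.load("clip.wav", sr=22050)
hop = 512  # analysis hop length in samples

# Frame-wise proxies for some of the psychoacoustic features named in the abstract.
loudness = librosa.feature.rms(y=y, hop_length=hop)[0]                       # loudness proxy
centroid = librosa.feature.spectral_centroid(y=y, sr=sr, hop_length=hop)[0]  # spectral centroid
flux = librosa.onset.onset_strength(y=y, sr=sr, hop_length=hop)              # spectral-flux proxy

n = min(len(loudness), len(centroid), len(flux))
frame_features = np.stack([loudness[:n], centroid[:n], flux[:n]], axis=1)

# Average frames into one-second windows to match second-by-second ratings.
frames_per_sec = sr // hop
n_sec = n // frames_per_sec
X = frame_features[: n_sec * frames_per_sec].reshape(n_sec, frames_per_sec, -1).mean(axis=1)

# Hypothetical per-second human ratings (e.g., perceived valence or arousal).
ratings = np.loadtxt("ratings_per_second.txt")[:n_sec]
model = LinearRegression().fit(X, ratings)
print("R^2 of the feature-based predictor:", model.score(X, ratings))
```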

10.
Facial attributes such as race, sex, and age can interact with emotional expressions; however, only a few studies have investigated the nature of the interaction between facial age cues and emotional expressions, and these have produced inconsistent results. Additionally, these studies have not addressed the mechanism(s) driving the influence of facial age cues on emotional expression or vice versa. In the current study, participants categorised young and older adult faces expressing happiness and anger (Experiment 1) or sadness (Experiment 2) by their age and by their emotional expression. Age cues moderated categorisation of happiness vs. anger and sadness in the absence of any influence of emotional expression on age categorisation times. This asymmetrical interaction suggests that facial age cues are obligatorily processed prior to emotional expressions. The categorisation advantage for happiness expressed on young faces, relative to both anger and sadness (which are negative in valence but differ in their congruence with old-age stereotypes and in their structural overlap with age cues), suggests that the observed influence of facial age cues on emotion perception is due to the congruence between relatively positive evaluations of young faces and happy expressions.

11.
We examined what determines the typicality, or graded structure, of vocal emotion expressions. Separate groups of judges rated acted and spontaneous expressions of anger, fear, and joy with regard to their typicality and three main determinants of the graded structure of categories: category members' similarity to the central tendency of their category (CT); category members' frequency of instantiation, i.e., how often they are encountered as category members (FI); and category members' similarity to ideals associated with the goals served by their category, i.e., their suitability to express particular emotions. Partial correlations and multiple regression analysis revealed that similarity to ideals, rather than CT or FI, explained most variance in judged typicality. Results thus suggest that vocal emotion expressions constitute ideal-based, goal-derived categories rather than taxonomic categories based on CT and FI. This could explain how prototypical expressions can be acoustically distinct and highly recognisable yet occur relatively rarely in everyday speech.
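As an illustration of the analytic logic, here is a minimal multiple-regression sketch with invented data, in which judged typicality is predicted from CT similarity, FI, and similarity to ideals. The simulated coefficients simply echo the reported pattern (ideals dominate) and are not the study's data.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 120  # hypothetical number of rated expressions

# Invented predictor ratings for each expression.
ct = rng.normal(size=n)      # similarity to the category's central tendency
fi = rng.normal(size=n)      # frequency of instantiation
ideals = rng.normal(size=n)  # similarity to ideals

# Simulate typicality driven mainly by similarity to ideals, mirroring the reported result.
typicality = 0.1 * ct + 0.1 * fi + 0.7 * ideals + rng.normal(scale=0.5, size=n)

X = sm.add_constant(np.column_stack([ct, fi, ideals]))
fit = sm.OLS(typicality, X).fit()
print(fit.summary(xname=["const", "CT", "FI", "Ideals"]))
```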

12.
A growing body of research suggests that pride and shame are associated with distinct, cross-culturally recognised nonverbal expressions, which are spontaneously displayed in situations of success and failure, respectively. Here, we review these findings, then offer a theoretical account of the adaptive benefits of these displays. We argue that both pride and shame expressions function as social signals that benefit both observers and expressers. Specifically, pride displays function to signal high status, which benefits displayers by according them deference from others, and benefits observers by affording them valuable information about social-learning opportunities. Shame displays function to appease others after a social transgression, which benefits displayers by allowing them to avoid punishment and negative appraisals, and observers by easing their identification of committed group members and followers.

13.
Previous research has highlighted theoretical and empirical links between measures of both personality and trait emotional intelligence (EI) and the ability to decode facial expressions of emotion. Research has also found that the posed, static characteristics of the photographic stimuli used to explore these links affect the decoding process and differentiate them from the natural expressions they represent. This undermines the ecological validity of established trait-emotion decoding relationships. This study addresses these methodological shortcomings by testing relationships between personality and trait EI and the reliability of participant ratings of dynamic, spontaneously elicited expressions of emotion. Fifty participants completed personality and self-report EI questionnaires, and used a computer-logging program to continuously rate change in emotional intensity expressed in video clips. Each clip was rated twice to obtain an intra-rater reliability score. The results provide limited support for links between both trait EI and personality variables and how reliably we decode natural expressions of emotion. Limitations and future directions are discussed.
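A minimal sketch of one plausible way to compute the intra-rater reliability score mentioned above: correlating the two continuous rating passes a participant produced for the same clip. The rating traces are simulated, and the original study's exact reliability metric is not specified here.

```python
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(1)
true_intensity = np.cumsum(rng.normal(size=300))          # hypothetical "true" intensity trace
pass1 = true_intensity + rng.normal(scale=2.0, size=300)  # first continuous rating pass
pass2 = true_intensity + rng.normal(scale=2.0, size=300)  # second pass, same clip

r, p = pearsonr(pass1, pass2)
print(f"Intra-rater reliability (Pearson r) = {r:.2f}")
```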

14.
In two studies, the robustness of anger recognition from bodily expressions is tested. In the first study, video recordings of an actor expressing four distinct emotions (anger, despair, fear, and joy) were structurally manipulated with respect to image impairment and body segmentation. The results show that anger recognition is more robust to image impairment and to body segmentation than recognition of the other emotions. Moreover, the study showed that arms expressing anger were more robustly recognised than arms expressing other emotions. Study 2 added face blurring as a variable to the bodily expressions and showed that it decreased emotion recognition accuracy, more so for joy and despair than for anger and fear. In sum, the paper demonstrates the robustness of anger recognition from bodily expressions degraded at multiple levels.

15.
Subjects' facial expressions were videotaped without their knowledge while they watched two pleasant and two unpleasant videotaped scenes (spontaneous facial encoding). Later, subjects' voices were audiotaped while describing their reactions to the scenes (vocal encoding). Finally, subjects were videotaped with their knowledge while they posed appropriate facial expressions to the scenes (posed facial encoding). The videotaped expressions were presented for decoding to the same subjects. The vocal material, both the original version and an electronically filtered version, was rated by judges other than the original senders. Results were as follows: (a) accuracy of vocal encoding (measured by ratings of both the filtered and unfiltered versions) was positively related to accuracy of facial encoding; (b) posing increased the accuracy of facial communication, particularly for more pleasant affects and less intense affects; (c) encoding of posed cues was correlated with encoding of spontaneous cues and decoding of posed cues was correlated with decoding of spontaneous cues; (d) correlations, within encoding and decoding, of similar scenes were positive while those among dissimilar scenes were low or negative; (e) while correlations between total encoding and total decoding were positive and low, correlations between encoding and decoding of the same scene were negative; (f) there were sex differences in decoding ability and in the relationships of personality variables with encoding and decoding of facial cues.

16.
The present study examines how toddler emotions may influence their own or their parents’ participation in parent-toddler verbal conversation. Limited, indirect evidence suggests that toddler positive emotions may encourage, whereas negative emotions may disrupt, parent-toddler verbal exchanges, but these hypotheses have not been tested directly. We investigated two aspects of toddler emotions (their emotion expressions and their emotional traits) and examined their relations with parent-toddler verbal conversation engagement. In a sample of families with 18-month-olds (N = 120), we used live, unstructured home observations of toddler emotion expressions and spontaneous parent-toddler verbalizations, and collected parent ratings of toddler temperament. We found that less surgent toddlers who expressed more frequent negative emotion attempted fewer verbalizations. Among all toddlers, those expressing positive emotion received more frequent parent verbal responses, and, unexpectedly, more failed parent attempts to engage their toddler in conversation. Parent-initiated conversation was unrelated to toddler emotion expressions or emotional traits. We discuss how best to integrate the study of early emotional and language development from a transactional perspective.

17.
18.
The paper sketches the historical development from emotion as a mysterious entity and a source of maladaptive behaviour to emotion as a collection of ingredients and a source of adaptive as well as maladaptive behaviour. We argue, however, that the underlying mechanism proposed to take care of this adaptive behaviour is not entirely up to its task. We outline an alternative view that explains so-called emotional behaviour with the same mechanism as non-emotional behaviour, but that is at the same time more likely to produce adaptive behaviour. The phenomena that were initially seen as requiring a separate emotional mechanism to influence and cause behaviour can also be explained by a goal-directed mechanism, provided that more goals and other complexities inherent in the goal-directed process are taken into account.

19.
The authors investigated the ability of children with emotional and behavioral difficulties, divided according to their Psychopathy Screening Device scores (P. J. Frick & R. D. Hare, in press), to recognize emotional facial expressions and vocal tones. The Psychopathy Screening Device indexes a behavioral syndrome with two dimensions: affective disturbance and impulsivity/conduct problems. Nine children with psychopathic tendencies and 9 comparison children were presented with 2 facial expression and 2 vocal tone subtests from the Diagnostic Analysis of Nonverbal Accuracy (S. Nowicki & M. P. Duke, 1994). These subtests measure the ability to name sad, fearful, happy, and angry facial expressions and vocal affects. The children with psychopathic tendencies showed selective impairments in the recognition of both sad and fearful facial expressions and sad vocal tone. In contrast, the two groups did not differ in their recognition of happy or angry facial expressions or of fearful, happy, and angry vocal tones. The results are interpreted with reference to the suggestion that the development of psychopathic tendencies may reflect early amygdala dysfunction (R. J. R. Blair, J. S. Morris, C. D. Frith, D. I. Perrett, & R. Dolan, 1999).

20.
Valence-specific laterality effects have been frequently obtained in facial emotion perception but not in vocal emotion perception. We report a dichotic listening study further examining whether valence-specific laterality effects generalise to vocal emotions. Based on previous literature, we tested whether valence-specific laterality effects were dependent on blocked presentation of the emotion conditions, on the naturalness of the emotional stimuli, or on listener sex. We presented happy and sad sentences, paired with neutral counterparts, dichotically in an emotion localisation task, with vocal stimuli being preceded by verbal labels indicating target emotions. The measure was accuracy. When stimuli of the same emotion were presented as a block, a valence-specific laterality effect was demonstrated, but only in original stimuli and not morphed stimuli. There was a separate interaction with listener sex. We interpret our findings as suggesting that the valence-specific laterality hypothesis is supported only in certain circumstances. We discuss modulating factors, and we consider whether the mechanisms underlying those factors may be attentional or experiential in nature.
