Similar Documents
20 similar documents retrieved.
1.
Human beings recognize emotions from speech remarkably well, and information and communication technology aims to implement machines and agents that can do the same. To recognize affective states from speech signals automatically, however, two main technological problems must be solved. The first concerns identifying effective and efficient processing algorithms capable of capturing emotional acoustic features from spoken sentences. The second concerns finding computational models able to classify a given set of emotional states approximately as well as human listeners do. This paper surveys these topics and provides some insights toward a holistic approach to the automatic analysis, recognition, and synthesis of affective states.
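As a rough illustration of the two-stage pipeline this survey describes (acoustic feature extraction followed by classification), the sketch below assumes librosa for the features and scikit-learn for the classifier; the feature set, the SVM, and the load_dataset() helper are illustrative stand-ins, not the survey's own method.

```python
import numpy as np
import librosa
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split

def acoustic_features(wav_path):
    """Summarize one utterance as a fixed-length vector of emotional
    acoustic features: spectral shape (MFCCs), pitch (f0), and energy."""
    y, sr = librosa.load(wav_path, sr=16000)
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13)
    f0 = librosa.yin(y, fmin=50, fmax=400, sr=sr)
    rms = librosa.feature.rms(y=y)
    return np.concatenate([mfcc.mean(axis=1), mfcc.std(axis=1),
                           [np.nanmean(f0), np.nanstd(f0)],
                           [rms.mean(), rms.std()]])

# load_dataset() is a hypothetical helper returning .wav paths and
# emotion labels from some emotional-speech corpus.
paths, labels = load_dataset()
X = np.vstack([acoustic_features(p) for p in paths])
X_tr, X_te, y_tr, y_te = train_test_split(X, labels, test_size=0.2)
clf = SVC(kernel="rbf").fit(X_tr, y_tr)          # the classification model
print("held-out accuracy:", clf.score(X_te, y_te))
```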

2.
The recognition of nonverbal emotional signals and the integration of multimodal emotional information are essential for successful social communication among humans of any age. Whereas prior studies of age dependency in emotion recognition often focused on either the prosodic or the facial aspect of nonverbal signals, our purpose was to create a more naturalistic setting by presenting dynamic stimuli under three experimental conditions: auditory, visual, and audiovisual. Eighty-four healthy participants (44 women, 40 men; age range 20-70 years) were tested for their ability to recognize emotions either mono- or bimodally on the basis of emotional (happy, alluring, angry, disgusted) and neutral nonverbal stimuli from voice and face. Additionally, we assessed visual and auditory acuity, working memory, verbal intelligence, and emotional intelligence to explore potential explanatory effects of these population parameters on the relationship between age and emotion recognition. Applying unbiased hit rates as the performance measure, we analyzed the data with linear regression analyses, t tests, and mediation analyses. We found a linear, age-related decrease in emotion recognition independent of stimulus modality and emotional category. In contrast, the improvement in recognition rates associated with audiovisual integration of bimodal stimuli appears to be maintained over the life span. The reduction in emotion recognition ability at an older age could not be sufficiently explained by age-related decreases in hearing, vision, working memory, or verbal intelligence. These findings suggest alterations in social perception at a level of complexity beyond basic perceptual and cognitive abilities.
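The unbiased hit rate applied above is, by standard convention (Wagner, 1993), the squared number of correct responses for a category divided by the product of how often that category was presented and how often it was used as a response; it thereby penalizes indiscriminate overuse of a response label. A minimal sketch with a hypothetical confusion matrix:

```python
import numpy as np

def unbiased_hit_rates(confusion):
    """Wagner's Hu per category: Hu_i = n_ii^2 / (row_i sum * col_i sum),
    with rows = presented emotions and columns = given responses."""
    confusion = np.asarray(confusion, dtype=float)
    correct = np.diag(confusion)
    presented = confusion.sum(axis=1)   # how often each emotion was shown
    responded = confusion.sum(axis=0)   # how often each label was answered
    return correct**2 / (presented * responded)

# Hypothetical 3-emotion confusion matrix (happy, angry, neutral)
cm = [[18,  1,  1],
      [ 2, 15,  3],
      [ 4,  2, 14]]
print(unbiased_hit_rates(cm))  # one Hu value in [0, 1] per emotion
```

Hu values are commonly arcsine-transformed before entering parametric analyses such as the regressions and t tests mentioned above.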

3.
4.
A problem in the processing of emotions has long been thought to be strongly associated with the aetiology and maintenance of personality disorders. Previous research has demonstrated a hyposensitivity to faces expressing fear in individuals high in psychopathic traits, whereas patients with borderline personality disorder (BPD) have been shown to be hypersensitive to expressions in general. However, many previous findings could be explained by a bias toward reporting particular expressions rather than a change in sensitivity to them. Using two tasks, the present study examined both the detection and the recognition of four emotional expressions (anger, happiness, sadness, and fear) in a community sample of males and females. Measures of self-reported psychopathy and borderline personality traits were administered. The results showed marked gender differences. Psychopathy was negatively related to performance in both the detection and recognition of fear, but only for males. Borderline personality traits were positively related to overall performance in the recognition task, but only for females. The results suggest strong differences between the genders in the role that emotional processing might play.

5.
Recognition of emotional facial expressions is a central topic in the psychology of emotion. This study presents two experiments. The first experiment analyzed recognition accuracy for the basic emotions of happiness, anger, fear, sadness, surprise, and disgust. Thirty pictures (five for each emotion) were displayed to 96 participants to assess recognition accuracy. The results showed that recognition accuracy varied significantly across emotions. The second experiment analyzed the effects of contextual information on recognition accuracy. Information either congruent or incongruent with a facial expression was displayed before the pictures of facial expressions were presented. The results of the second experiment showed that congruent information improved facial expression recognition, whereas incongruent information impaired it.

6.
The present study explored the influence of facial emotional expressions on preschoolers' identity recognition using a two-alternative forced-choice matching task. A decrement was observed in children's performance with emotional faces compared with neutral faces, both when a happy emotional expression remained unchanged between the target face and the test faces and when the expression changed from happy to neutral or from neutral to happy between the target and the test faces (Experiment 1). Negative emotional expressions (i.e., fear and anger) also interfered with children's identity recognition (Experiment 2). The evidence obtained suggests that in preschool-age children, facial emotional expressions are processed in interaction with, rather than independently from, the encoding of facial identity information. The results are discussed in relation to relevant research conducted with adults and children.

7.
Three experiments examined 3- and 5-year-olds’ recognition of faces in constant and varied emotional expressions. Children were asked to identify repeatedly presented target faces, distinguishing them from distractor faces, during an immediate recognition test and during delayed assessments after 10 min and one week. Emotional facial expression remained neutral (Experiment 1) or varied between immediate and delayed tests: from neutral to smile and anger (Experiment 2), from smile to neutral and anger (Experiment 3, condition 1), or from anger to neutral and smile (Experiment 3, condition 2). In all experiments, immediate face recognition was not influenced by emotional expression for either age group. Delayed face recognition was most accurate for faces in identical emotional expression. For 5-year-olds, delayed face recognition (with varied emotional expression) was not influenced by which emotional expression had been displayed during the immediate recognition test. Among 3-year-olds, accuracy decreased when facial expressions varied from neutral to smile and anger but was constant when facial expressions varied from anger or smile to neutral, smile or anger. Three-year-olds’ recognition was facilitated when faces initially displayed smile or anger expressions, but this was not the case for 5-year-olds. Results thus indicate a developmental progression in face identity recognition with varied emotional expressions between ages 3 and 5.

8.
Scores on a single endurance test that discriminates the performance of potentially talented under-age players are insufficient for predicting later performance, but such data could be useful when considered together with other test scores.

9.
Continua of vocal emotion expressions, ranging from one expression to another, were created using speech synthesis. Each emotion continuum consisted of expressions differing by equal physical amounts. In 2 experiments, subjects identified the emotion of each expression and discriminated between pairs of expressions. Identification results show that the continua were perceived as 2 distinct sections separated by a sudden category boundary. Also, discrimination accuracy was generally higher for pairs of stimuli falling across category boundaries than for pairs belonging to the same category. These results suggest that vocal expressions are perceived categorically. Results are interpreted from an evolutionary perspective on the function of vocal expression.
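A continuum of "equal physical amounts" can be pictured as linear interpolation of synthesis parameters between two endpoint expressions. The sketch below is only illustrative: the parameter set, values, and step count are assumptions, and the study itself used full speech synthesis rather than three summary parameters.

```python
import numpy as np

# Hypothetical endpoint parameter sets for two vocal expressions.
neutral = {"f0_mean_hz": 120.0, "duration_s": 1.00, "intensity_db": 60.0}
angry   = {"f0_mean_hz": 220.0, "duration_s": 0.70, "intensity_db": 75.0}

def continuum(a, b, n_steps=7):
    """Return n_steps parameter sets spaced by equal physical amounts."""
    return [{k: (1 - w) * a[k] + w * b[k] for k in a}
            for w in np.linspace(0.0, 1.0, n_steps)]

for i, params in enumerate(continuum(neutral, angry)):
    print(f"step {i}: {params}")  # step 0 = neutral endpoint, step 6 = angry
```

Categorical perception would then show up as a sharp identification boundary somewhere along these physically evenly spaced steps, with better discrimination for pairs straddling that boundary.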

10.
Face recognition occurs when a face is recognised despite changes between learning and test exposures. Yet there has been relatively little research on how variations in emotional expression influence people’s ability to recognise these changes. We evaluated the ability to discriminate old and similar expressions of emotion on the same face (i.e. mnemonic discrimination), as well as the ability to discriminate between old and dissimilar (new) expressions of the same face, reflecting traditional discrimination. An emotional mnemonic discrimination task with morphed faces that were similar but not identical to the original face was used. Results showed greater mnemonic discrimination for learned neutral expressions that at test became slightly more fearful rather than happy. For traditional discrimination, there was greater accuracy for learned happy faces becoming fearful rather than for those changing from fearful to happy. These findings indicate that emotional expressions may have asymmetrical influences on mnemonic and traditional discrimination of the same face.

11.
The goal of this review is to critically examine contradictory findings in the study of visual search for emotionally expressive faces. Several key issues are addressed: Can emotional faces be processed preattentively and guide attention? What properties of these faces influence search efficiency? Is search moderated by the emotional state of the observer? The authors argue that the evidence is consistent with claims that (a) preattentive search processes are sensitive to and influenced by facial expressions of emotion, (b) attention guidance is influenced by a dynamic interplay of emotional and perceptual factors, and (c) visual search for emotional faces is influenced by the emotional state of the observer to some extent. The authors also argue that the way in which contextual factors interact to determine search performance needs to be explored further to draw sound conclusions about the precise influence of emotional expressions on search efficiency. Methodological considerations (e.g., set size, distractor background, task set) and ecological limitations of the visual search task are discussed. Finally, specific recommendations are made for future research directions.

12.
In the face literature, it is debated whether the identification of facial expressions requires holistic (i.e., whole face) or analytic (i.e., parts-based) information. In this study, happy and angry composite expressions were created in which the top and bottom face halves formed either an incongruent (e.g., angry top + happy bottom) or congruent composite expression (e.g., happy top + happy bottom). Participants reported the expression in the target top or bottom half of the face. In Experiment 1, the target half in the incongruent condition was identified less accurately and more slowly relative to the baseline isolated expression or neutral face conditions. In contrast, no differences were found between congruent and the baseline conditions. In Experiment 2, the effects of exposure duration were tested by presenting faces for 20, 60, 100 and 120 ms. Interference effects for the incongruent faces appeared at the earliest 20 ms interval and persisted for the 60, 100 and 120 ms intervals. In contrast, no differences were found between the congruent and baseline face conditions at any exposure interval. In Experiment 3, it was found that spatial alignment impaired the recognition of incongruent expressions, but had no effect on congruent expressions. These results are discussed in terms of holistic and analytic processing of facial expressions.
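Composite stimuli of this kind can be built by pasting the top half of one expression onto the bottom half of another, with an optional horizontal offset for the misaligned control used in Experiment 3. A minimal sketch, assuming two same-sized face images with hypothetical file names (not the study's own stimulus code):

```python
from PIL import Image

def make_composite(top_path, bottom_path, misaligned=False):
    """Combine the top half of one face with the bottom half of another."""
    top_img = Image.open(top_path)
    bottom_img = Image.open(bottom_path)    # assumed same size as top_img
    w, h = top_img.size
    extra = w // 2 if misaligned else 0     # widen canvas for the offset
    canvas = Image.new("RGB", (w + extra, h), "white")
    canvas.paste(top_img.crop((0, 0, w, h // 2)), (0, 0))
    canvas.paste(bottom_img.crop((0, h // 2, w, h)), (extra, h // 2))
    return canvas

# e.g., an incongruent composite: angry top + happy bottom, aligned
make_composite("angry_face.png", "happy_face.png").save("composite.png")
```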

13.
Upright and inverted faces were used to determine whether 7-month-old infants discriminate emotional expressions on the basis of affectively relevant information. In Experiment 1, infants recognized the similarity of happy faces over changing identities and discriminated this expression from fear and anger when the stimuli were presented upright, but not when they were inverted. In Experiment 2, infants were able to discriminate happiness from fear and anger posed by a single model, regardless of the orientation of the stimuli. From these studies it was suggested that categorizing emotional expressions depends upon attending to affectively relevant, orientation-specific information, whereas the discrimination of emotional expressions can be done on a featural basis, something that remains invariant regardless of the orientation of the stimuli. In Experiment 3, infants discriminated toothy happiness posed by several models from nontoothy happiness and nontoothy anger when the stimuli were presented upright and inverted. Thus, when salient features were available, the infants based their discriminations on perceptual aspects rather than on conceptual aspects such as categories of emotions.

14.
Similar samples of English, Italian, and Japanese subjects were asked to identify 8 emotional states and 4 interpersonal attitudes from videotaped expressions of 2 performers from each of these cultures. All sets of judgements were above chance, except Italians judging Japanese. The Japanese subjects were no different from the English and Italian subjects in recognition ability, but the Japanese performances were harder to recognize, supporting Ekman's theory of display rules; in fact, all Japanese expressions were difficult to recognize, with the exception of happy-friendly. The Japanese performers made a clearer distinction between sad and depressed than the other cultural groups, but did not distinguish between happy and friendly, or between angry and hostile.

15.
Traditional theories of career development are inadequate given the complexity of today's work world. As changes occur in society and the world of work, employment counselors need new ways of dealing with their clients. This article introduces employment counselors to the “leisure theory of career development” (J. J. Liptak, 2000) as an alternative to traditional trait-and-factor counseling approaches.

16.
An extensive literature credits the right hemisphere with dominance for processing emotion. Conflicting literature finds left hemisphere dominance for positive emotions. This conflict may be resolved by attending to processing stage. A divided output (bimanual) reaction time paradigm in which response hand was varied for emotion (angry; happy) in Experiments 1 and 2 and for gender (male; female) in Experiment 3 focused on response to emotion rather than perception. In Experiments 1 and 2, reaction time was shorter when right-hand responses indicated a happy face and left-hand responses an angry face, as compared to reversed assignment. This dissociation did not obtain with incidental emotion (Experiment 3). Results support the view that response preparation to positive emotional stimuli is left lateralized.

17.
18.
To establish a valid database of vocal emotional stimuli in Mandarin Chinese, a set of Chinese pseudosentences (i.e., semantically meaningless sentences that resembled real Chinese) were produced by four native Mandarin speakers to express seven emotional meanings: anger, disgust, fear, sadness, happiness, pleasant surprise, and neutrality. These expressions were identified by a group of native Mandarin listeners in a seven-alternative forced choice task, and items reaching a recognition rate of at least three times chance performance in the seven-choice task were selected as a valid database and then subjected to acoustic analysis. The results demonstrated expected variations in both perceptual and acoustic patterns of the seven vocal emotions in Mandarin. For instance, fear, anger, sadness, and neutrality were associated with relatively high recognition, whereas happiness, disgust, and pleasant surprise were recognized less accurately. Acoustically, anger and pleasant surprise exhibited relatively high mean f0 values and large variation in f0 and amplitude; in contrast, sadness, disgust, fear, and neutrality exhibited relatively low mean f0 values and small amplitude variations, and happiness exhibited a moderate mean f0 value and f0 variation. Emotional expressions varied systematically in speech rate and harmonics-to-noise ratio values as well. This validated database is available to the research community and will contribute to future studies of emotional prosody for a number of purposes. To access the database, please contact pan.liu@mail.mcgill.ca.
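The inclusion criterion is simple arithmetic: with seven response alternatives, chance is 1/7, so three times chance is 3/7 ≈ 42.9% correct. A minimal sketch of the selection step, with hypothetical item data:

```python
N_ALTERNATIVES = 7
THRESHOLD = 3 * (1 / N_ALTERNATIVES)   # 3x chance, approx. 0.429

items = [  # (item id, intended emotion, proportion correct) - hypothetical
    ("pseudo_01", "anger",    0.81),
    ("pseudo_02", "disgust",  0.38),   # below threshold -> excluded
    ("pseudo_03", "fear",     0.74),
    ("pseudo_04", "surprise", 0.45),
]

valid = [it for it in items if it[2] >= THRESHOLD]
for item_id, emotion, rate in valid:
    print(f"{item_id} ({emotion}): {rate:.0%} -> retained for acoustic analysis")
```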

19.
Vocal perception is particularly important for understanding a speaker's emotional state and intentions because, unlike facial perception, it is relatively independent of speaker distance and viewing conditions. The idea, derived from brain lesion studies, that vocal emotional comprehension is a special domain of the right hemisphere has failed to receive consistent support from neuroimaging. This conflict can be reconciled if vocal emotional comprehension is viewed as a multi-step process with individual neural representations. This view reveals a processing chain that proceeds from the ventral auditory pathway to brain structures implicated in cognition and emotion. Thus, vocal emotional comprehension appears to be mediated by bilateral mechanisms anchored within sensory, cognitive and emotional processing systems.

20.
Which brain regions are associated with recognition of emotional prosody? Are these distinct from those for recognition of facial expression? These issues were investigated by mapping the overlaps of co-registered lesions from 66 brain-damaged participants as a function of their performance in rating basic emotions. It was found that recognizing emotions from prosody draws on the right frontoparietal operculum, the bilateral frontal pole, and the left frontal operculum. Recognizing emotions from prosody and facial expressions draws on the right frontoparietal cortex, which may be important in reconstructing aspects of the emotion signaled by the stimulus. Furthermore, there were regions in the left and right temporal lobes that contributed disproportionately to recognition of emotion from faces or prosody, respectively.
