Similar Documents
20 similar documents were found.
1.
Emotional cues contain important information about the intentions and feelings of others. Despite a wealth of research into children's understanding of facial signals of emotions, little research has investigated the developmental trajectory of interpreting affective cues in the voice. In this study, 48 children ranging between 5 and 10 years were tested using forced-choice tasks with non-verbal vocalizations and emotionally inflected speech expressing different positive, neutral and negative states. Children as young as 5 years were proficient in interpreting a range of emotional cues from vocal signals. Consistent with previous work, performance was found to improve with age. Furthermore, the two tasks, examining recognition of non-verbal vocalizations and emotionally inflected speech, respectively, were sensitive to individual differences, with high correspondence of performance across the tasks. From this demonstration of children's ability to recognize emotions from vocal stimuli, we also conclude that this auditory emotion recognition task is suitable for a wide age range of children, providing a novel, empirical way to investigate children's affect recognition skills.

2.
Research on cross-modal performance in nonhuman primates is limited to a small number of sensory modalities and testing methods. To broaden the scope of this research, the authors tested capuchin monkeys (Cebus apella) for a seldom-studied cross-modal capacity in nonhuman primates, auditory-visual recognition. Monkeys were simultaneously played 2 video recordings of a face producing different vocalizations and a sound recording of 1 of the vocalizations. Stimulus sets varied from naturally occurring conspecific vocalizations to experimentally controlled human speech stimuli. The authors found that monkeys preferred to view face recordings that matched presented vocal stimuli. Their preference did not differ significantly across stimulus species or other stimulus features. However, the reliability of the latter set of results may have been limited by sample size. From these results, the authors concluded that capuchin monkeys exhibit auditory-visual cross-modal perception of conspecific vocalizations.

3.
The purpose of this study was to compare the recognition performance of children who identified facial expressions of emotions using adults' and children's stimuli. The subjects were 60 children equally distributed in six subgroups as a function of sex and three age levels: 5, 7, and 9 years. They had to identify the emotion that was expressed in 48 stimuli (24 adults' and 24 children's expressions) illustrating six emotions: happiness, surprise, fear, disgust, anger, and sadness. The task of the children consisted of selecting the facial stimulus that best matched a short story that clearly described an emotional situation. The results indicated that recognition performances were significantly affected by the age of the subjects: 5-year-olds were less accurate than 7- and 9-year-olds, who did not differ from each other. There were also differences in recognition levels between emotions. No effects related to the sex of the subjects or to the age of the facial stimuli were observed.

4.
Pell MD. Brain and Cognition, 2002, 48(2-3): 499-504
This report describes some preliminary attributes of stimuli developed for future evaluation of nonverbal emotion in neurological populations with acquired communication impairments. Facial and vocal exemplars of six target emotions were elicited from four male and four female encoders and then prejudged by 10 young decoders to establish the category membership of each item at an acceptable consensus level. Representative stimuli were then presented to 16 additional decoders to gather indices of how category membership and encoder gender influenced recognition accuracy of emotional meanings in each nonverbal channel. Initial findings pointed to greater facility in recognizing target emotions from facial than vocal stimuli overall and revealed significant accuracy differences among the six emotions in both the vocal and facial channels. The gender of the encoder portraying emotional expressions was also a significant factor in how well decoders recognized specific emotions (disgust, neutral), but only in the facial condition.

5.
Young and old adults' ability to recognize emotions from vocal expressions and music performances was compared. The stimuli consisted of (a) acted speech (anger, disgust, fear, happiness, and sadness; each posed with both weak and strong emotion intensity), (b) synthesized speech (anger, fear, happiness, and sadness), and (c) short melodies played on the electric guitar (anger, fear, happiness, and sadness; each played with both weak and strong emotion intensity). The listeners' recognition of discrete emotions and emotion intensity was assessed, and the recognition rates were controlled for various response biases. Results showed emotion-specific age-related differences in recognition accuracy. Old adults consistently received significantly lower recognition rates for negative, but not for positive, emotions for both speech and music stimuli. Some age-related differences were also evident in the listeners' ratings of emotion intensity. The results show the importance of considering individual emotions in studies on age-related differences in emotion recognition.

6.
Nonverbal vocal expressions, such as laughter, sobbing, and screams, are an important source of emotional information in social interactions. However, the investigation of how we process these vocal cues entered the research agenda only recently. Here, we introduce a new corpus of nonverbal vocalizations, which we recorded and submitted to perceptual and acoustic validation. It consists of 121 sounds expressing four positive emotions (achievement/triumph, amusement, sensual pleasure, and relief) and four negative ones (anger, disgust, fear, and sadness), produced by two female and two male speakers. For perceptual validation, a forced-choice task was used (n = 20), and ratings were collected for the eight emotions, valence, arousal, and authenticity (n = 20). We provide these data, detailed for each vocalization, for use by the research community. High recognition accuracy was found for all emotions (86%, on average), and the sounds were reliably rated as communicating the intended expressions. The vocalizations were measured for acoustic cues related to temporal aspects, intensity, fundamental frequency (f0), and voice quality. These cues alone provide sufficient information to discriminate between emotion categories, as indicated by statistical classification procedures; they are also predictors of listeners' emotion ratings, as indicated by multiple regression analyses. This set of stimuli seems to be a valuable addition to currently available expression corpora for research on emotion processing. It is suitable for behavioral and neuroscience research and might also be used in clinical settings for the assessment of neurological and psychiatric patients. The corpus can be downloaded from the Supplementary Materials.
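The classification analysis mentioned above can be illustrated with a short sketch. The Python code below is not the authors' code; the metadata.csv layout, the feature choices, and the file names are assumptions. It extracts a handful of acoustic cues (duration, mean intensity, and f0 statistics) from each vocalization and checks how well the cues alone separate the emotion categories.

```python
# Sketch (not the original analysis): discriminating emotion categories from a
# few acoustic cues, in the spirit of the classification procedure described above.
# Assumes a hypothetical metadata.csv with columns "path" (WAV file) and "emotion".
import numpy as np
import pandas as pd
import librosa
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

def acoustic_cues(path):
    """Duration, mean intensity (RMS), and mean/SD of f0 for one vocalization."""
    y, sr = librosa.load(path, sr=None)
    duration = librosa.get_duration(y=y, sr=sr)
    rms = float(librosa.feature.rms(y=y).mean())
    f0, voiced, _ = librosa.pyin(y, fmin=65.0, fmax=600.0, sr=sr)
    f0 = f0[~np.isnan(f0)]
    f0_mean = float(f0.mean()) if f0.size else 0.0
    f0_sd = float(f0.std()) if f0.size else 0.0
    return [duration, rms, f0_mean, f0_sd]

meta = pd.read_csv("metadata.csv")                   # hypothetical stimulus list
X = np.array([acoustic_cues(p) for p in meta["path"]])
y = meta["emotion"].to_numpy()

# Can the acoustic cues alone separate the eight categories?
clf = LinearDiscriminantAnalysis()
scores = cross_val_score(clf, X, y, cv=5)
print(f"Cross-validated classification accuracy: {scores.mean():.2f}")
```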

7.
Sensitivity to facial and vocal emotion is fundamental to children's social competence. Previous research has focused on children's facial emotion recognition, and few studies have investigated non-linguistic vocal emotion processing in childhood. We compared facial and vocal emotion recognition and processing biases in 4- to 11-year-olds and adults. Eighty-eight 4- to 11-year-olds and 21 adults participated. Participants viewed/listened to faces and voices (angry, happy, and sad) at three intensity levels (50%, 75%, and 100%); the vocal stimuli were non-linguistic tones. For each modality, participants completed an emotion identification task. Accuracy and bias for each emotion and modality were compared across 4- to 5-, 6- to 9- and 10- to 11-year-olds and adults. The results showed that children's emotion recognition improved with age; preschoolers were less accurate than other groups. Facial emotion recognition reached adult levels by 11 years, whereas vocal emotion recognition continued to develop in late childhood. Response bias decreased with age. For both modalities, sadness recognition was delayed across development relative to anger and happiness. The results demonstrate that developmental trajectories of emotion processing differ as a function of emotion type and stimulus modality. In addition, vocal emotion processing showed a more protracted developmental trajectory compared to facial emotion processing. The results have important implications for programmes aiming to improve children's socio-emotional competence.

8.
Facial emotion-recognition difficulties have been reported in school-aged children with behavior problems; little is known, however, about this association in preschool children or about vocal emotion recognition. The current study explored the association between facial and vocal emotion recognition and behavior problems in 3- to 6-year-old children. Fifty-seven children enriched for risk of behavior problems (41 recruited from the general population and 16 referred to local clinics for behavior problems) were each presented with a series of vocal and facial stimuli expressing different emotions (i.e., angry, happy, and sad) of low and high intensity. Parents rated children's externalizing and internalizing behavior problems. Vocal and facial emotion recognition accuracy was negatively correlated with externalizing, but not internalizing, behavior problems, independent of emotion type. Within the externalizing domain, the effects were independently associated with hyperactivity rather than with conduct problems. The results highlight the importance of using vocal as well as facial stimuli when studying the relationship between emotion recognition and behavior problems. Future studies should test the hypothesis that difficulties in responding to adult instructions and commands seen in children with attention deficit/hyperactivity disorder (ADHD) may be due to deficits in the processing of vocal emotions.

9.
Many authors have speculated about a close relationship between vocal expression of emotions and musical expression of emotions, but evidence bearing on this relationship has unfortunately been lacking. This review of 104 studies of vocal expression and 41 studies of music performance reveals similarities between the 2 channels concerning (a) the accuracy with which discrete emotions were communicated to listeners and (b) the emotion-specific patterns of acoustic cues used to communicate each emotion. The patterns are generally consistent with K. R. Scherer's (1986) theoretical predictions. The results can explain why music is perceived as expressive of emotion, and they are consistent with an evolutionary perspective on vocal expression of emotions. Discussion focuses on theoretical accounts and directions for future research.

10.
Previous research indicates that the specific emotions expressed by stimuli may be closely associated with their pleasing and arousing qualities, and this parallels psychomusicological research on the relationship between these two stimulus qualities. In light of this, the present research contends that the emotions expressed by musical stimuli are associated with their pleasing and arousing qualities. Sixty subjects rated 32 musical excerpts on 11-point scales representing the expression of eight specific emotions. Statistical analyses showed that these emotion ratings were predictable on the basis of 60 additional subjects' ratings of each excerpt in terms of 'liking' and 'arousal potential'. This indicates that ratings of liking and arousal potential are essentially similar to ratings of the specific emotions expressed by musical stimuli. These results are discussed in terms of the relationship between liking and arousal potential, and the implications that this may have for research on affective processes.
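A minimal sketch of the kind of prediction described above: regressing each excerpt's mean rating on a specific emotion onto the other listeners' mean 'liking' and 'arousal potential' ratings. The file name and column names below are assumptions, not the original materials, and a separate model would be fitted for each of the eight emotions.

```python
# Sketch: predicting per-excerpt emotion ratings from liking and arousal potential.
import pandas as pd
import statsmodels.api as sm

ratings = pd.read_csv("excerpt_means.csv")          # hypothetical: one row per excerpt
X = sm.add_constant(ratings[["liking", "arousal_potential"]])
model = sm.OLS(ratings["sadness_rating"], X).fit()  # repeat for each rated emotion
print(model.rsquared, model.params)
```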

11.
This study investigated children's understanding of emotion in dance movements. Professional dancers were instructed to improvise on the emotions of joy, anger, fear, and sadness and to transform these improvisations into short solo dances, which were recorded on video. Eight performances were selected for use as stimuli. Children, aged 4, 5, and 8 years, and adults watched these performances and indicated which of the four emotions they perceived in the respective performance. All age groups achieved recognition scores well above chance level. As a rule, 4-year-olds' recognition was inferior to that of the other age groups, but in some cases either girls or boys of this age achieved recognition as good as that of one or more of the other age groups. The 5-year-old children achieved recognition levels close to those obtained for 8-year-olds and adults. A cue analysis based on Laban movement analysis suggested that force and tempo in movement were the key factors for emotion recognition.

12.
The ability to recognize musical emotions is a basic precondition for using music for emotion regulation. Traditional Chinese folk music, built on the pentatonic scale and carrying its own distinctive character, reflects emotions and values unique to Chinese listeners; it plays a positive role in emotion regulation and music therapy and is therefore an effective musical stimulus for studying musical emotion recognition. Using a cross-modal emotional priming paradigm, this study screened participants with the Interpersonal Reactivity Index into high- and low-empathy groups (36 participants each) for an EEG experiment examining the influence of empathy on the recognition of emotion in Chinese folk music. The EEG data showed that, during implicit recognition of emotion in Chinese folk music, gong-mode and yu-mode music used as priming stimuli elicited a mid-latency P2 and N400 as well as a late positive component (LPC). The amplitudes of the P2 and N400 components were larger in the low-empathy group than in the high-empathy group, whereas the LPC amplitude was larger in the high-empathy group. This study is the first to examine, at the electrophysiological level, differences in neural responses between individuals with different levels of empathy during emotion recognition in Chinese folk music. The attentional engagement of the high- and low-empathy groups at different stages of emotion recognition may have affected how they experienced the musical stimuli and, in turn, their recognition of musical emotions.

13.
This study reports on co-occurrence of vocal behaviors and motor actions in infants in the prelinguistic stage. Four Japanese infants were studied longitudinally from the age of 6 months to 11 months. For all the infants, a 40-min sample was coded for each monthly period. The vocalizations produced by the infants co-occurred with their rhythmic actions with high frequency, particularly in the period preceding the onset of canonical babbling. Acoustical analysis was conducted on the vocalizations recorded before and after the period when co-occurrence took place most frequently. Among the vocalizations recorded in the period when co-occurrence appeared most frequently, those that co-occurred with rhythmic action had significantly shorter syllable duration and shorter formant-frequency transition duration compared with those that did not co-occur with rhythmic action. The rapid transitions and short syllables were similar to patterns of duration found in mature speech. The acoustic features remained even after co-occurrence disappeared. These findings suggest that co-occurrence of rhythmic action and vocal behavior may contribute to the infant's acquisition of the ability to perform the rapid glottal and articulatory movements that are indispensable for spoken language acquisition.

14.
王异芳, 苏彦捷, 何曲枝. 心理学报 (Acta Psychologica Sinica), 2012, 44(11): 1472-1478
Starting from two cues in speech, prosody and semantics, this study examined the developmental characteristics of preschool children's emotion perception from vocal cues. In Experiment 1, 124 children aged 3 to 5 years judged the emotion category of semantically neutral sentences spoken by male and female speakers in five different emotional tones (happy, angry, fearful, sad, and neutral). Children's ability to perceive emotion from vocal prosody improved steadily with age between 3 and 5 years, mainly for the angry, fearful, and neutral tones. The developmental trajectories for the different emotions were not identical: overall, happy prosody was the easiest to recognize, whereas fear was the hardest. When prosodic and semantic cues conflicted, preschoolers relied more on prosody to judge the speaker's emotional state. Participants were more sensitive to emotions expressed by female voices.

15.
Twenty subjects were tested on their ability to recognize simple tunes from which rhythm information had been removed. Only the first phrase of each tune was presented. The purpose of the experiment was (a) to determine whether stimuli containing only high harmonics can evoke a sense of musical pitch, and (b) to provide a set of data in normal subjects with which the performance of deaf subjects whose auditory nerve is stimulated electrically can be compared. Each subject was tested on five sets of stimuli presented in a counterbalanced order. These stimuli were (1) pulse trains high-pass filtered at 2 kHz, with repetition rates in the range of 100-200 p.p.s.; (2) as in (1) but high-pass filtered at 4 kHz; (3) sinusoids with musical intervals compressed, so that the "octave" was a ratio of 1:1.3; (4) sinusoids with the musical intervals expanded, so that the "octave" was a ratio of 1:4; (5) sinusoids of a constant frequency in which the normal frequency changes were translated into intensity changes, each semitone being represented by a 3 dB change in level. The results indicate that a pattern of intensity changes does not support tune recognition, and that, although the pitch contour alone allows reasonable performance, subjects do use musical interval information in recognizing tunes. Stimuli containing only high harmonics can provide such interval information, and thus can evoke a sense of musical pitch. Preliminary results from a deaf subject stimulated electrically with an electrode on the surface of the cochlea indicate that such stimulation can also evoke a sense of musical pitch. It is concluded that musical pitch information can be carried in the time-pattern of nerve impulses in the auditory nerve.
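Two of the stimulus types described above can be sketched in a few lines. The code below is an illustration of the construction, not the original stimulus-generation procedure; the sample rate, filter order, base frequency, and example melody are assumptions. It generates a high-pass filtered pulse train and a sinusoid whose musical intervals are compressed so that the "octave" becomes a 1:1.3 frequency ratio.

```python
# Sketch of two stimulus types: a high-pass filtered click train, and a
# compressed-interval sinusoid (one "octave" spans a 1:1.3 frequency ratio).
import numpy as np
from scipy.signal import butter, sosfilt

SR = 44100  # sample rate in Hz (an assumption)

def pulse_train(rate_pps, dur=0.5, highpass_hz=2000.0):
    """Click train at rate_pps pulses per second, high-pass filtered."""
    x = np.zeros(int(SR * dur))
    x[::int(SR / rate_pps)] = 1.0                   # unit impulses
    sos = butter(4, highpass_hz, btype="highpass", fs=SR, output="sos")
    return sosfilt(sos, x)

def compressed_tone(semitones_above_base, base_hz=262.0, dur=0.5, octave_ratio=1.3):
    """Sinusoid on a compressed pitch scale: 12 'semitones' span octave_ratio."""
    freq = base_hz * octave_ratio ** (semitones_above_base / 12.0)
    t = np.arange(int(SR * dur)) / SR
    return np.sin(2 * np.pi * freq * t)

# e.g. the opening notes of a tune rendered as compressed-interval sinusoids
melody = np.concatenate([compressed_tone(s) for s in [0, 0, 7, 7, 9, 9, 7]])
clicks = pulse_train(150)                            # 150 p.p.s., filtered at 2 kHz
```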

16.
Infants' prelinguistic vocalizations reliably organize vocal turn-taking with social partners, creating opportunities for learning to produce the sound patterns of the ambient language. This social feedback loop supporting early vocal learning is well-documented, but its developmental origins have yet to be addressed. When do infants learn that their non-cry vocalizations influence others? To test developmental changes in infant vocal learning, we assessed the vocalizations of 2- and 5-month-old infants in a still-face interaction with an unfamiliar adult. During the still-face, infants who have learned the social efficacy of vocalizing increase their babbling rate. In addition, to assess the expectations for social responsiveness that infants build from their everyday experience, we recorded caregiver responsiveness to their infants' vocalizations during unstructured play. During the still-face, only 5-month-old infants showed an increase in vocalizing (a vocal extinction burst), indicating that they had learned to expect adult responses to their vocalizations. Caregiver responsiveness predicted the magnitude of the vocal extinction burst for 5-month-olds. Because 5-month-olds show a vocal extinction burst with unfamiliar adults, they must have generalized the social efficacy of their vocalizations beyond their familiar caregiver. Caregiver responsiveness to infant vocalizations during unstructured play was similar for 2- and 5-month-olds. Infants thus learn the social efficacy of their vocalizations between 2 and 5 months of age. During this time, infants build associations between their own non-cry sounds and the reactions of adults, which allows learning of the instrumental value of vocalizing.

17.
Previous research has demonstrated that even brief exposures to facial expressions of emotions elicit facial mimicry in receivers in the form of corresponding facial muscle movements. Likewise, vocal and verbal patterns of speakers converge in conversations, a form of vocal mimicry. There is also evidence of cross-modal mimicry in which emotional vocalizations elicit corresponding facial muscle activity. Further, empathic capacity has been associated with an enhanced tendency towards facial mimicry as well as verbal synchrony. We investigated a type of potential cross-modal mimicry in a simulated dyadic situation. Specifically, we examined the influence of facial expressions of happy, sad, and neutral emotions on the vocal pitch of receivers, and its potential association with empathy. Results indicated that whereas both mean pitch and variability of pitch varied somewhat in the predicted directions, empathy was correlated with the difference in the variability of pitch while speaking to the sad and neutral faces. Discussion of the results considers the dimensional nature of emotional vocalizations and possible future directions.

18.
In comparison with other modalities, the recognition of emotion in music has received little attention. An unexplored question is whether and how emotion recognition in music changes as a function of ageing. In the present study, healthy adults aged between 17 and 84 years (N = 114) judged the extent to which a set of musical excerpts (Vieillard et al., 2008) expressed happiness, peacefulness, sadness and fear/threat. The results revealed emotion-specific age-related changes: advancing age was associated with a gradual decrease in responsiveness to sad and scary music from middle age onwards, whereas the recognition of happiness and peacefulness, both positive emotional qualities, remained stable from young adulthood to older age. Additionally, the number of years of music training was associated with more accurate categorisation of the musical emotions examined here. We argue that these findings are consistent with two accounts of how ageing might influence the recognition of emotions: motivational changes towards positivity and, to a lesser extent, selective neuropsychological decline.

19.
Emotion recognition is mediated by a complex network of cortical and subcortical areas, with the two hemispheres likely being differently involved in processing positive and negative emotions. As results on valence-dependent hemispheric specialisation are quite inconsistent, we carried out three experiments with emotional stimuli, using a task sensitive to hemisphere-specific processing. Participants were required to bisect visual lines that were delimited by emotional face flankers, or to haptically bisect rods while concurrently listening to emotional vocal expressions. We found that prolonged (but not transient) exposure to concurrent happy stimuli significantly shifted the bisection bias to the right compared to both sad and neutral stimuli, indexing a greater involvement of the left hemisphere in processing of positively connoted stimuli. No differences between sad and neutral stimuli were observed across the experiments. In sum, our data provide consistent evidence in favour of a greater involvement of the left hemisphere in processing positive emotions and suggest that (prolonged) exposure to stimuli expressing happiness significantly affects allocation of (spatial) attentional resources, regardless of the sensory (visual/auditory) modality in which the emotion is perceived and space is explored (visual/haptic).

20.
This work reports a longitudinal evaluation of the temporal relationships between gaze and vocal behavior addressed to interactive partners (mother or experimenter) in a free-play situation. Thirteen children were observed at the ages of 1;0 and 1;8 during laboratory sessions, and video recordings of free-play interactions with the mother and a female experimenter were coded separately for children's vocal behavior (vocalizations and words) and gaze toward their interactive partners. The difference between the observed and expected co-occurrence of these two communicative behaviors was evaluated by transformation into z-scores. The most important findings are related to differences in the temporal relationship observed at age 1;0 between gaze and vocalizations and at age 1;8 between gaze and words. At the earlier age, the infants who exhibited greater coordination between gaze and vocal behavior than was expected by chance (z-score > +1.96) preferred to look at the interlocutor at the beginning of the vocal turn. In contrast, when they were older and began to produce words, they frequently looked at the interlocutor at the end of the vocal turn. These results are interpreted as referring to characteristics of conversational competence in the prelinguistic and linguistic periods. Moreover, looking at the interlocutor at the beginning of the vocal turn at age 1;0 was found to be related to language production at age 1;8, highlighting a significant relationship between conversational competence during the prelinguistic period and language acquisition.
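The z-score contrast described above can be sketched as follows, assuming simple interval-based coding in which each time interval is marked for the presence of gaze and of vocal behavior; the exact expected-frequency correction used in the study may differ.

```python
# Sketch: z-score for observed vs expected co-occurrence of two coded behaviors,
# assuming independence across equal-length coding intervals.
import numpy as np

def cooccurrence_z(gaze, vocal):
    """gaze, vocal: boolean arrays, one entry per coded time interval."""
    gaze = np.asarray(gaze, dtype=bool)
    vocal = np.asarray(vocal, dtype=bool)
    n = gaze.size
    observed = int(np.sum(gaze & vocal))
    p = gaze.mean() * vocal.mean()         # joint probability under independence
    expected = n * p
    variance = n * p * (1 - p)             # binomial variance of the joint count
    return (observed - expected) / np.sqrt(variance)

# A value above +1.96 would indicate more gaze-vocalization co-occurrence than
# expected by chance, as in the criterion mentioned in the abstract.
```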
