Similar Literature
20 similar records found (search time: 15 ms)
1.
With over 560 citations reported on Google Scholar by April 2018, a publication by Juslin and Gabrielsson (1996) presented evidence supporting performers’ abilities to communicate, with high accuracy, their intended emotional expressions in music to listeners. Though related studies have been published on this topic, there has yet to be a direct replication of this paper. A replication is warranted given the paper’s influence in the field and the implications of its results. The present experiment joins the recent replication effort by producing a five-lab replication using the original methodology. Expressive performances of seven emotions (happy, sad, angry, etc.) by professional musicians were recorded using the same three melodies from the original study. Participants (N = 319) were presented with the recordings and rated on a 0–10 scale how well each emotion matched the emotional quality of each performance. The same instruments from the original study (i.e. violin, voice, and flute) were used, with the addition of piano. In an effort to increase the accessibility of the experiment and allow for a more ecologically valid environment, the recordings were presented using an internet-based survey platform. As an extension to the original study, this experiment investigated how musicality, emotional intelligence, and emotional contagion might explain individual differences in the decoding process. Results showed overall high decoding accuracy (57%) when using emotion ratings aggregated across the sample of participants, similar to the method of analysis from the original study. However, when decoding accuracy was scored for each participant individually, the average accuracy was much lower (31%). Unlike in the original study, the voice was found to be the most expressive instrument. Generalised linear mixed-effects regression modelling revealed that musical training and emotional engagement with music positively influence emotion decoding accuracy.

2.
Music is often described in terms of emotion. This notion is supported by empirical evidence showing that engaging with music is associated with subjective feelings, and with objectively measurable responses at the behavioural, physiological, and neural level. Some accounts, however, reject the idea that music may directly induce emotions. For example, the ‘paradox of negative emotion’, whereby music described in negative terms is experienced as enjoyable, suggests that music might move the listener through indirect mechanisms in which the emotional experience elicited by music does not always coincide with the emotional label attributed to it. Here we discuss the role of metaphor as a potential mediator in these mechanisms. Drawing on musicological, philosophical, and neuroscientific literature, we suggest that metaphor acts at key stages along and between physical, biological, cognitive, and contextual processes, and propose a model of music experience in which metaphor mediates between language, emotion, and aesthetic response.

3.
Young and old adults’ ability to recognize emotions from vocal expressions and music performances was compared. The stimuli consisted of (a) acted speech (anger, disgust, fear, happiness, and sadness; each posed with both weak and strong emotion intensity), (b) synthesized speech (anger, fear, happiness, and sadness), and (c) short melodies played on the electric guitar (anger, fear, happiness, and sadness; each played with both weak and strong emotion intensity). The listeners’ recognition of discrete emotions and emotion intensity was assessed and the recognition rates were controlled for various response biases. Results showed emotion-specific age-related differences in recognition accuracy. Old adults consistently received significantly lower recognition rates for negative, but not for positive, emotions for both speech and music stimuli. Some age-related differences were also evident in the listeners’ ratings of emotion intensity. The results show the importance of considering individual emotions in studies on age-related differences in emotion recognition.

4.
Research suggests that infants progress from discrimination to recognition of emotions in faces during the first half year of life. It is unknown whether the perception of emotions from bodies develops in a similar manner. In the current study, when presented with happy and angry body videos and voices, 5-month-olds looked longer at the matching video when they were presented upright but not when they were inverted. In contrast, 3.5-month-olds failed to match even with upright videos. Thus, 5-month-olds but not 3.5-month-olds exhibited evidence of recognition of emotions from bodies by demonstrating intermodal matching. In a subsequent experiment, younger infants did discriminate between body emotion videos but failed to exhibit an inversion effect, suggesting that discrimination may be based on low-level stimulus features. These results document a developmental change from discrimination based on non-emotional information at 3.5 months to recognition of body emotions at 5 months. This pattern of development is similar to face emotion knowledge development and suggests that both the face and body emotion perception systems develop rapidly during the first half year of life.

5.
In ultra-rapid categorization studies, population-level reaction time differences in performance are consistently reported. In a previous study, we replicated these findings and also observed consistent gender differences in young adults (18–24 years old). We now tested a group of adolescents (11–16 years old) on the same ultra-rapid categorization tasks. Results indicated that age had a significant impact on categorization performance. Although women outperformed men during adulthood, this effect reversed in adolescence (boys faster than girls). This gender × age interaction for categorizing meaningful (non-)social visual scenes could be caused by gender-specific developmental processes underlying emotion regulation strategies and/or context sensitivity.

6.
The Levels of Emotional Awareness Scale (LEAS) developed by Lane et al. (1990) measures the ability of a subject to discriminate his or her own emotional state and that of others. The scale is based on a cognitive‐developmental model in which emotional awareness increases in a similar fashion to intellectual functions. Because studies performed using North American and German populations have demonstrated an effect of age, gender, and level of education on the ability to differentiate emotional states, our study attempts to evaluate whether these factors have the same effects in a general French population. A total of 750 volunteers (506 female, 244 male), who were recruited from three regions of France (Lille, Montpellier, Paris), completed the LEAS. The sample was divided into five age groups and three education levels. The results of the LEAS scores for self and others and the total score showed a difference in the level of emotional awareness for different age groups, by gender and education level. A higher emotional level was observed for younger age groups, suggesting that emotional awareness depends on the cultural context and generational societal teachings. Additionally, the level of emotional awareness was higher in women than in men and lower in individuals with less education. This result might be explained by an educational bias linked to gender and higher education whereby expressive ability is reinforced. In addition, given the high degree of variability in previously observed scores in the French population, we propose a standard based on our French sample.

7.
The remarkable contributors to this special issue highlight the importance of developmental research on emotion and its regulation, as well as its conceptual and methodological challenges. This commentary offers some additional thoughts, especially concerning alternative views of the convergence of multiple measures of emotional responding, the conceptualization of emotion and emotion regulation, and future directions for work in this field. In the end, in light of the complex construction of emotion and its development, we may learn as much from studying the divergence among multiple components of emotional responding as we do from expectations of their convergence. In each case, some assembly is required.

8.
The tendency for emotions to be predictable over time, labelled emotional inertia, has been linked to low well-being and is thought to reflect impaired emotion regulation. However, almost no studies have examined how emotion regulation relates to emotional inertia. We examined the effects of cognitive reappraisal and expressive suppression on the inertia of behavioural, subjective and physiological measures of emotion. In Study 1 (N = 111), trait suppression was associated with higher inertia of negative behaviours. We replicated this finding experimentally in Study 2 (N = 186). Furthermore, in Study 2, instructed suppressors and reappraisers both showed higher inertia of positive behaviours, and reappraisers displayed higher inertia of heart rate. Neither suppression nor reappraisal was associated with the inertia of subjective feelings in either study. Thus, the effects of suppression and reappraisal on the temporal dynamics of emotions depend on the valence and emotional response component in question.

9.
The more immigrant minorities are exposed to the majority culture, the more their emotional pattern fits that of majority culture members—a phenomenon termed emotional acculturation. To assess emotional fit, earlier studies compared minorities’ emotional experience with that of separate samples of “distal” majority members in their country of residence. We added “proximal” fit with the emotional experience of majority members in their social environment. Drawing on large random samples of immigrant minority and majority youth in Belgian high schools (N = 2,543), our study aimed (i) to test majority culture exposure and contact as predictors of emotional fit and (ii) to distinguish emotional fit with distal and proximal variants of majority culture. Minorities’ majority culture exposure predicted both distal and proximal emotional fit. In addition, contact with majority peers better predicted proximal fit. Our findings suggest that emotional acculturation is socially grounded in interactions with proximal majority members.

10.
Over the last two decades, it has been established that children's emotion understanding changes as they develop. Recent studies have also begun to address individual differences in children's emotion understanding. The first goal of this study was to examine the development of these individual differences across a wide age range with a test assessing nine different components of emotion understanding. The second goal was to examine the relation between language ability and individual differences in emotion understanding. Eighty children ranging in age from 4 to 11 years were tested. Children displayed a clear improvement with age in both their emotion understanding and language ability. In each age group, there were clear individual differences in emotion understanding and language ability. Age and language ability together explained 72% of emotion understanding variance; 20% of this variance was explained by age alone and 27% by language ability alone. The results are discussed in terms of their theoretical and practical implications.

11.
There is strong evidence of shared acoustic profiles common to the expression of emotions in music and speech, yet relatively limited understanding of the specific psychoacoustic features involved. This study combined a controlled experiment and computational modelling to investigate the perceptual codes associated with the expression of emotion in the acoustic domain. The empirical stage of the study provided continuous human ratings of emotions perceived in excerpts of film music and natural speech samples. The computational stage created a computer model that retrieves the relevant information from the acoustic stimuli and makes predictions about the emotional expressiveness of speech and music close to the responses of human subjects. We show that a significant part of the listeners’ second-by-second reported emotions to music and speech prosody can be predicted from a set of seven psychoacoustic features: loudness, tempo/speech rate, melody/prosody contour, spectral centroid, spectral flux, sharpness, and roughness. The implications of these results are discussed in the context of cross-modal similarities in the communication of emotion in the acoustic domain.

12.
I discuss the merits and demerits of the contributions to the present issue as I see them, and their implications for emotions research.

13.
This study examined the perception of emotional expressions, focusing on the face and the body. Photographs of four actors expressing happiness, sadness, anger, and fear were presented in congruent (e.g., happy face with happy body) and incongruent (e.g., happy face with fearful body) combinations. Participants selected an emotional label using a four-option categorisation task. Reaction times and accuracy for the categorisation judgement, and eye movements were the dependent variables. Two regions of interest were examined: face and body. Results showed better accuracy and faster reaction times for congruent images compared to incongruent images. Eye movements showed an interaction in which there were more fixations and longer dwell times to the face and fewer fixations and shorter dwell times to the body with incongruent images. Thus, conflicting information produced a marked effect on information processing in which participants focused to a greater extent on the face compared to the body.

14.
This study examined relationships among parents’ physiological regulation, their emotion socialization behaviors, and their children’s emotion knowledge. Parents’ resting cardiac vagal tone was measured, and parents provided information regarding their socialization behaviors and family emotional expressiveness. Their 4- or 5-year-old children (N = 42) participated in a laboratory session in which their knowledge of emotional facial expressions and situations was tested and their own resting vagal tone was monitored. Results showed that parents’ vagal tone was related to their socialization behaviors, and several parent socialization variables were related to their children’s emotion knowledge. These findings suggest that parents’ physiological regulation may affect the emotional development of their children by influencing their parenting behaviors.

15.
This study examined gender differences in emotion word use during mother–child and father–child conversations. Sixty‐five Spanish mothers and fathers and their 4‐year‐old (M = 53.50 months, SD = 3.54) and 6‐year‐old (M = 77.07 months, SD = 3.94) children participated in this study. Emotion talk was examined during a play‐related storytelling task and a reminiscence task (conversation about past experiences). Mothers mentioned a higher proportion of emotion words than did fathers. During the play‐related storytelling task, mothers of 4‐year‐old daughters mentioned a higher proportion of emotion words than did mothers of 4‐year‐old sons, whereas during the reminiscence task fathers directed a higher proportion of emotion words to 4‐year‐old daughters than to 4‐year‐old sons. No gender differences were found with parents of 6‐year‐old children. During the reminiscence task, daughters mentioned more emotion words with their fathers than with their mothers. Finally, mothers' use of emotion talk was related to children's use of emotion talk in both tasks, whereas fathers' use of emotion talk was related to children's emotion talk only during the reminiscence task.

16.
17.
Facial attributes such as race, sex, and age can interact with emotional expressions; however, only a couple of studies have investigated the nature of the interaction between facial age cues and emotional expressions, and these have produced inconsistent results. Additionally, these studies have not addressed the mechanism(s) driving the influence of facial age cues on emotional expression or vice versa. In the current study, participants categorised young and older adult faces expressing happiness and anger (Experiment 1) or sadness (Experiment 2) by their age and their emotional expression. Age cues moderated categorisation of happiness vs. anger and sadness in the absence of an influence of emotional expression on age categorisation times. This asymmetrical interaction suggests that facial age cues are obligatorily processed prior to emotional expressions. Finding a categorisation advantage for happiness expressed on young faces relative to both anger and sadness, which are negative in valence but differ in their congruence with old-age stereotypes or structural overlap with age cues, suggests that the observed influence of facial age cues on emotion perception is due to the congruence between relatively positive evaluations of young faces and happy expressions.

18.
19.
We examined the moderating influence of dispositional behavioral inhibition system (BIS) and behavioral activation system (BAS) sensitivities on the relationship of startling background music with emotion-related subjective and physiological responses elicited during reading news reports, and with memory performance among 26 adult men and women. Physiological parameters measured were respiratory sinus arrhythmia (RSA), electrodermal activity (EDA), and facial electromyography (EMG). The results showed that, among high BAS individuals, news stories with startling background music were rated as more interesting and elicited higher zygomatic EMG activity and RSA than news stories with non-startling music. Among low BAS individuals, news stories with startling background music were rated as less pleasant and more arousing and prompted higher EDA. No BIS-related effects or effects on memory were found. Startling background music may have adverse (e.g., negative arousal) or beneficial effects (e.g., a positive emotional state and stronger positive engagement) depending on the dispositional BAS sensitivity of an individual. Actual or potential applications of this research include the personalization of media presentations when using modern media and communications technologies.

20.
The current study examined age and gender effects on spiritual development among early adolescents. A total sample of 416 Czech adolescents, aged 11 to 15 years, was analysed for the study. Data were collected employing a non-experimental survey design by utilizing a self-administered questionnaire. A series of independent t-tests was performed to determine whether there were significant age and gender differences across the spirituality indicators: spiritual well-being, spiritual belief, and experiential spirituality. Results indicated that 11-year-old adolescents were more likely to demonstrate a higher level of spiritual well-being and spiritual belief than 15-year-olds, while 15-year-old adolescents were more likely to score high in experiential spirituality than their younger counterparts. Regarding gender, girls were more likely than boys to demonstrate higher spirituality scores. Practitioners in education and psychology should be mindful of the use of spirituality interventions, applying the respective forms and practices according to age and gender to better promote positive youth development.


Copyright © Beijing Qinyun Technology Development Co., Ltd.  京ICP备09084417号