Similar Documents (20 results)
1.
Do children use the same properties as adults in determining whether music sounds happy or sad? We addressed this question with a set of 32 excerpts (16 happy and 16 sad) taken from pre-existing music. The tempo (i.e. the number of beats per minute) and the mode (i.e. the specific subset of pitches used to write a given musical excerpt) of these excerpts were modified independently and jointly in order to measure their effects on happy-sad judgments. Adults and children from 3 to 8 years old were asked to judge whether the excerpts were happy or sad. The results show that, like adults, 6- to 8-year-old children are affected by both mode and tempo manipulations. In contrast, 5-year-olds' responses are affected only by a change of tempo. The youngest children (3- to 4-year-olds) failed to distinguish the happy from the sad tone of the music above chance. The results indicate that tempo is mastered earlier than mode as a cue to the emotional tone conveyed by music.

2.
Judgement of the emotion conveyed by music is determined notably by mode (major-minor) and tempo (fast-slow). This suggestion was examined using the same set of equitone melodies in two experiments. Melodies were presented to nonmusicians, who were required to judge whether the melodies sounded “happy” or “sad” on a 10-point scale. In order to assess the specific and relative contributions of mode and tempo to these emotional judgements, the melodies were manipulated so that the only varying characteristic was either the mode or the tempo in two “isolated” conditions. In two further conditions, mode and tempo manipulations were combined so that mode and tempo either converged towards the same emotion (Convergent condition) or suggested opposite emotions (Divergent condition). The results confirm that both mode and tempo determine the “happy-sad” judgements in isolation, with tempo being the more salient cue, even when tempo salience was adjusted. The findings further support the view that, in music, structural features that are emotionally meaningful are easy to isolate, and that music is an effective and reliable medium for studying emotions.

3.
This experiment addressed the question of whether children's own emotional states influence their accuracy in recognizing emotional states in peers and any motives they may have to intervene in order to change their peers' emotional states. Happiness, sadness, anger, or a neutral state were induced in preschool children, who then viewed slides of other 4-year-old children who were actually experiencing each of those states. Children's own emotional states influenced only their perception of sadness in peers. Sad emotional states promoted systematic inaccuracies in the perception of sadness, causing children to mislabel sadness in peers as anger. Children had high base rates for using the label “happy,” and this significantly enhanced their accuracy in recognizing that state. Low base rates for labeling others as in a neutral state reduced accuracy in recognizing neutrality. Children were generally motivated to change sad, angry, and neutral states in peers, and they were most motivated to change a peer's state if they were to be the agent of such change. The results are discussed in terms of the limited role of children's own emotional states in their recognition of emotion in others or motives to intervene and in terms of factors influencing the perception of emotion, such as base rate preferences for labeling others as experiencing, or not experiencing, particular emotional states.

4.
When and how does one learn to associate emotion with music? This study attempted to address this issue by examining whether preschool children use tempo as a cue in determining whether a song is happy or sad. Instrumental versions of children's songs were played at different tempos to adults and children ages 3 to 5 years. Familiar and unfamiliar songs were used to examine whether familiarity affected children's identification of emotion in music. The results indicated that adults, 4-year-olds, and 5-year-olds rated fast songs as significantly happier than slow songs. However, 3-year-olds failed to rate fast songs differently from slow songs at above-chance levels. Familiarity did not significantly affect children's identification of happiness and sadness in music.
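The above-chance criterion used with the youngest children amounts to an exact binomial test against guessing. A minimal stdlib-only sketch; the counts below are invented for illustration and are not taken from the study:

```python
from math import comb

def above_chance_p(correct: int, trials: int, chance: float = 0.5) -> float:
    """One-sided exact binomial p-value: the probability of scoring
    `correct` or more out of `trials` if the child were merely
    guessing at the `chance` level."""
    return sum(comb(trials, k) * chance**k * (1 - chance)**(trials - k)
               for k in range(correct, trials + 1))

# Hypothetical example: 24 of 32 happy/sad judgments correct.
# A small p-value indicates performance above chance.
p = above_chance_p(24, 32)
```

With two response options, chance is 0.5; a child at exactly chance (e.g., 16 of 32) yields a large p-value and would be classified as failing to discriminate.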

5.
Older adults perceive less intense negative emotion in facial expressions compared to younger counterparts. Prior research has also demonstrated that mood alters facial emotion perception. Nevertheless, there is little evidence evaluating the interactive effects of age and mood on emotion perception. This study investigated the effects of sad mood on younger and older adults’ perception of emotional and neutral faces. Participants rated the intensity of stimuli while listening to sad music and in silence. Measures of mood were administered. Younger and older participants rated sad faces as displaying stronger sadness when they experienced sad mood. While younger participants showed no influence of sad mood on happiness ratings of happy faces, older adults rated happy faces as conveying less happiness when they experienced sad mood. This study demonstrates how emotion perception can change when a controlled mood induction procedure is applied to alter mood in young and older participants.

6.
Effects of emotional state on lexical decision performance
The effect of emotional state on lexical processing was investigated. Subjects were randomly assigned to either a happy or sad mood condition. Emotional state was then induced by listening to 8 min of classical music previously rated to induce happy or sad moods. Response times and error rates were analyzed in a lexical decision task involving sad words, happy words, and pseudowords. Results suggest that emotion aided the participants in responding to emotion-congruent stimuli. The sad group responded faster than the happy group to sad words and the happy group responded faster than the sad group to happy words. Results are discussed with regard to information processing and emotion.
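The congruence effect reported here reduces to a contrast between mean response times on mood-congruent and mood-incongruent word trials, with pseudowords excluded. A toy sketch; the trial tuples and field names are illustrative assumptions, not data from the study:

```python
from statistics import mean

def congruency_effect(trials):
    """trials: list of (mood, word_type, rt_ms) tuples.
    Returns mean incongruent RT minus mean congruent RT; a positive
    value means congruent words were responded to faster."""
    congruent = [rt for mood, word, rt in trials if mood == word]
    incongruent = [rt for mood, word, rt in trials
                   if word in ("happy", "sad") and mood != word]
    return mean(incongruent) - mean(congruent)

# Hypothetical trials from one sad-induced subject;
# the pseudoword trial is excluded from the contrast.
data = [("sad", "sad", 540), ("sad", "happy", 610),
        ("sad", "sad", 560), ("sad", "happy", 590),
        ("sad", "pseudo", 700)]
effect = congruency_effect(data)  # 50.0 ms in this toy data
```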

7.
Biological motion perception can be assessed using a variety of tasks. In the present study, 8- to 11-year-old children born prematurely at very low birth weight (<1500 g) and matched, full-term controls completed tasks that required the extraction of local motion cues, the ability to perceptually group these cues to extract information about body structure, and the ability to carry out higher order processes required for action recognition and person identification. Preterm children exhibited difficulties in all 4 aspects of biological motion perception. However, intercorrelations between test scores were weak in both full-term and preterm children—a finding that supports the view that these processes are relatively independent. Preterm children also displayed more autistic-like traits than full-term peers. In preterm (but not full-term) children, these traits were negatively correlated with performance in the task requiring structure-from-motion processing, r(30) = −.36, p < .05, but positively correlated with the ability to extract identity, r(30) = .45, p < .05. These findings extend previous reports of vulnerability in systems involved in processing dynamic cues in preterm children and suggest that a core deficit in social perception/cognition may contribute to the development of the social and behavioral difficulties even in members of this population who are functioning within the normal range intellectually. The results could inform the development of screening, diagnostic, and intervention tools.

8.
This study investigated the effects of emotional response on an inhibitory task, the Stroop‐like day‐night task, in which participants are presented with two pictures. They are then requested to inhibit naming what the card shown to them represents and instead state what the other card represents. Specifically, 35 4‐ to 6‐year‐old children and 15 young adults were administered the emotion‐related happy‐sad task and the emotion‐unrelated up‐down task using the same stimulus set (happy and sad cartoon faces). The results suggested that vulnerability to errors in the happy‐sad task was not derived from increased inhibitory demand. The results also suggested that the happy‐sad task is more inhibitory‐demanding in terms of response time. These results suggested that the happy‐sad task elicits interference more than other variants of this task, not because the task involves emotional stimuli per se but because the task involves both emotional stimuli and emotional responses.

9.
Music in the major mode is often associated with happy feelings, which could enhance task performance, compared with that in the minor mode, which is associated more with sadness. Male and female participants (N = 48) completed written verbal and spatial reasoning tests while a piece of music in F major by Handel was being played, and again when the same piece was digitally manipulated to create a version in the minor mode. The confounding variable of using two different compositions was thus avoided. Results showed that the music in the major mode was rated more emotionally positive by both sexes than was the minor mode version (p ≤ .001). Performance by females on verbal tasks was significantly enhanced with major mode music, compared with the minor (p = .018), but there were no such findings for other combinations of sex and task. Also with major mode music only, there were trends for females to score higher than males on verbal tasks, and for males to score the highest on spatial tasks. Reasons for the research findings are suggested.

10.
Contradicting evidence exists regarding the link between loneliness and sensitivity to facial cues of emotion, as loneliness has been related to better but also to worse performance on facial emotion recognition tasks. This study aims to contribute to this debate and extends previous work by (a) focusing on both accuracy and sensitivity to detecting positive and negative expressions, (b) controlling for depressive symptoms and social anxiety, and (c) using an advanced emotion recognition task with videos of neutral adolescent faces gradually morphing into full-intensity expressions. Participants were 170 adolescents (49% boys; Mage = 13.65 years) from rural, low-income schools. Results showed that loneliness was associated with increased sensitivity to happy, sad, and fear faces. When controlling for depressive symptoms and social anxiety, loneliness remained significantly associated with sensitivity to sad and fear faces. Together, these results suggest that lonely adolescents are vigilant to negative facial cues of emotion.

11.
Some evidence indicates that emotional reactions to music can be organized along a bipolar valence dimension ranging from pleasant states (e.g., happiness) to unpleasant states (e.g., sadness), but songs can contain some cues that elicit happiness (e.g., fast tempos) and others that elicit sadness (e.g., minor modes). Some models of emotion contend that valence is a basic building block of emotional experience, which implies that songs with conflicting cues cannot make people feel happy and sad at the same time. Other models contend that positivity and negativity are separable in experience, which implies that music with conflicting cues might elicit simultaneously mixed emotions of happiness and sadness. Hunter, Schellenberg, and Schimmack (2008) tested these possibilities by having subjects report their happiness and sadness after listening to music with conflicting cues (e.g., fast songs in minor modes) and consistent cues (e.g., fast songs in major modes). Results indicated that music with conflicting cues elicited mixed emotions, but it remains unclear whether subjects simultaneously felt happy and sad or merely vacillated between happiness and sadness. To examine these possibilities, we had subjects press one button whenever they felt happy and another button whenever they felt sad as they listened to songs with conflicting and consistent cues. Results revealed that subjects spent more time simultaneously pressing both buttons during songs with conflicting, as opposed to consistent, cues. These findings indicate that songs with conflicting cues can simultaneously elicit happiness and sadness and that positivity and negativity are separable in experience.
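The simultaneity measure in this two-button design reduces to summing the time the happy-button and sad-button hold intervals overlap. A minimal sketch, assuming button holds are logged as (start, end) times; the interval values are invented for illustration:

```python
def both_pressed_seconds(happy_holds, sad_holds):
    """Total time both buttons were held at once.
    Each argument is a list of (start, end) times in seconds;
    intervals within one list are assumed not to overlap each other."""
    total = 0.0
    for h_start, h_end in happy_holds:
        for s_start, s_end in sad_holds:
            # Overlap of two intervals, clipped at zero when disjoint
            total += max(0.0, min(h_end, s_end) - max(h_start, s_start))
    return total

# Hypothetical holds during one song: overlap from 6.0 s to 9.0 s
mixed = both_pressed_seconds([(2.0, 9.0)], [(6.0, 12.0)])  # 3.0
```

Comparing this overlap total between conflicting-cue and consistent-cue songs distinguishes genuinely simultaneous mixed feelings from mere vacillation, which would produce alternating, non-overlapping holds.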

12.
Salient sensory experiences often have a strong emotional tone, but the neuropsychological relations between perceptual characteristics of sensory objects and the affective information they convey remain poorly defined. Here we addressed the relationship between sound identity and emotional information using music. In two experiments, we investigated whether perception of emotions is influenced by altering the musical instrument on which the music is played, independently of other musical features. In the first experiment, 40 novel melodies each representing one of four emotions (happiness, sadness, fear, or anger) were each recorded on four different instruments (an electronic synthesizer, a piano, a violin, and a trumpet), controlling for melody, tempo, and loudness between instruments. Healthy participants (23 young adults aged 18–30 years, 24 older adults aged 58–75 years) were asked to select which emotion they thought each musical stimulus represented in a four-alternative forced-choice task. Using a generalized linear mixed model we found a significant interaction between instrument and emotion judgement with a similar pattern in young and older adults (p < .0001 for each age group). The effect was not attributable to musical expertise. In the second experiment using the same melodies and experimental design, the interaction between timbre and perceived emotion was replicated (p < .05) in another group of young adults for novel synthetic timbres designed to incorporate timbral cues to particular emotions. Our findings show that timbre (instrument identity) independently affects the perception of emotions in music after controlling for other acoustic, cognitive, and performance factors.

13.
The authors investigated the effects of an induced emotional mood state on lexical decision task (LDT) performance in 50 young adults and 25 older adults. Participants were randomly assigned to either happy or sad mood induction conditions. An emotional mood state was induced by having the participants listen to 8 min of classical music previously rated to induce happy or sad moods. Results replicated previous studies with young adults (i.e., sad-induced individuals responded faster to sad words and happy-induced individuals responded faster to happy words) and extended this pattern to older adults. Results are discussed with regard to information processing, aging, and emotion.

15.
It is possible that the visual discrimination of emotion categories and emotion word vocabulary develop via common emotion-specific processes. In contrast, it is possible that they develop with vocabulary development more generally. This study contrasts these two possibilities. Twenty-three 26-month-olds participated in a visual perceptual discrimination task involving emotional facial expressions. After familiarization to a 100% happy face, toddlers were tested for their visual preference for a novel sad face in a side-by-side presentation paired with the familiar happy face. Parental report was used to quantify production of emotion words and vocabulary generally. Visual preference for the novel emotion (sad) in the discrimination task correlated with emotion word vocabulary size but not with overall vocabulary size.

16.
Participants in manipulated emotional states watched computerised movies in which facial expressions of emotion changed into categorically different expressions. The participants' task was to detect the offset of the initial expression. An effect of emotional state was observed such that individuals in happy states saw the offset of happiness (changing into sadness) at an earlier point in the movies than did those in sad states. Similarly, sad condition participants detected the offset of a sad expression changing into a happy expression earlier than did happy condition participants. This result is consistent with a proposed role of facial mimicry in the perception of change in emotional expression. The results of a second experiment provide additional evidence for the mimicry account. The Discussion focuses on the relationship between motor behaviour and perception.

17.
Previous research in the happy victimizer tradition indicated that preschool and early elementary school children attribute positive emotions to the violator of a moral norm, whereas older children attribute negative (moral) emotions. Cognitive and motivational processes have been suggested to underlie this developmental shift. The current research investigated whether making the happy victimizer task less cognitively demanding by providing children with alternative response formats would increase their attribution of moral emotions and moral motivation. In Study 1, 93 British children aged 4–7 years old responded to the happy victimizer questions either in a normal condition (where they spontaneously pointed with a finger), a wait condition (where they had to wait before giving their answers), or an arrow condition (where they had to point with a paper arrow). In Study 2, 40 Spanish children aged 4 years old responded to the happy victimizer task either in a normal or a wait condition. In both studies, participants' attribution of moral emotions and moral motivation was significantly higher in the conditions with alternative response formats (wait, arrow) than in the normal condition. The role of cognitive abilities for emotion attribution in the happy victimizer task is discussed.

18.
Multi-label tasks confound age differences in perceptual and cognitive processes. We examined age differences in emotion perception with a technique that did not require verbal labels. Participants matched the emotion expressed by a target to two comparison stimuli, one neutral and one emotional. Angry, disgusted, fearful, happy, and sad facial expressions of varying intensity were used. Although older adults took longer to respond than younger adults, younger adults only outmatched older adults for the lowest intensity disgust and fear expressions. Some participants also completed an identity matching task in which target stimuli were matched on personal identity instead of emotion. Although irrelevant to the judgment, expressed emotion still created interference. All participants were less accurate when the apparent difference in expressive intensity of the matched stimuli was large, suggesting that salient emotion cues increased difficulty of identity matching. Age differences in emotion perception were limited to very low intensity expressions.

19.
To examine the mechanisms of audiovisual processing of musical emotion and the influence of emotion type and musical training background on those mechanisms, this study used videos of musical performances expressing happiness and sadness, and compared musically trained and untrained participants' speed, accuracy, and intensity of emotion ratings under three conditions: auditory only, visual only, and audiovisual. The results showed that (1) the audiovisual condition differed significantly from the visual-only condition but not from the auditory-only condition, and (2) untrained participants identified sadness more accurately than trained participants but identified happiness less accurately. These findings suggest that the audiovisual integration advantage in processing musical emotion exists only relative to the visual channel; untrained listeners are more sensitive to changes in visual emotional information, whereas trained listeners rely more on musical experience; and adding coordinated visual emotional information to musical performances could help listeners without musical training.

20.
Functional hemispheric specialization in recognizing faces expressing emotions was investigated in 18 normal hearing and 18 congenitally deaf children aged 13-14 years. Three kinds of faces were presented: happy, to express positive emotions; sad, to express negative emotions; and neutral. The subjects' task was to recognize the test face exposed for 20 msec in the left or right visual field. The subjects answered by pointing at the exposed stimulus on the response card that contained three different faces. Errors committed for faces exposed in the left and right visual fields were analyzed. In the control group the right hemisphere dominated for sad and neutral faces. There were no significant differences in recognition of happy faces. The differentiated hemispheric organization pattern in normal hearing persons supports the hypothesis of different processing of positive and negative emotions expressed by faces. The observed hemispheric asymmetry was a result of two factors: (1) processing of faces as complex patterns requiring visuo-spatial analysis, and (2) processing of the emotions contained in them. Functional hemispheric asymmetry was not observed in the group of deaf children for any kind of emotion expressed in the presented faces. The results suggest that lack of auditory experience influences the organization of functional hemispheric specialization. It can be supposed that in deaf children, the analysis of information contained in emotional faces takes place in both hemispheres.


Copyright©北京勤云科技发展有限公司  京ICP备09084417号