Similar Articles
20 similar articles retrieved (search time: 31 ms)
1.
The aim of this work was to investigate perceived loudness change in response to melodies that increase (up-ramp) or decrease (down-ramp) in acoustic intensity, and the interaction with other musical factors such as melodic contour, tempo, and tonality (tonal/atonal). A within-subjects design manipulated direction of linear intensity change (up-ramp, down-ramp), melodic contour (ascending, descending), tempo, and tonality, using single ramp trials and paired ramp trials, where single up-ramps and down-ramps were assembled to create continuous up-ramp/down-ramp or down-ramp/up-ramp pairs. Twenty-nine (Exp 1) and thirty-six (Exp 2) participants rated loudness continuously in response to trials with monophonic 13-note piano melodies lasting either 6.4 s or 12 s. Linear correlation coefficients > .89 between loudness and time show that time-series loudness responses to dynamic up-ramp and down-ramp melodies are essentially linear across all melodies. Therefore, 'indirect' loudness change, derived from the difference in loudness at the beginning and end points of the continuous response, was calculated. Down-ramps were perceived to change significantly more in loudness than up-ramps in both tonalities and at a relatively slow tempo. Loudness change was also greater for down-ramps presented with a congruent descending melodic contour, relative to an incongruent pairing (down-ramp and ascending melodic contour). No differential effect of intensity ramp/melodic contour congruency was observed for up-ramps. In paired ramp trials assessing the possible impact of ramp context, loudness change in response to up-ramps was significantly greater when preceded by down-ramps than when not preceded by another ramp. Ramp context did not affect down-ramp perception. The contributions to the fields of music perception and psychoacoustics are discussed in the context of real-time perception of music, principles of music composition, and performance of musical dynamics.
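As an illustration of the measures reported above (the linearity check against time and the 'indirect' loudness change taken as the end-minus-start difference of the continuous rating), a minimal Python sketch is given below. It assumes a uniformly sampled rating series and uses fabricated data; it is not the authors' analysis code.

import numpy as np

def indirect_loudness_change(ratings, duration_s):
    # ratings: continuous loudness ratings sampled uniformly across one trial
    ratings = np.asarray(ratings, dtype=float)
    t = np.linspace(0.0, duration_s, len(ratings))
    r = np.corrcoef(t, ratings)[0, 1]      # linearity check (the abstract reports r > .89)
    change = ratings[-1] - ratings[0]      # 'indirect' change: end point minus start point
    return r, change

# Fabricated ratings for a 6.4-s up-ramp melody, purely for illustration
rng = np.random.default_rng(1)
ratings = np.linspace(20, 80, 64) + rng.normal(0, 2, 64)
r, change = indirect_loudness_change(ratings, duration_s=6.4)
print(f"r = {r:.2f}, indirect loudness change = {change:.1f} rating units")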

2.
Past research has shown that variations in musical tempo influence the perceived rate of visual motion. The goal here was to investigate whether this effect is influenced by audiovisual affect. Participants were presented with montages (slideshows) of positive or negative scenes accompanied by positive or negative music whose rate was either the same as, or 15% faster or slower than that of the montage. The results of a subsequent recognition task showed a higher false alarm rate to faster and slower visual scenes in the presence of accelerated and decelerated soundtracks, respectively. Moreover, the magnitude of these effects significantly increased when music-montage pairs displayed a positive and negative affect, respectively. In contrast, variations in visual rate exerted no influence on auditory rate recognition. These findings have implications for audiovisual art forms as well as theories of cross-modal perception.

3.
Cassidy, G.G. & MacDonald, R.A.R. (2010). The effects of music on time perception and performance of a driving game. Scandinavian Journal of Psychology 51, 455–464. There is an established and growing body of evidence highlighting that music can influence behavior across a range of diverse domains (Miell, MacDonald, & Hargreaves, 2005). One area of interest is the monitoring of "internal timing mechanisms", with features such as tempo, liking, perceived affective nature, and everyday listening contexts implicated as important (North & Hargreaves, 2008). The current study addresses these issues by comparing the effects of self-selected and experimenter-selected music (fast and slow) on actual and perceived performance of a driving game activity. Seventy participants completed three laps of a driving game in seven sound conditions: (1) silence; (2) car sounds; (3) car sounds with self-selected music; and car sounds with experimenter-selected music that was (4) high-arousal at 70 bpm, (5) high-arousal at 130 bpm, (6) low-arousal at 70 bpm, or (7) low-arousal at 130 bpm. Six performance measures (time, accuracy, speed, and retrospective perception of these) and four experience measures (perceived distraction, liking, appropriateness, and enjoyment) were taken. Exposure to self-selected music resulted in overestimation of elapsed time and inaccuracy, while benefiting accuracy and experience. In contrast, exposure to experimenter-selected music resulted in the poorest performance and experience. Increasing the tempo of experimenter-selected music resulted in faster performance and increased inaccuracy for high-arousal music, but did not impact experience. It is suggested that personal meaning and subjective associations connected to self-selected music promoted increased engagement with the activity, overriding detrimental effects attributed to unfamiliar, less liked, and less appropriate experimenter-selected music.

4.
Accurate perception and production of emotional states is important for successful social interactions across the lifespan. Previous research has shown that when identifying emotion in faces, preschool children are more likely to confuse emotions that share valence, but differ in arousal (e.g. sadness and anger) than emotions that share arousal, but differ on valence (e.g. anger and joy). Here, we examined the influence of valence and arousal on children's production of emotion in music. Three-, 5- and 7-year-old children recruited from the greater Hamilton area (N = 74) 'performed' music to produce emotions using a self-pacing paradigm, in which participants controlled the onset and offset of each chord in a musical sequence by repeatedly pressing and lifting the same key on a MIDI piano. Key press velocity controlled the loudness of each chord. Results showed that (a) differentiation of emotions by 5-year-old children was mainly driven by arousal of the target emotion, with differentiation based on both valence and arousal at 7 years, and (b) tempo and loudness were used to differentiate emotions earlier in development than articulation. The results indicate that the developmental trajectory of emotion understanding in music may differ from the developmental trajectory in other domains.

5.
This study investigated whether the tempo and timbre of background music influenced responses to radio ads. In Experiment 1 (in addition to a no-music control condition), slow- or fast-tempo background music was superimposed over the same ad. The slow-tempo music treatment produced significantly higher levels of ad content recall compared to the fast-tempo music treatment. Musical presence (slow- and fast-tempo treatments combined vs. no-music) significantly reduced levels of ad content recall. In Experiment 2, when three versions of digitally produced background music timbres were superimposed over a no-music version of another ad, results revealed positive main effects of timbre congruity upon recall of ad content and affective responses to the ad.

6.
The automobile is currently the most popular and frequently reported location for listening to music. Yet, not much is known about the effects of music on driving performance, and only a handful of studies report that music-evoked arousal generated by loudness decreases automotive performance. Nevertheless, music tempo increases driving risk by competing for attentional resources: the greater the number of temporal events that must be processed, and the more frequent the temporal changes that require memory storage, the more driving operations and optimal driving capacities are disrupted. The current study explored the effects of music tempo on PC-controlled simulated driving. It was hypothesized that simulated driving while listening to fast-paced music would increase heart rate (HR), decrease simulated lap time, and increase virtual traffic violations. The study found that music tempo consistently affected both simulated driving speed and perceived speed estimates: as the tempo of background music increased, so too did simulated driving speed and speed estimates. Further, the tempo of background music consistently affected the frequency of virtual traffic violations: disregarded red traffic-lights (RLs), lane crossings (LNs), and collisions (ACs) were most frequent with fast-paced music. There are no known statistics on music-related automobile accidents and fatalities, and police investigators, drivers, and traffic researchers themselves are not mindful of the risks associated with listening to music while driving. Implications of the study point to a need for drivers' education courses to raise public awareness about the effects of music during driving.

7.
The present study addressed the effect of loudness and tempo on kinematics and muscular activities of the upper extremity during repetitive piano keystrokes. Eighteen pianists with professional music education struck two keys simultaneously and repetitively with a combination of four loudness levels and four tempi. The results demonstrated a significant interaction effect of loudness and tempo on peak angular velocity for the shoulder, elbow, wrist and finger joints, mean muscular activity for the corresponding flexors and extensors, and their co-activation level. The interaction effect indicated greater increases with tempo when eliciting louder tones for all joints and muscles except for the elbow velocity showing a greater decrease with tempo. Multiple-regression analysis and K-means clustering further revealed that the 18 pianists were categorized into three clusters with different interaction effects on joint kinematics. These clusters were characterized by either an elbow-velocity decrease and a finger-velocity increase, a finger-velocity decrease with increases in shoulder and wrist velocities, or a large elbow-velocity decrease with a shoulder-velocity increase when increasing both loudness and tempo. Furthermore, the muscular load considerably differed across the clusters. These findings provide information to determine muscles with the greatest potential risk of playing-related disorders based on movement characteristics of individual pianists.
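The clustering step described above can be illustrated with a brief, hypothetical sketch: pianists are grouped by the size of their loudness-by-tempo interaction effects on joint velocities. The feature matrix below is fabricated and the variable names are illustrative; this is not the authors' analysis pipeline.

import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
# rows: 18 pianists; columns: hypothetical interaction-effect estimates for
# shoulder, elbow, wrist, and finger peak angular velocity
interaction_effects = rng.normal(size=(18, 4))

X = StandardScaler().fit_transform(interaction_effects)
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X)
print(labels)  # cluster membership (0, 1, or 2) for each pianist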

8.
Children using cochlear implants (CIs) develop speech perception but have difficulty perceiving complex acoustic signals. Mode and tempo are the two components used to recognize emotion in music. Based on CI limitations, we hypothesized that children using CIs would have impaired perception of mode cues relative to their normal hearing peers and would rely more heavily on tempo cues to distinguish happy from sad music. Study participants were 16 children using CIs (13 right, 3 left; M = 12.7 years, SD = 2.6) and 16 normal hearing peers. Participants judged 96 brief piano excerpts from the classical genre as happy or sad in a forced-choice task. Excerpts were presented in random order, either in original form or with the mode transposed, the tempo altered, or both. When music was presented in original form, children using CIs discriminated between happy and sad music with accuracy well above chance levels (87.5%) but significantly below those with normal hearing (98%). The CI group primarily used tempo cues, whereas normal hearing children relied more on mode cues. Altering both mode and tempo cues in the same musical excerpt obliterated cues to emotion for both groups. Children using CIs showed significantly slower response times across all conditions. Children using CIs use tempo cues to discriminate happy versus sad music, reflecting a very different hearing strategy than that of their normal hearing peers. Slower reaction times by children using CIs indicate that they found the task more difficult and support the possibility that they require different strategies to process emotion in music than their normal hearing peers.
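For context, the comparison against chance reported above can be illustrated with a generic binomial test: 87.5% correct on 96 two-alternative (happy/sad) trials versus the 50% chance level. This is a sketch of the general idea, not the analysis used in the study.

from scipy.stats import binomtest

n_trials = 96
n_correct = round(0.875 * n_trials)   # 84 of 96 excerpts judged correctly
result = binomtest(n_correct, n_trials, p=0.5, alternative='greater')
print(result.pvalue)                  # far below .05, i.e., accuracy well above chance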

9.
Context effects, intraindividual variability, and internal consistency of intermodal joint scaling with magnitude estimation ("magnitude matching") were studied by instructing 12 subjects to judge the three pairings of odor intensity, loudness, and brightness on a common scale of perceived intensity, as well as to judge odor intensity separately (unimodal magnitude estimation). Significant context effects were found by comparing odor intensity judgments obtained by separate versus intermodal joint scaling, as well as across different modalities (loudness vs. brightness) in joint scaling. But no such effects were found for loudness or brightness when compared across modality of joint scaling. Intraindividual variability in the estimates implies about equal reliability in intermodal joint scaling and separate scaling. Good internal consistency was found, indicating that subjects are successful in expressing perceived intensities of different modalities on a common scale.
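Magnitude-estimation data of the kind described above are conventionally summarized with Stevens' power law, psi = k * phi**beta, whose exponent can be estimated by a straight-line fit in log-log coordinates. The sketch below uses fabricated judgments and is offered only as a generic illustration, not as the authors' analysis.

import numpy as np

phi = np.array([10, 20, 40, 80, 160], dtype=float)  # stimulus intensities (arbitrary units)
psi = np.array([5, 8, 13, 20, 32], dtype=float)     # fabricated median magnitude estimates

# log(psi) = log(k) + beta * log(phi)
beta, log_k = np.polyfit(np.log(phi), np.log(psi), 1)
print(f"estimated exponent beta = {beta:.2f}, scaling constant k = {np.exp(log_k):.2f}")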

10.
There is strong evidence of shared acoustic profiles common to the expression of emotions in music and speech, yet relatively limited understanding of the specific psychoacoustic features involved. This study combined a controlled experiment and computational modelling to investigate the perceptual codes associated with the expression of emotion in the acoustic domain. The empirical stage of the study provided continuous human ratings of emotions perceived in excerpts of film music and natural speech samples. The computational stage created a computer model that retrieves the relevant information from the acoustic stimuli and makes predictions about the emotional expressiveness of speech and music close to the responses of human subjects. We show that a significant part of the listeners' second-by-second reported emotions to music and speech prosody can be predicted from a set of seven psychoacoustic features: loudness, tempo/speech rate, melody/prosody contour, spectral centroid, spectral flux, sharpness, and roughness. The implications of these results are discussed in the context of cross-modal similarities in the communication of emotion in the acoustic domain.
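As a rough sketch of how several of the seven psychoacoustic features named above could be extracted from an audio excerpt, the following uses the librosa library, approximating loudness by RMS energy and spectral flux by onset strength; sharpness and roughness are omitted because they require specialized psychoacoustic toolboxes. The file name is hypothetical and this is not the authors' model.

import numpy as np
import librosa

y, sr = librosa.load("excerpt.wav")                     # hypothetical audio file

rms = librosa.feature.rms(y=y)[0]                       # frame-wise loudness proxy
centroid = librosa.feature.spectral_centroid(y=y, sr=sr)[0]
flux = librosa.onset.onset_strength(y=y, sr=sr)         # spectral-flux-like novelty curve
tempo, _ = librosa.beat.beat_track(y=y, sr=sr)          # global tempo estimate (BPM)
tempo = float(np.atleast_1d(tempo)[0])                  # scalar BPM across librosa versions

print(f"tempo ~ {tempo:.1f} BPM, mean RMS = {rms.mean():.4f}, "
      f"mean centroid = {centroid.mean():.0f} Hz, mean flux = {flux.mean():.3f}")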

11.
With today's high degree of advertising clutter, marketers may focus heavily on evoking emotion or creating hedonic (i.e., feeling-based) experiences for consumers in order to improve practice. These strategies minimize the effort needed to process a message and can influence consumers' decisions. In 4 studies, we examine the effects of music tempo on consumers' attitudes toward the brand while further considering the mediating role of evoked feelings. Studies 1 and 2 support the finding that music tempo in commercials influences consumers' affective responses to the music in advertising. Study 3 replicated this effect using a controlled experiment and extended the research by demonstrating that tempo also affects general mood states, in addition to feelings evoked by the music. Finally, Study 4 demonstrates that need for emotion moderates the role of affect as information. This research contributes to theory in sensory marketing and consumer behavior and offers practical implications to improve marketing practice.

12.
Objectives. To examine: (a) the effect of music type on running time and on sensations and thoughts experienced by the runners under high physical exertion, and (b) the role that music plays in the use of two distinct self-regulation techniques during high exertion, namely dissociative and motivational. Design and procedure. Three studies were conducted. In Study 1 and Study 2, performed in the laboratory, participants ran at 90% of their maximal oxygen uptake on a motorized treadmill four times, once each with rock, dance, and inspirational music, and once without attending to music. Ratings of perceived exertion (RPE) and heart rate (HR) were monitored during the run, and discomfort symptoms and music-specific questions were examined. In Study 3, performed in the field, participants ran a hilly course eight times, four under a competitive-pair condition, and four under a single-mode condition. Running time was the dependent variable. Results. Music failed to influence HR, RPE, and sensations of exertion in the three studies. However, about 30% of the participants indicated that the music helped them at the beginning of the run. The participants stated that music both directed their attention to the music and motivated them to continue. Despite the heavy workload reported by the runners, running with music was perceived as beneficial by many. Conclusions. People engaged in high intensity running may benefit from listening to music, but may not increase their ability to sustain that effort longer than they could without music. Further research that incorporates personal music type and rhythm preferences should be carried out in order to advance this line of inquiry.

13.
This study uses Mehrabian and Russell's (1974) Pleasure-Arousal-Dominance (PAD) model to consider how responses to both the music heard and overall in-situ listening experience are influenced by the listener's degree of control over music selected for a particular listening episode and the location in which the listening takes place. Following recruitment via campus advertisements and a university research participation program, 216 individuals completed a background questionnaire and music listening task in a 3 (location) × 2 (experimenter- or participant-selected music) design. After the listening task, participants completed a short questionnaire concerning the music they heard and the overall in-situ listening experience. Results demonstrated that there was a positive relationship between control and liking for the music and episode, whether the former was considered in terms of (1) whether the music was self-selected or experimenter-selected or (2) overt ratings of perceived control. Furthermore, the location and liking for the music were related to people's judgments of their enjoyment of the overall experience. This research indicates that the PAD model is a useful framework for understanding everyday music listening and supports the contention that, in a musical context, dominance may be operationalized as control over the music.

14.
Salient sensory experiences often have a strong emotional tone, but the neuropsychological relations between perceptual characteristics of sensory objects and the affective information they convey remain poorly defined. Here we addressed the relationship between sound identity and emotional information using music. In two experiments, we investigated whether perception of emotions is influenced by altering the musical instrument on which the music is played, independently of other musical features. In the first experiment, 40 novel melodies each representing one of four emotions (happiness, sadness, fear, or anger) were each recorded on four different instruments (an electronic synthesizer, a piano, a violin, and a trumpet), controlling for melody, tempo, and loudness between instruments. Healthy participants (23 young adults aged 18–30 years, 24 older adults aged 58–75 years) were asked to select which emotion they thought each musical stimulus represented in a four-alternative forced-choice task. Using a generalized linear mixed model we found a significant interaction between instrument and emotion judgement with a similar pattern in young and older adults (p < .0001 for each age group). The effect was not attributable to musical expertise. In the second experiment using the same melodies and experimental design, the interaction between timbre and perceived emotion was replicated (p < .05) in another group of young adults for novel synthetic timbres designed to incorporate timbral cues to particular emotions. Our findings show that timbre (instrument identity) independently affects the perception of emotions in music after controlling for other acoustic, cognitive, and performance factors.

15.
When people listen to music, they hear a beat and a metrical structure in the rhythm; these perceived patterns enable coordination with the music. A clear correspondence between the tempo of actual movement (e.g., walking) and that of music has been demonstrated, but whether similar coordination occurs during motor imagery is unknown. Twenty participants walked naturally for 8 m, either physically or mentally, while listening to slow or fast music, or to nothing at all (control condition). Executed and imagined walking times were recorded to assess the temporal congruence between physical practice (PP) and motor imagery (MI). Results showed a difference between the slow- and fast-music conditions, but neither of these durations differed from the soundless condition times, showing that body movement does not necessarily change in order to synchronize with music. However, the main finding revealed that the ability to achieve temporal congruence between PP and MI times was altered when listening to either slow or fast music. These data suggest that when physical movement is modulated with respect to the musical tempo, the MI efficacy of the corresponding movement may be affected by the rhythm of the music. Practical applications in sport are discussed, as athletes frequently listen to music before competing while mentally practicing the movements to be performed.

16.
How does context affect basic processes of sensory integration and the implicit psychophysical scales that underlie those processes? Five experiments examined how stimulus range and response regression determine characteristics of (a) psychophysical scales for loudness and (b) 3 kinds of intensity summation: binaural loudness summation, summation of loudness between tones widely spaced in frequency, and temporal loudness summation. Context affected the overt loudness scales in that smaller power-function exponents characterized larger versus smaller range of stimulation and characterized magnitude estimation versus magnitude production. More important, however, context simultaneously affected the degree of loudness integration as measured in terms of matching stimulus levels. Thus, stimulus range and scaling procedure influence not only overt response scales, but measures of underlying intensity processing.
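As a textbook-style illustration (not drawn from the paper) of why the power-function exponent matters for summation "measured in terms of matching stimulus levels": under Stevens' law L = k I^{\theta}, complete binaural summation of two equal inputs gives L_{\mathrm{bin}} = 2 k I^{\theta}, so the monaural intensity I_m that matches the binaural tone satisfies

k I_m^{\theta} = 2 k I^{\theta} \;\Rightarrow\; 10\log_{10}(I_m / I) = \frac{10\log_{10} 2}{\theta} \approx 10\ \text{dB for } \theta \approx 0.3.

On this simple account, the matching-level difference expected under complete summation depends directly on the exponent, which is why the overt scale and the matching-based measure of integration are examined together in the experiments described above.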

17.
Four studies present the first evidence showing that public (vs. private) provocation augments triggered displaced aggression by increasing the perceived intensity of the provocation. This effect is shown to be independent of face-saving motivation. Following a public or private provocation, Study 1 participants were induced to ruminate or were distracted for 20 min. They then had an opportunity to aggress against another person who either acted in a neutral or mildly annoying fashion (viz. triggering event). As expected, the magnitude of the greater displaced aggression of those who ruminated before the triggering event compared with those distracted was greater under public than private provocation. Study 2 replicated the findings of Study 1 and confirmed that public provocations are experienced as more intense. Studies 3 and 4 both manipulated provocation intensity directly to show that it mediated the moderating effect of public/private provocation found in Study 1. The greater intensity of a public provocation increases reactivity to a subsequent trigger, which in turn augments triggered displaced aggression.

18.
The present study reexamined the mood-mediation hypothesis for explaining background-music-dependent effects in free recall. Experiments 1 and 2 respectively examined tempo- and tonality-dependent effects in free recall, which had been used as evidence for the mood-mediation hypothesis. In Experiments 1 and 2, undergraduates (n = 75 per experiment) incidentally learned a list of 20 unrelated words presented one by one at a rate of 5 s per word and then received a 30-s delayed oral free-recall test. Throughout the study and test sessions, a piece of music was played. At the time of test, one third of the participants received the same piece of music with the same tempo or tonality as at study, one third heard a different piece with the same tempo or tonality, and one third heard a different piece with a different tempo or tonality. Note that the condition of the same piece with a different tempo or tonality was excluded. Furthermore, the number of sampled pieces of background music was increased compared with previous studies. The results showed neither tempo- nor tonality-dependent effects, but only a background-music-dependent effect. Experiment 3 (n = 40) compared the effects of background music with a verbal association task and focal music (only listening to musical selections) on the participants' moods. The results showed that both the music tempo and tonality influenced the corresponding mood dimensions (arousal and pleasantness). These results are taken as evidence against the mood-mediation hypothesis. Theoretical implications are discussed.

19.
Musical knowledge is largely implicit. It is acquired without awareness of its complex rules, through interaction with a large number of samples during musical enculturation. Whereas several studies have explored implicit learning of mostly abstract and less ecologically valid features of Western music, very little work has been done with ecologically valid stimuli or with non-Western music. The present study investigated implicit learning of modal melodic features in North Indian classical music in a realistic and ecologically valid way. It employed a cross-grammar design, using melodic materials from two modes (rāgas) that use the same scale. Findings indicated that Western participants unfamiliar with Indian music incidentally learned to identify distinctive features of each mode. Confidence ratings suggest that participants' performance was consistently correlated with confidence, indicating that they became aware of whether they were right in their responses; that is, they possessed explicit judgment knowledge. Altogether, our findings show incidental learning in a realistic, ecologically valid context after only a very short exposure, and they provide evidence that incidental learning constitutes a powerful mechanism that plays a fundamental role in musical acquisition.

20.
Influence of music on Wingate Anaerobic Test performance
While several studies have investigated the effects of music on cardiovascular endurance performance and perceived exertion during exercise of moderate intensity, few studies have investigated such effects on supramaximal exercise bouts. The purpose of the present study was to assess whether music affects performance on the Wingate Anaerobic Test. Each of the 12 men and 3 women was required to report to the laboratory on two occasions, once for tests in the music condition and once for tests in the no-music condition. Conditions were randomly ordered. All music selections were set at the same tempo. On each test day, subjects performed a series of three Wingate Anaerobic Tests with 30-sec rests in between. On Test 3, subjects were asked to continue pedaling until fatigued. Mean Power Output, Maximum Power Output, Minimum Power Output, and Fatigue Index were compared between conditions for each test using repeated-measures analysis of variance, and time to fatigue on Test 3 was compared by analysis of variance; no significant differences between conditions were found for any measure.
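For reference, the outcome measures named in this abstract are conventionally computed from the interval power outputs of the 30-s Wingate test. The sketch below uses fabricated 5-s interval values and the common (peak - minimum) / peak definition of the Fatigue Index, which may differ from the exact formula used in this study.

# Fabricated mean power (W) for six 5-s intervals of one 30-s Wingate test
power_w = [820, 780, 720, 650, 590, 540]

mean_power = sum(power_w) / len(power_w)
peak_power = max(power_w)
min_power = min(power_w)
fatigue_index = (peak_power - min_power) / peak_power * 100   # percent decline

print(f"Mean = {mean_power:.0f} W, Peak = {peak_power} W, "
      f"Min = {min_power} W, Fatigue Index = {fatigue_index:.1f}%")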

