Similar articles
20 similar articles found (search time: 31 ms)
1.
For listeners to recognize words, they must map temporally distributed phonetic feature cues onto higher order phonological representations. Three experiments examined what information listeners extract from assimilated segments (e.g., place-assimilated tokens of cone that resemble comb) and how they interpret it. Experiment 1 employed form priming to demonstrate that listeners activate the underlying form of CONE, but not of its neighbor (COMB). Experiment 2 employed phoneme monitoring to show that the same assimilated tokens facilitate the perception of postassimilation context. Together, the results of these two experiments suggest that listeners recover both the underlying place of the modified item and information about the subsequent item from the same modified segment. Experiment 3 replicated Experiment 1, using different postassimilation contexts to demonstrate that context effects do not reflect familiarity with a given assimilation process. The results are discussed in the context of general auditory grouping mechanisms.

2.
Prior knowledge shapes our experiences, but which prior knowledge shapes which experiences? This question is addressed in the domain of music perception. Three experiments were used to determine whether listeners activate specific musical memories during music listening. Each experiment provided listeners with one of two musical contexts that was presented simultaneously with a melody. After a listener was familiarized with melodies embedded in contexts, the listener heard melodies in isolation and judged the fit of a final harmonic or metrical probe event. The probe event matched either the familiar (but absent) context or an unfamiliar context. For both harmonic (Experiments 1 and 3) and metrical (Experiment 2) information, exposure to context shifted listeners' preferences toward a probe matching the context that they had been familiarized with. This suggests that listeners rapidly form specific musical memories without explicit instruction, which are then activated during music listening. These data pose an interesting challenge for models of music perception which implicitly assume that the listener's knowledge base is predominantly schematic or abstract.

3.
Three experiments were conducted in order to validate 56 musical excerpts that conveyed four intended emotions (happiness, sadness, threat and peacefulness). In Experiment 1, the musical clips were rated in terms of how clearly the intended emotion was portrayed, and for valence and arousal. In Experiment 2, a gating paradigm was used to evaluate the time course of emotion recognition. In Experiment 3, a dissimilarity judgement task and multidimensional scaling analysis were used to probe emotional content with no emotional labels. The results showed that emotions are easily recognised and discriminated on the basis of valence and arousal and with relative immediacy. Happy and sad excerpts were identified after the presentation of fewer than three musical events. With no labelling, emotion discrimination remained highly accurate and could be mapped on energetic and tense dimensions. The present study provides suitable musical material for research on emotions.

4.
Salient sensory experiences often have a strong emotional tone, but the neuropsychological relations between perceptual characteristics of sensory objects and the affective information they convey remain poorly defined. Here we addressed the relationship between sound identity and emotional information using music. In two experiments, we investigated whether perception of emotions is influenced by altering the musical instrument on which the music is played, independently of other musical features. In the first experiment, 40 novel melodies each representing one of four emotions (happiness, sadness, fear, or anger) were each recorded on four different instruments (an electronic synthesizer, a piano, a violin, and a trumpet), controlling for melody, tempo, and loudness between instruments. Healthy participants (23 young adults aged 18–30 years, 24 older adults aged 58–75 years) were asked to select which emotion they thought each musical stimulus represented in a four-alternative forced-choice task. Using a generalized linear mixed model we found a significant interaction between instrument and emotion judgement with a similar pattern in young and older adults (p < .0001 for each age group). The effect was not attributable to musical expertise. In the second experiment using the same melodies and experimental design, the interaction between timbre and perceived emotion was replicated (p < .05) in another group of young adults for novel synthetic timbres designed to incorporate timbral cues to particular emotions. Our findings show that timbre (instrument identity) independently affects the perception of emotions in music after controlling for other acoustic, cognitive, and performance factors.

5.
In the present study, the gating paradigm was used to measure how much of a musical excerpt needs to be heard before listeners can judge its familiarity or emotionality. Nonmusicians heard segments of increasing duration (250, 500, 1,000 msec, etc.). The stimuli were segments from familiar and unfamiliar musical excerpts in Experiment 1 and from very moving and emotionally neutral musical excerpts in Experiment 2. Participants judged how familiar (Experiment 1) or how moving (Experiment 2) the excerpt was to them. Results show that a feeling of familiarity can be triggered by 500-msec segments, and that the distinction between moving and neutral can be made for 250-msec segments. This finding extends the observation of fast-acting cognitive and emotional processes from face and voice perception to music perception.
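The gating procedure described above can be sketched computationally; a minimal Python illustration (function name, sample rate, and gate durations are assumptions for the example, not details from the study):

```python
def make_gates(signal, sr, durations_ms=(250, 500, 1000, 2000)):
    """Return onset-aligned segments ("gates") of increasing duration.

    Each gate starts at the beginning of the excerpt and extends to the
    requested duration, as in the gating paradigm: listeners hear
    progressively longer fragments of the same excerpt.
    """
    gates = []
    for ms in durations_ms:
        n = int(sr * ms / 1000)
        gates.append(signal[:min(n, len(signal))])
    return gates

sr = 44100
excerpt = [0.0] * (sr * 3)  # placeholder 3-second excerpt
gates = make_gates(excerpt, sr)
print([len(g) / sr for g in gates])  # → [0.25, 0.5, 1.0, 2.0]
```

Each gate is a prefix of the same excerpt, so a response after a short gate reflects only the information available in that initial fragment.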

6.
Previous findings on streaming are generalized to sequences composed of more than 2 subsequences. A new paradigm identified whether listeners perceive complex sequences as a single unit (integrative listening) or segregate them into 2 (or more) perceptual units (stream segregation). Listeners heard 2 complex sequences, each composed of 1, 2, 3, or 4 subsequences. Their task was to detect a temporal irregularity within 1 subsequence. In Experiment 1, the smallest frequency separation under which listeners were able to focus on 1 subsequence was unaffected by the number of co-occurring subsequences; nonfocused sounds were not perceptually organized into streams. In Experiment 2, detection improved progressively, not abruptly, as the frequency separation between subsequences increased from 0.25 to 6 auditory filters. The authors propose a model of perceptual organization of complex auditory sequences.

7.
Musically trained and untrained listeners were required to listen to 27 musical excerpts and to group those that conveyed a similar emotional meaning (Experiment 1). The groupings were transformed into a matrix of emotional dissimilarity that was analysed through multidimensional scaling methods (MDS). A 3-dimensional space was found to provide a good fit of the data, with arousal and emotional valence as the primary dimensions. Experiments 2 and 3 confirmed the consistency of this 3-dimensional space using excerpts of only 1 second duration. The overall findings indicate that emotional responses to music are very stable within and between participants, and are weakly influenced by musical expertise and excerpt duration. These findings are discussed in light of a cognitive account of musical emotion.
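The step from free groupings to a dissimilarity matrix can be sketched as follows; a hedged Python sketch (the data layout and function name are assumptions for illustration, since the abstract does not specify the exact coding scheme):

```python
from itertools import combinations

def dissimilarity_matrix(partitions, n_items):
    """Convert free-grouping data into a dissimilarity matrix.

    `partitions` holds one grouping per participant; each grouping is a
    list of groups, each group a list of excerpt indices judged similar.
    Dissimilarity for a pair = proportion of participants who did NOT
    place the two excerpts in the same group.
    """
    n_part = len(partitions)
    together = [[0] * n_items for _ in range(n_items)]
    for groups in partitions:
        for group in groups:
            for i, j in combinations(group, 2):
                together[i][j] += 1
                together[j][i] += 1
    return [[0.0 if i == j else 1 - together[i][j] / n_part
             for j in range(n_items)] for i in range(n_items)]

# Two hypothetical participants grouping four excerpts
parts = [[[0, 1], [2, 3]], [[0, 1, 2], [3]]]
d = dissimilarity_matrix(parts, 4)
print(d[0][1], d[2][3], d[0][3])  # → 0.0 0.5 1.0
```

The resulting symmetric matrix is the standard input for an MDS analysis, which embeds the excerpts in a low-dimensional space so that inter-point distances approximate these dissimilarities.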

8.
Four experiments investigated the perception of tonal structure in polytonal music. The experiments used musical excerpts in which the upper stave of the music suggested a different key than the lower stave. In Experiment 1, listeners rated the goodness of fit of probe tones following an excerpt from Dubois's Circus. Results suggested that listeners were sensitive to two keys, and weighted them according to their perceived importance within the excerpt. Experiment 2 confirmed that music within each stave reliably conveyed key structure on its own. In Experiment 3, listeners rated probe tones following an excerpt from Milhaud's Sonata No. 1 for Piano, in which different keys were conveyed in widely separate pitch registers. Ratings were collected across three octaves. Listeners did not associate each key with a specific register. Rather, ratings for all three octave registers reflected only the key associated with the upper stave. Experiment 4 confirmed that the music within each stave reliably conveyed key structure on its own. It is suggested that when one key predominates in a polytonal context, other keys may not contribute to the overall perceived tonal structure. The influence of long-term knowledge and immediate context on the perception of tonal structure in polytonal music is discussed.

9.
Research on emotion processing in the visual modality suggests a processing advantage for emotionally salient stimuli, even at early sensory stages; however, results concerning the auditory correlates are inconsistent. We present two experiments that employed a gating paradigm to investigate emotional prosody. In Experiment 1, participants heard successively building segments of Jabberwocky “sentences” spoken with happy, angry, or neutral intonation. After each segment, participants indicated the emotion conveyed and rated their confidence in their decision. Participants in Experiment 2 also heard Jabberwocky “sentences” in successive increments, with half discriminating happy from neutral prosody, and half discriminating angry from neutral prosody. Participants in both experiments identified neutral prosody more rapidly and accurately than happy or angry prosody. Confidence ratings were greater for neutral sentences, and error patterns also indicated a bias for recognising neutral prosody. Taken together, results suggest that enhanced processing of emotional content may be constrained by stimulus modality.

10.
Are emotions perceived automatically? Two psychological refractory period experiments were conducted to ascertain whether emotion perception requires central attentional resources. Task 1 required an auditory discrimination (tone vs. noise), whereas Task 2 required a discrimination between happy and angry faces. The difficulty of Task 2 was manipulated by varying the degree of emotional expression. The stimulus onset asynchrony (SOA) between Task 1 and Task 2 was also varied. Experiment 1 revealed additive effects of SOA and Task 2 emotion-perception difficulty. Experiment 2 replicated the additive relationship with a stronger manipulation of emotion-perception difficulty. According to locus-of-slack logic, our participants did not process emotional expressions while central resources were devoted to Task 1. We conclude that emotion perception is not fully automatic.

11.
Expression in musical performance is largely communicated by the manner in which a piece is played; interpretive aspects that supplement the written score. In piano performance, timing and amplitude are the principal parameters the performer can vary. We examined the way in which such variation serves to communicate emotion by manipulating timing and amplitude in performances of classical piano pieces. Over three experiments, listeners rated the emotional expressivity of performances and their manipulated versions. In Experiments 1 and 2, timing and amplitude information were covaried; judgments were monotonically decreasing with performance variability, demonstrating that the rank ordering of acoustical manipulations was captured by participants' responses. Further, participants' judgments formed an S-shaped (sigmoidal) function in which greater sensitivity was seen for musical manipulations in the middle of the range than at the extremes. In Experiment 3, timing and amplitude were manipulated independently; timing variation was found to provide more expressive information than did amplitude. Across all three experiments, listeners demonstrated sensitivity to the expressive cues we manipulated, with sensitivity increasing as a function of musical experience.

12.
Previous research has shown that, in a task requiring the detection of local deviations from mechanically precise timing in music, the relative detectability of deviations in different positions is closely related to the typical expressive timing pattern musicians produce when playing the music. This result suggests that listeners expect to hear music expressively timed and compensate for the absence of expressive timing. Three new detection experiments shed additional light on the nature of these timing expectations in musically trained listeners. Experiment 1 shows that repeated exposure to an atypically (but not unmusically) timed performance leaves listeners' timing expectations unaffected. Experiment 2 demonstrates that the expectations do not manifest themselves when listeners merely imagine the music in synchrony with a click track. Experiment 3, however, shows that the timing expectations are fully operational when the click track is superimposed on the music. These results reveal timing “expectations” to be an obligatory consequence of the ongoing auditory perception of musical structure. Received: 5 November 1996 / Accepted: 26 February 1997

13.
Effects of context on auditory stream segregation
The authors examined the effect of preceding context on auditory stream segregation. Low tones (A), high tones (B), and silences (-) were presented in an ABA- pattern. Participants indicated whether they perceived 1 or 2 streams of tones. The A tone frequency was fixed, and the B tone was the same as the A tone or had 1 of 3 higher frequencies. Perception of 2 streams in the current trial increased with greater frequency separation between the A and B tones (Δf). Larger Δf in previous trials modified this pattern, causing less streaming in the current trial. This occurred even when listeners were asked to bias their perception toward hearing 1 stream or 2 streams. The effect of previous Δf was not due to response bias because simply perceiving 2 streams in the previous trial did not cause less streaming in the current trial. Finally, the effect of previous Δf was diminished, though still present, when the silent duration between trials was increased to 5.76 s. The time course of this context effect on streaming implicates the involvement of auditory sensory memory or neural adaptation.
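The ABA- stimulus schedule described above can be sketched as follows; a minimal Python illustration (tone duration, base frequency, and the semitone parameterization of the A-B separation are assumptions for the example, not values from the study):

```python
def aba_sequence(a_freq, delta_semitones, n_repeats, tone_ms=100):
    """Build the (onset_ms, freq_Hz) schedule for an ABA- streaming pattern.

    A tones are fixed in frequency; the B tone sits `delta_semitones`
    above A (a separation of 0 makes B identical to A). The '-' slot is
    a silence of one tone duration, so each cycle spans 4 * tone_ms.
    """
    b_freq = a_freq * 2 ** (delta_semitones / 12)
    events = []
    t = 0
    for _ in range(n_repeats):
        for f in (a_freq, b_freq, a_freq, None):  # None = silent slot
            if f is not None:
                events.append((t, round(f, 2)))
            t += tone_ms
    return events

seq = aba_sequence(500, 12, 2)  # A-B separation of one octave
print(seq[:3])  # → [(0, 500), (100, 1000.0), (200, 500)]
```

With a small separation the pattern tends to be heard as one galloping stream; with a large separation the A and B tones split into two isochronous streams, which is the percept the participants reported.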

14.
The effects of harmony and rhythm on expectancy formation were studied in two experiments. In both studies, we generated musical passages consisting of a melodic line accompanied by four harmonic (chord) events. These sequences varied in their harmonic content, the rhythmic periodicity of the three context chords prior to the final chord, and the ending time of the final chord itself. In Experiment 1, listeners provided ratings for how well the final chord in a chord sequence fit their expectations for what was to come next; analyses revealed subtle changes in ratings as a function of both harmonic and rhythmic variation. Experiment 2 extended these results; listeners made a speeded reaction time judgment on whether the final chord of a sequence belonged with its set of context chords. Analysis of the reaction time data suggested that harmonic and rhythmic variation also influenced the speed of musical processing. These results are interpreted with reference to current models of music cognition, and they highlight the need for rhythmical weighting factors within the psychological representation of tonal/pitch information.

15.
Because the environment often includes multiple sounds that overlap in time, listeners must segregate a sound of interest (the auditory figure) from other co-occurring sounds (the unattended auditory ground). We conducted a series of experiments to clarify the principles governing the extraction of auditory figures. We distinguish between auditory "objects" (relatively punctate events, such as a dog's bark) and auditory "streams" (sounds involving a pattern over time, such as a galloping rhythm). In Experiments 1 and 2, on each trial 2 sounds, an object (a vowel) and a stream (a series of tones), were presented with 1 target feature that could be perceptually grouped with either source. In each block of these experiments, listeners were required to attend to 1 of the 2 sounds, and report its perceived category. Across several experimental manipulations, listeners were more likely to allocate the feature to an impoverished object if the result of the grouping was a good, identifiable object. Perception of objects was quite sensitive to feature variation (noise masking), whereas perception of streams was more robust to feature variation. In Experiment 3, the number of sound sources competing for the feature was increased to 3. This produced a shift toward relying more on spatial cues than on the potential contribution of the feature to an object's perceptual quality. The results support a distinction between auditory objects and streams, and provide new information about the way that the auditory world is parsed.

16.
Two experiments using identical stimuli were run to determine whether the vocal expression of emotion affects the speed with which listeners can identify emotion words. Sentences were spoken in an emotional tone of voice (Happy, Disgusted, or Petrified), or in a Neutral tone of voice. Participants made speeded lexical decisions about the word or pseudoword in sentence-final position. Critical stimuli were emotion words that were either semantically congruent or incongruent with the tone of voice of the sentence. Experiment 1, with randomised presentation of tone of voice, showed no effect of congruence or incongruence. Experiment 2, with blocked presentation of tone of voice, did show such effects: Reaction times for congruent trials were faster than those for baseline trials and incongruent trials. Results are discussed in terms of expectation (e.g., Kitayama, 1990, 1991, 1996) and emotional connotation, and implications for models of word recognition are considered.

17.
How do perceivers apply knowledge to instances they have never experienced before? On one hand, listeners might use idealized representations that do not contain specific details. On the other, they might recognize and process information based on more detailed memory representations. The current study examined the latter possibility with respect to musical meter perception, previously thought to be computed based on highly idealized (isochronous) internal representations. In six experiments, listeners heard sets of metrically ambiguous melodies. Each melody was played in a simultaneous musical context with unambiguous metrical cues (3/4 or 6/8). Cross-melody similarity was manipulated by pairing certain cues, timbre (musical instrument) and motif content (2- to 6-note patterns), with each meter, or by distributing cues across meters. After multiple exposures, listeners heard each melody without context, and judged metrical continuations (all experiments) or familiarity (Experiments 5-6). Responses were assessed for "metrical restoration", the tendency to make metrical judgments that fit the melody's previously heard metrical context. Cross-melody similarity affected the presence and degree of metrical restoration, and timbre affected familiarity. Results suggest that metrical processing may be calculated based on fairly detailed representations rather than idealized isochronous pulses, and is dissociated somewhat from familiarity judgments. Implications for theories of meter perception are discussed.

18.
In three experiments, musically trained and untrained adults listened to three repetitions of a 5-note melodic sequence followed by a final melody with either the same tune as those preceding it or differing in one position by one semitone. In Experiment 1, ability to recognize the final sequence was examined as a function of redundancy at the levels of musical structure in a sequence, contour complexity of transpositions in a trial, and trial context in a session. Within a sequence, tones were related as the major or augmented triad; within a trial, the four sequences began on successively higher notes (simple macrocontour) or on randomly selected notes (complex macrocontour); and within a session, trials were either blocked (all major or all augmented) or mixed (major and augmented randomly selected). Performance was superior for major melodies, for systematic transpositions within a trial (simple macrocontours), for blocked trials, and for musically trained listeners. In Experiment 2, we examined further the effect of macrocontour. Performance on simple macrocontours exceeded that on complex, and excluded the possibility that repetition of the 20-note sequences provided the entire benefit of systematic transposition in Experiment 1. The effect of musical structure (major/augmented) was also replicated. In Experiment 3, listeners provided structure ratings of ascending 20-note sequences from Experiment 2. Ratings on same trials were higher than those on corresponding different trials, in contrast to performance scores for augmented same and different trials in previous experiments. The concept of functional uncertainty was proposed to account for recognition difficulties on augmented same trials. The significant effects of redundancy on all the levels examined confirm the utility of the information-processing framework for the study of melodic sequence perception.

19.
In a series of experiments, we examined age-related differences in adults' ability to order sequences of tones presented at various speeds and in contexts designed to promote or to impede stream segregation. In Experiment 1, 32 listeners (16 young, 16 old) were required to identify two repeating sequences that consisted of four tones (two from a high and two from a low frequency range) in different order. In Experiment 2, 32 listeners were required to judge whether the two recycled patterns from Experiment 1 were the same or different. In Experiment 3, four young and four old listeners were tested on the tasks of Experiment 2 over an extended period. In Experiment 4, 16 young and 16 old listeners were tested with sequences that were not recycled and were composed of tones drawn from a narrow frequency range. Elderly adults were less able than young adults to distinguish between tone sequences with contrasting order, regardless of the speed of presentation, the nature of the task (identification vs. same/different), the amount of practice, the frequency separation of the tones, or the presence or absence of recycling. These findings provide evidence of a temporal sequencing impairment in elderly listeners but reveal no indication of age differences in streaming processes.

20.
Perceptual interactions between musical pitch and timbre.
These experiments examined perceptual interactions between musical pitch and timbre. Experiment 1, through the use of the Garner classification tasks, found that pitch and timbre of isolated tones interact. Classification times showed interference from uncorrelated variation in the irrelevant attribute and facilitation from correlated variation; the effects were symmetrical. Experiments 2 and 3 examined how musical pitch and timbre function in longer sequences. In recognition memory tasks, a target tone always appeared in a fixed position in the sequences, and listeners were instructed to attend to either its pitch or its timbre. For successive tones, no interactions between timbre and pitch were found. That is, changing the pitches of context tones did not affect timbre recognition, and vice versa. The tendency to perceive pitch in relation to other context pitches was strong and unaffected by whether timbre was constant or varying. In contrast, the relative perception of timbre was weak and was found only when pitch was constant. These results suggest that timbre is perceived more in absolute than in relative terms. Perceptual implications for creating patterns in music with timbre variations are discussed.


Copyright © Beijing Qinyun Technology Development Co., Ltd. (北京勤云科技发展有限公司) | 京ICP备09084417号