Similar Literature
A total of 20 similar documents were retrieved (search time: 93 ms).
1.
The effects of harmony and rhythm on expectancy formation were studied in two experiments. In both studies, we generated musical passages consisting of a melodic line accompanied by four harmonic (chord) events. These sequences varied in their harmonic content, the rhythmic periodicity of the three context chords prior to the final chord, and the ending time of the final chord itself. In Experiment 1, listeners provided ratings for how well the final chord in a chord sequence fit their expectations for what was to come next; analyses revealed subtle changes in ratings as a function of both harmonic and rhythmic variation. Experiment 2 extended these results; listeners made a speeded reaction time judgment on whether the final chord of a sequence belonged with its set of context chords. Analysis of the reaction time data suggested that harmonic and rhythmic variation also influenced the speed of musical processing. These results are interpreted with reference to current models of music cognition, and they highlight the need for rhythmical weighting factors within the psychological representation of tonal/pitch information.

2.
Two experiments addressed the influences of harmonic relations, melody location, and relative frequency height on the perceptual organization of multivoiced music. In Experiment 1, listeners detected pitch changes in multivoiced piano music. Harmonically related pitch changes and those in the middle-frequency range were least noticeable. All pitch changes were noticeable in the high-frequency voice containing the melody (the most important voice), suggesting that melody can dominate harmonic relations. However, the presence of upper partials in the piano timbre used may have accounted for the harmonic effects. Experiment 2 employed pure sine tones, and replicated the effects of Experiment 1. In addition, the influence of the high-frequency melody on the noticeability of harmonically related pitches was lessened by the presence of a second melody. These findings suggest that harmonic, melodic, and relative frequency height relationships among voices interact in the perceptual organization of multivoiced music.

3.
Listeners judged whether a target tone was contained within a previously or subsequently presented major chord, and targets consisted of either the root, third, fifth, or tritone of the scale based on the root of the chords. Chord position influenced the relative recognition of targets, but listeners exhibited greater recognition of the fifth regardless of chord position (root, first inversion, second inversion). The data were not consistent with notions of root tracking or melody tracking. The data were broadly consistent with the notion that different chord positions may be harmonically equivalent (i.e., that listeners may recognize components of a chord regardless of chord position), with notions of analytic set and the importance of an instantiation of musical context for chord processing, and with the importance of the fifth in harmonic progression.

4.
We investigated the effects of selective attention and musical training on the processing of harmonic expectations. In Experiment 1, participants with and without musical training were required to respond to the contour of melodies as they were presented with chord progressions that were highly expected, slightly unexpected, or extremely unexpected. Reaction time and accuracy results showed that when attention was focused on the melody, musically trained participants were still sensitive to different harmonic expectations, whereas participants with no musical training showed no differentiation across expectation conditions. In Experiment 2, participants were required to listen holistically to the entire chord progression and to rate their preference for it. Results from the preference ratings showed that all participants, with or without musical training, were sensitive to manipulations of harmonic expectations. Experiments 3 and 4 showed that changing the speed of presentation of the chord progressions did not affect the pattern of results. Together, the four experiments highlight the importance of attentional focus and of musical training in the processing of harmonic expectations.

5.
The processing of chords is facilitated when they are harmonically related to the context in which they appear. The purpose of this study was to assess whether this harmonic priming effect depends on the version (normal vs. scrambled) of the context chord sequences. Normal sequences were scrambled by permuting chords two-by-two (Experiment 1) or four-by-four (Experiments 2 and 3). Scrambled chord sequences were judged less coherent than normal sequences. However, chord sequences showed facilitation for harmonically related targets over unrelated targets, and this relatedness effect did not diminish for scrambled sequences (Experiments 1-3). The data of musicians and nonmusicians were interpreted within Bharucha's (1987) spreading activation framework. Simulations suggested that harmonic priming results from activation that spreads via schematic knowledge of Western harmony and accumulates in short-term memory over the course of the chord sequence.
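To make the spreading-activation account more concrete, the sketch below is a deliberately simplified, hypothetical Python illustration rather than Bharucha's (1987) actual MUSACT model: the tones of each context chord activate chord units, chord units activate key units, keys feed activation back to their member chords, and activation accumulates with decay across the sequence, so that a harmonically related target chord ends up more activated (primed) than an unrelated one. The network topology, weights, decay, and feedback values are illustrative assumptions only.

    import numpy as np

    def major_chord(root):
        # Pitch classes of a major triad built on `root` (0 = C).
        return {root % 12, (root + 4) % 12, (root + 7) % 12}

    # One unit per major key; assume (for illustration) that a key is linked
    # to the chords built on its tonic, subdominant, and dominant (I, IV, V).
    KEY_CHORDS = {k: [k % 12, (k + 5) % 12, (k + 7) % 12] for k in range(12)}

    def prime_activation(context_roots, decay=0.7, feedback=0.1):
        """Accumulate chord-unit activation over a sequence of context chords.

        Tones of each sounded chord activate chord units sharing those tones
        (bottom-up); chord units activate key units; key units feed activation
        back to their member chords (top-down); activation decays between
        events, standing in for short-term memory.
        """
        chord_act = np.zeros(12)  # one unit per major-chord root
        for root in context_roots:
            tones = major_chord(root)
            bottom_up = np.array([len(tones & major_chord(c)) / 3.0
                                  for c in range(12)])
            key_act = np.array([sum(bottom_up[c] for c in KEY_CHORDS[k])
                                for k in range(12)])
            top_down = np.array([sum(key_act[k] for k in range(12)
                                     if c in KEY_CHORDS[k])
                                 for c in range(12)])
            chord_act = decay * chord_act + bottom_up + feedback * top_down
        return chord_act

    # Usage: after a C-major context (C, F, G chords), the related target
    # chord C is left more activated than the unrelated target F#, mirroring
    # the priming advantage for harmonically related targets.
    act = prime_activation([0, 5, 7])  # chord roots: C, F, G
    print("related target (C):  ", round(act[0], 2))
    print("unrelated target (F#):", round(act[6], 2))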

6.
Two experiments are reported in which participants attempted to reject the tape-recorded voice of a stranger and to identify by name the voices of three personal associates who differed in their level of familiarity. In Experiment 1, listeners were asked to identify speakers as soon as possible but were not allowed to change their responses once made. In Experiment 2, listeners were permitted to change their responses over successive presentations of increasing durations of voice segments. Also in Experiment 2, half of the listeners attempted to identify speakers who spoke in normal-tone voices, and the remainder attempted to identify the same speakers speaking in whispers. Separate groups of undergraduate students attempted to predict the performance of the listeners in both experiments. Accuracy of performance depended on the familiarity of the speakers and the tone of speech. A between-subjects analysis of rated confidence was diagnostic of accuracy for high- and low-familiarity speakers (Experiment 1) and for moderately familiar and unfamiliar normal-tone speakers (Experiment 2). A modified between-subjects analysis conducted across the four levels of familiarity yielded reliable accuracy-confidence correlations in both experiments. Beliefs about the accuracy of voice identification were inflated relative to the significantly lower actual performance in most of the normal-tone and whispered-speech conditions. Forensic significance and generalizations are addressed.

7.
Four experiments examined the effects of language characteristics on voice identification. In Experiment 1, monolingual English listeners identified bilinguals' voices much better when they spoke English than when they spoke German. The opposite outcome was found in Experiment 2, in which the listeners were monolingual in German. In Experiment 3, monolingual English listeners also showed better voice identification when bilinguals spoke a familiar language (English) than when they spoke an unfamiliar one (Spanish). However, English-Spanish bilinguals hearing the same voices showed a different pattern, with the English-Spanish difference being statistically eliminated. Finally, Experiment 4 demonstrated that, for English-dominant listeners, voice recognition deteriorates systematically as the passage being spoken is made less similar to English by rearranging words, rearranging syllables, and reversing normal text. Taken together, the four experiments confirm that language familiarity plays an important role in voice identification.

8.
A melody's identity is determined by relations between consecutive tones in terms of pitch and duration, whereas surface features (i.e., pitch level or key, tempo, and timbre) are irrelevant. Although surface features of highly familiar recordings are encoded into memory, little is known about listeners' mental representations of melodies heard once or twice. It is also unknown whether musical pitch is represented additively or interactively with temporal information. In two experiments, listeners heard unfamiliar melodies twice in an initial exposure phase. In a subsequent test phase, they heard the same (old) melodies interspersed with new melodies. Some of the old melodies were shifted in key, tempo, or key and tempo. Listeners' task was to rate how well they recognized each melody from the exposure phase while ignoring changes in key and tempo. Recognition ratings were higher for old melodies that stayed the same compared to those that were shifted in key or tempo, and detrimental effects of key and tempo changes were additive in between-subjects (Experiment 1) and within-subjects (Experiment 2) designs. The results confirm that surface features are remembered for melodies heard only twice. They also imply that key and tempo are processed and stored independently.

9.
The effects of global harmonic contexts on expectancy formation were studied in a set of three experiments. Eight-chord sequences were presented to subjects. Expectations for the last chord were varied by manipulating the harmonic context created by the first six: in one context, the last chord was part of an authentic cadence (V–I), whereas in the other, it was a fourth harmonic degree following a full cadence (I–IV). Given this change in harmonic function, the last chord was assumed to be more expected in the former context, all the other local parameters being held constant. The effect of global context on expectancy formation was supported by the fact that subjects reported a lower degree of completion for sequences ending on an unexpected chord (Experiment 1), took longer to decide whether the last chord belonged to the sequence when the last chord was unexpected (Experiment 2), and took longer to decide whether the last chord was consonant or dissonant when it was unexpected (Experiment 3). These results are discussed with reference to current models of tonal cognition.

10.
In a probe-tone experiment, two groups of listeners — one trained, the other untrained, in traditional music theory — rated the goodness of fit of each of the 12 notes of the chromatic scale to four-voice harmonic sequences. Sequences were 12 simplified excerpts from Bach chorales: 4 nonmodulating and 8 modulating. Modulations occurred either one or two steps in either the clockwise or the counterclockwise direction on the cycle of fifths. A consistent pattern of probe-tone ratings was obtained for each sequence, with no significant differences between listener groups. Two methods of analysis (Fourier analysis and regression analysis) revealed a directional asymmetry in the perceived key movement conveyed by modulating sequences. For a given modulation distance, modulations in the counterclockwise direction effected a clearer shift in tonal organization toward the final key than did clockwise modulations. The nature of the directional asymmetry was consistent with results reported for identification and rating of key change in the sequences (Thompson & Cuddy, 1989a). Further, according to the multiple-regression analysis, probe-tone ratings did not merely reflect the distribution of tones in the sequence. Rather, ratings were sensitive to the temporal structure of the tonal organization in the sequence.

11.
The present study was designed to examine age differences in the ability to use voice information acquired intentionally (Experiment 1) or incidentally (Experiment 2) as an aid to spoken word identification. Following both implicit and explicit voice learning, participants were asked to identify novel words spoken either by familiar talkers (ones they had been exposed to in the training phase) or by 4 unfamiliar voices. In both experiments, explicit memory for talkers' voices was significantly lower in older than in young listeners. Despite this age-related decline in voice recognition, however, older adults exhibited equivalent, and in some cases greater, benefit than young listeners from having words spoken by familiar talkers. Implications of the findings for age-related changes in explicit versus implicit memory systems are discussed.

12.
The present study reexamines the hypothesis that there exist emotional attributions specific to simple musical elements. In Experiment 1, groups of participants with varying musical expertise rated the emotional meaning of four natural intervals, each presented harmonically as two simultaneous sine tones. In Experiment 2, the higher tone of each interval was raised an octave above that used in Experiment 1, while the lower tone was kept constant. Attributions for each interval were positively correlated from one experimental session to another, even though the intervals differed in their component pitches. Musicians gave the most reliable choices of meaning. In a third experiment, participants rated the emotional meaning of various unfamiliar ethnic melodies with expressions describing the intervals' meaning based on the results of Experiments 1 and 2. There were distinct profiles of emotional meanings for each melody, and these coincided with the meaning of the intervals that constituted the surface and deep structure of each melody. The intervallic structures (i.e., the main intervals of the tunes) and the respective chords for each melody were also presented aurally, and participants' ratings showed similar emotional profiles for these when compared to those of the melodies themselves.

13.
Responsiveness of musically trained and untrained adults to pitch-distributional information in melodic contexts was assessed. In Experiment 1, melodic contexts were pure-tone sequences, generated from either a diatonic or one of four nondiatonic tonesets, in which pitch-distributional information was manipulated by variation of the relative frequency of occurrence of tones from the toneset. Both the assignment of relative frequency of occurrence to tones and the construction of the (fixed) temporal order of tones within the sequences contravened the conventions of Western tonal music. A probe-tone technique was employed. Each presentation of a sequence was followed by a probe tone, one of the 12 chromatic notes within the octave. Listeners rated the goodness of musical fit of the probe tone to the sequence. Probe-tone ratings were significantly related to frequency of occurrence of the probe tone in the sequence for both trained and untrained listeners. In addition, probe-tone ratings decreased as the pitch distance between the probe tone and the final tone of the sequence increased. For musically trained listeners, probe-tone ratings for diatonic sequences tended also to reflect the influence of an internalized tonal schema. Experiment 2 demonstrated that the temporal location of tones in the sequences could not alone account for the effect of frequency of occurrence in Experiment 1. Experiment 3 tested musically untrained listeners under the conditions of Experiment 1, with the exception that the temporal order of tones in each sequence was randomized across trials. The effect of frequency of occurrence found in Experiment 1 was replicated and strengthened.

14.
Our voices sound different depending on the context (laughing vs. talking to a child vs. giving a speech), making within-person variability an inherent feature of human voices. When perceiving speaker identities, listeners therefore need to not only 'tell people apart' (perceiving exemplars from two different speakers as separate identities) but also 'tell people together' (perceiving different exemplars from the same speaker as a single identity). In the current study, we investigated how such natural within-person variability affects voice identity perception. Using voices from a popular TV show, listeners, who were either familiar or unfamiliar with this show, sorted naturally varying voice clips from two speakers into clusters to represent perceived identities. Across three independent participant samples, unfamiliar listeners perceived more identities than familiar listeners and frequently mistook exemplars from the same speaker to be different identities. These findings point towards a selective failure in 'telling people together'. Our study highlights within-person variability as a key feature of voices that has striking effects on (unfamiliar) voice identity perception. Our findings not only open up a new line of enquiry in the field of voice perception but also call for a re-evaluation of theoretical models to account for natural variability during identity perception.

15.
In four experiments, we examined the effects of exposure to unfamiliar tone sequences on melodic expectancy and memory. In Experiment 1, 30 unfamiliar tone sequences (target sequences) were presented to listeners three times each in random order (exposure phase), and listeners recorded the number of notes in each sequence. Listeners were then presented with target and novel sequences and rated how well the final note continued the pattern of notes that preceded it. Novel sequences were identical to target sequences, except for the final note. Ratings were significantly higher for target sequences than for novel sequences, illustrating the influence of exposure on melodic expectancy. Experiment 2 confirmed that without exposure to target sequences, ratings were equivalent for target and novel sequences. In Experiment 3, new listeners were assessed for explicit memory for target sequences following the exposure phase. Recognition of target sequences was above chance, but unrelated to the expectancy judgments in Experiment 1. Experiment 4 replicated the exposure effect, using a modified experimental design, and confirmed that the effect is not dependent on explicit memory for sequences. We discuss the idea that melodic expectancies are influenced by implicit memory for recently heard melodic patterns.

16.
Four experiments investigated the perception of tonal structure in polytonal music. The experiments used musical excerpts in which the upper stave of the music suggested a different key than the lower stave. In Experiment 1, listeners rated the goodness of fit of probe tones following an excerpt from Dubois's Circus. Results suggested that listeners were sensitive to two keys, and weighted them according to their perceived importance within the excerpt. Experiment 2 confirmed that music within each stave reliably conveyed key structure on its own. In Experiment 3, listeners rated probe tones following an excerpt from Milhaud's Sonata No. 1 for Piano, in which different keys were conveyed in widely separate pitch registers. Ratings were collected across three octaves. Listeners did not associate each key with a specific register. Rather, ratings for all three octave registers reflected only the key associated with the upper stave. Experiment 4 confirmed that the music within each stave reliably conveyed key structure on its own. It is suggested that when one key predominates in a polytonal context, other keys may not contribute to the overall perceived tonal structure. The influence of long-term knowledge and immediate context on the perception of tonal structure in polytonal music is discussed.

17.
From only a single spoken word, listeners can form a wealth of first impressions of a person's character traits and personality based on their voice. However, due to the substantial within-person variability in voices, these trait judgements are likely to be highly stimulus-dependent for unfamiliar voices: The same person may sound very trustworthy in one recording but less trustworthy in another. How trait judgements differ when listeners are familiar with a voice is unclear: Are listeners who are familiar with the voices as susceptible to the effects of within-person variability? Does the semantic knowledge listeners have about a familiar person influence their judgements? In the current study, we tested the effect of familiarity on listeners' trait judgements from variable voices across three experiments. Using a between-subjects design, we contrasted trait judgements by listeners who were familiar with a set of voices – either through laboratory-based training or through watching a TV show – with listeners who were unfamiliar with the voices. We predicted that familiarity with the voices would reduce variability in trait judgements for variable voice recordings from the same identity (cf. Mileva, Kramer, & Burton, 2019, Perception, 48, 471, for faces). However, across the three studies and two types of measures used to assess variability, we found no compelling evidence that trait impressions were systematically affected by familiarity.

18.
Past research has identified an event-related potential (ERP) marker for vocal emotional encoding and has highlighted vocal-processing differences between male and female listeners. We further investigated this ERP vocal-encoding effect in order to determine whether it predicts voice-related changes in listeners' memory for verbal interaction content. Additionally, we explored whether sex differences in vocal processing would affect such changes. To these ends, we presented participants with a series of neutral words spoken with a neutral or a sad voice. The participants subsequently encountered these words, together with new words, in a visual word recognition test. In addition to making old/new decisions, the participants rated the emotional valence of each test word. During the encoding of spoken words, sad voices elicited a greater P200 in the ERP than did neutral voices. While the P200 effect was unrelated to a subsequent recognition advantage for test words previously heard with a neutral as compared to a sad voice, the P200 did significantly predict differences between these words in a concurrent late positive ERP component. Additionally, the P200 effect predicted voice-related changes in word valence. As compared to words studied with a neutral voice, words studied with a sad voice were rated more negatively, and this rating difference increased with the size of the P200 encoding effect. While some of these results were comparable in male and female participants, the latter group showed a stronger P200 encoding effect and qualitatively different ERP responses during word retrieval. Estrogen measurements suggested the possibility that these sex differences have a genetic basis.

19.
Tonal structure is musical organization on the basis of pitch, in which pitches vary in importance and rate of occurrence according to their relationship to a tonal center. Experiment 1 evaluated the maximum key-profile correlation (MKC), a product of Krumhansl and Schmuckler's key-finding algorithm (Krumhansl, 1990), as a measure of tonal structure. The MKC is the maximum correlation coefficient between the pitch-class distribution in a musical sample and the key profiles, which indicate the stability of pitches with respect to particular tonal centers. The MKC values of melodies correlated strongly with listeners' ratings of tonal structure. To measure the influence of the temporal order of pitches on perceived tonal structure, three measures (fifth span, semitone span, and pitch contour) taken from previous studies of melody perception were also correlated with tonal structure ratings. None of the temporal measures correlated as strongly or as consistently with tonal structure ratings as did the MKC, nor did combining them with the MKC improve prediction of tonal structure ratings. In Experiment 2, the MKC did not correlate with recognition memory of melodies. However, melodies with very low MKC values were recognized less accurately than melodies with very high MKC values. Although it does not incorporate temporal, rhythmic, or harmonic factors that may influence perceived tonal structure, the MKC can be interpreted as a measure of tonal structure, at least for brief melodies.
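As a concrete illustration of how the MKC described above can be computed, here is a minimal Python sketch. It assumes the published Krumhansl–Kessler (1982) probe-tone profile values and rotates them to obtain all 24 major and minor key profiles; the function name mkc and the example pitch-class distributions are hypothetical, and in the full Krumhansl–Schmuckler algorithm the input distribution is typically weighted by total tone duration rather than raw note counts.

    import numpy as np

    # Krumhansl & Kessler (1982) probe-tone profiles for C major and C minor;
    # profiles for the other 23 keys are obtained by rotation.
    MAJOR = np.array([6.35, 2.23, 3.48, 2.33, 4.38, 4.09,
                      2.52, 5.19, 2.39, 3.66, 2.29, 2.88])
    MINOR = np.array([6.33, 2.68, 3.52, 5.38, 2.60, 3.53,
                      2.54, 4.75, 3.98, 2.69, 3.34, 3.17])

    def mkc(pc_distribution):
        """Maximum key-profile correlation for a 12-element pitch-class
        distribution (index 0 = C, 1 = C#, ..., 11 = B)."""
        d = np.asarray(pc_distribution, dtype=float)
        correlations = []
        for tonic in range(12):
            for profile in (MAJOR, MINOR):
                r = np.corrcoef(d, np.roll(profile, tonic))[0, 1]
                correlations.append(r)
        return max(correlations)

    # Usage with hypothetical pitch-class counts: a melody concentrated on the
    # C-major scale and triad yields a high MKC, whereas an even, whole-tone
    # weighted (non-diatonic) distribution yields a much lower one.
    tonal_melody = [5, 0, 2, 0, 3, 2, 0, 4, 0, 2, 0, 1]
    whole_tone_melody = [2, 1, 2, 1, 2, 1, 2, 1, 2, 1, 2, 1]
    print(round(mkc(tonal_melody), 2))       # high, close to 1
    print(round(mkc(whole_tone_melody), 2))  # much lower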

20.
Results were combined from five experiments conducted by Cook in 1998, with 728 participants who listened to one male voice and one female voice each saying a sentence and then attempted to recognise the voices a week later from line-ups of six voices. While the 352 male listeners did not differ significantly in recognising female and male voices (38% vs. 41% correct), the 376 female listeners were significantly more likely to recognise female than male voices (51% vs. 43% correct). There was no evidence for individual differences in voice recognition, in that listeners who recognised the male voice were no more likely to recognise the female voice than those who did not.
