Similar Literature
20 similar documents found (search time: 46 ms)
1.
The goal of a series of listening tests was to better isolate the principal dimensions of timbre, using a wide range of timbres and converging psychophysical techniques. Expert musicians and nonmusicians rated the timbral similarity of three sets of pitched and percussive instruments. Multidimensional scaling analyses indicated that both centroid and rise time comprise the principal acoustic factors across all stimulus sets and that musicians and nonmusicians did not differ significantly in their weighting of these factors. Clustering analyses revealed that participants also categorized percussive and, to a much lesser extent, pitched timbres according to underlying physical-acoustic commonalities. The findings demonstrate that spectral centroid and rise time represent principal perceptual dimensions of timbre, independent of musical training, but that the tendency to group timbres according to source properties increases with acoustic complexity.
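The two acoustic descriptors this abstract singles out are simple to compute. The sketch below shows one common formulation; the partial frequencies, amplitude weights, and 10%/90% rise-time thresholds are illustrative assumptions, not values from the study.

```python
# Minimal sketches of the two timbre descriptors: spectral centroid
# and rise time. All numbers are illustrative, not from the study.

def spectral_centroid(freqs, amps):
    """Amplitude-weighted mean frequency of a spectrum (Hz)."""
    total = sum(amps)
    if total == 0:
        raise ValueError("empty spectrum")
    return sum(f * a for f, a in zip(freqs, amps)) / total

def rise_time(envelope, sample_rate, lo=0.1, hi=0.9):
    """Seconds for the amplitude envelope to climb from lo to hi of its
    peak; the 10%-90% thresholds are a common but arbitrary choice."""
    peak = max(envelope)
    i_lo = next(i for i, a in enumerate(envelope) if a >= lo * peak)
    i_hi = next(i for i, a in enumerate(envelope) if a >= hi * peak)
    return (i_hi - i_lo) / sample_rate

# A spectrum weighted toward low partials ("dull") has a lower
# centroid than a flat one over the same partials ("brighter").
partials = [220, 440, 660, 880]
flat = spectral_centroid(partials, [1, 1, 1, 1])   # 550.0
dull = spectral_centroid(partials, [4, 2, 1, 1])   # 412.5
```

A slowly swelling bowed note thus gets a long rise time, while a struck percussive note gets a short one, which is why rise time separates these instrument families.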

2.
The skill of recognizing musical structures
In three experiments, musicians and nonmusicians were compared in their ability to discriminate musical chords. Pairs of chords sharing all notes in common or having different notes were played in succession. Some pairs of chords differed in timbre independent of their musical structures because they were played on different instruments. Musicians outperformed nonmusicians only in recognizing the same chord played on different instruments. Both groups could discriminate between instrument timbres, although musicians did slightly better than nonmusicians. In contrast, with chord structures not conforming to the rules of tonal harmony, musicians and nonmusicians performed equally poorly in recognizing identical chords played on different instruments. Signal detection analysis showed that musicians and nonmusicians set similar criteria for these judgments. Musicians' superiority reflects greater sensitivity to familiar diatonic chords. These results are taken as evidence that musicians develop perceptual and cognitive skills specific to the lawful musical structures encountered in their culture's music. Nonmusicians who lack this knowledge based their judgments on the acoustical properties of the chords.
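The signal detection analysis mentioned here separates sensitivity (d′) from response criterion (c). A minimal sketch of the standard equal-variance computation follows; the hit and false-alarm rates are made up for illustration, and rates of exactly 0 or 1 would need a correction before use.

```python
from statistics import NormalDist

# Standard equal-variance signal detection measures, as typically
# computed for a same/different judgment task. Example rates are
# illustrative, not data from the study.

def dprime_and_criterion(hit_rate, fa_rate):
    """Return sensitivity d' and criterion c from hit/false-alarm rates.
    Rates must lie strictly between 0 and 1 (apply a correction first
    for perfect scores)."""
    z = NormalDist().inv_cdf  # inverse of the standard normal CDF
    d_prime = z(hit_rate) - z(fa_rate)             # sensitivity
    criterion = -0.5 * (z(hit_rate) + z(fa_rate))  # response bias
    return d_prime, criterion

d, c = dprime_and_criterion(0.84, 0.16)  # d' near 2, c near 0 (unbiased)
```

With symmetric hit and false-alarm rates like these, c comes out at zero: the listener is unbiased, and any group difference shows up in d′ alone, which is exactly the pattern the abstract reports (similar criteria, different sensitivity).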

3.
Neurocognitive studies have shown that extensive musical training enhances P3a and P3b event-related potentials for infrequent target sounds, which reflects stronger attention switching and stimulus evaluation in musicians than in nonmusicians. However, it is unknown whether the short-term plasticity of P3a and P3b responses is also enhanced in musicians. We compared the short-term plasticity of P3a and P3b responses to infrequent target sounds in musicians and nonmusicians during auditory perceptual learning tasks. Target sounds, deviating in location, pitch, and duration with three difficulty levels, were interspersed among frequently presented standard sounds in an oddball paradigm. We found that during passive exposure to sounds, musicians had habituation of the P3a, while nonmusicians showed enhancement of the P3a between blocks. Between active tasks, P3b amplitudes for duration deviants were reduced (habituated) in musicians only, and showed a more posterior scalp topography for habituation when compared to P3bs of nonmusicians. In both groups, the P3a and P3b latencies were shortened for deviating sounds. Also, musicians were better than nonmusicians at discriminating target deviants. Regardless of musical training, better discrimination was associated with higher working memory capacity. We concluded that music training enhances short-term P3a/P3b plasticity, indicating training-induced changes in attentional skills.
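An oddball paradigm of the kind described interleaves rare deviants among frequent standards. The sketch below generates such a trial sequence; the trial count, deviant probability, and the no-consecutive-deviants constraint are common conventions assumed for illustration, not the study's actual design parameters.

```python
import random

# Illustrative oddball trial sequence: frequent "standard" sounds with
# occasional deviants in location, pitch, or duration. The constraint
# that deviants never occur back to back is common in ERP designs but
# is an assumption here, as are all the numeric parameters.

def oddball_sequence(n_trials=400, p_deviant=0.15,
                     deviant_types=("location", "pitch", "duration"),
                     seed=0):
    rng = random.Random(seed)
    seq, prev_was_deviant = [], False
    for _ in range(n_trials):
        if not prev_was_deviant and rng.random() < p_deviant:
            seq.append(rng.choice(deviant_types))
            prev_was_deviant = True
        else:
            seq.append("standard")
            prev_was_deviant = False
    return seq

seq = oddball_sequence()
```

Keeping deviants rare and non-adjacent is what makes each one an attention-capturing event, which is the precondition for eliciting the P3a/P3b responses the study measures.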

4.
An Auditory Ambiguity Test (AAT) was taken twice by nonmusicians, musical amateurs, and professional musicians. The AAT comprised different tone pairs, presented in both within-pair orders, in which overtone spectra rising in pitch were associated with missing fundamental frequencies (F0) falling in pitch, and vice versa. The F0 interval ranged from 2 to 9 semitones. The participants were instructed to decide whether the perceived pitch went up or down; no information was provided on the ambiguity of the stimuli. The majority of professionals classified the pitch changes according to F0, even at the smallest interval. By contrast, most nonmusicians classified according to the overtone spectra, except in the case of the largest interval. Amateurs ranged in between. A plausible explanation for the systematic group differences is that musical practice systematically shifted the perceptual focus from spectral toward missing-F0 pitch, although alternative explanations such as different genetic dispositions of musicians and nonmusicians cannot be ruled out.
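The ambiguous stimuli pit two pitch cues against each other: the missing fundamental falls while the overtone spectrum rises. A sketch of that stimulus logic, with the standard semitone-interval formula, follows; the specific frequencies and harmonic orders are illustrative assumptions, not the AAT's actual stimuli.

```python
import math

# Sketch of an ambiguous tone pair: interval in semitones between two
# F0s, and partials chosen so the missing fundamental falls while the
# spectrum rises. All frequencies and harmonic orders are illustrative.

def semitones(f1, f2):
    """Signed musical interval from f1 to f2 in equal-tempered semitones."""
    return 12 * math.log2(f2 / f1)

def harmonics(f0, orders):
    """Partials of a missing-F0 tone: selected harmonics of f0,
    omitting f0 itself."""
    return [f0 * k for k in orders]

# Tone A: F0 = 200 Hz, harmonics 3-5; tone B: F0 falls to 150 Hz, but
# its harmonics 5-7 start higher, so the overtone spectrum rises.
tone_a = harmonics(200, range(3, 6))  # [600, 800, 1000]
tone_b = harmonics(150, range(5, 8))  # [750, 900, 1050]
interval = semitones(200, 150)        # about -5 semitones: F0 falls
```

A listener tracking F0 hears this pair go down by about five semitones; a listener tracking the spectrum hears it go up, which is precisely the ambiguity the test exploits.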

5.
Hand skill asymmetry on two handedness tasks was examined in consistent right-handed musicians and nonmusicians as well as mixed-handed and consistent left-handed nonmusicians. Musicians, although demonstrating right-hand superiority, revealed a lesser degree of hand skill asymmetry than consistent right-handed nonmusicians. Increased left-hand skill in musicians accounted for their reduced asymmetry. Musicians predominantly playing keyboard instruments demonstrated better tapping performance than musicians playing predominantly string instruments, although they did not differ with respect to hand skill asymmetry. Since the diminished tapping asymmetry in musicians was related to early commencement but not duration of musical training, results are interpreted as an adaptation process due to performance requirements interacting with cerebral maturation during childhood.

6.
Singing is a cultural universal and an important part of modern society, yet many people fail to sing in tune. Many possible causes have been posited to explain poor singing abilities; foremost among these are poor perceptual ability, poor motor control, and sensorimotor mapping errors. To help discriminate between these causes of poor singing, we conducted 5 experiments testing musicians and nonmusicians in pitch matching and judgment tasks. Experiment 1 introduces a new instrument called a slider, on which participants can match pitches without using their voice. Pitch matching on the slider can be directly compared with vocal pitch matching, and results showed that both musicians and nonmusicians were more accurate using the slider than their voices to match target pitches, arguing against a perceptual explanation of singing deficits. Experiment 2 added a self-matching condition and showed that nonmusicians were better at matching their own voice than a synthesized voice timbre, but were still not as accurate as on the slider. This suggests a timbral translation type of mapping error. Experiments 3 and 4 demonstrated that singers do not improve over multiple sung responses, or with the aid of a visual representation of pitch. Experiment 5 showed that listeners were more accurate at perceiving the pitch of the synthesized tones than actual voice tones. The pattern of results across experiments demonstrates multiple possible causes of poor singing, and attributes most of the problem to poor motor control and timbral-translation errors, rather than a purely perceptual deficit, as other studies have suggested.

7.
Three experiments with musicians and nonmusicians (N=338) explored variations of Deutsch's musical scale illusion. Conditions under which the illusion occurs were elucidated and data obtained which supported Bregman's suggestion that auditory streaming results from a competition among alternative perceptual organizations. In Experiment 1, a series of studies showed that it is more difficult to induce the scale illusion than might be expected if it is accepted that an illusion will be present for most observers despite minor changes in stimuli and experimental conditions. The stimulus sequence seems better described as an ambiguous figure. Having discovered conditions under which the scale illusion could be reliably induced, Experiments 2 and 3 manipulated additional properties of the stimulus (timbre, loudness, and tune) to provide cues to streaming other than pitch and location. The data showed that streaming of this sequence can be altered by these properties, supporting the notion of a general parsing mechanism which follows general gestalt principles and allows streaming by many stimulus dimensions. Finally, suggestions are made as to how this mechanism might operate.

8.
Numerous studies have shown that musicians outperform nonmusicians on a variety of tasks. Here we provide the first evidence that musicians have superior auditory recognition memory for both musical and nonmusical stimuli, compared to nonmusicians. However, this advantage did not generalize to the visual domain. Previously, we showed that auditory recognition memory is inferior to visual recognition memory. Would this be true even for trained musicians? We compared auditory and visual memory in musicians and nonmusicians using familiar music, spoken English, and visual objects. For both groups, memory for the auditory stimuli was inferior to memory for the visual objects. Thus, although considerable musical training is associated with better musical and nonmusical auditory memory, it does not increase the ability to remember sounds to the levels found with visual stimuli. This suggests a fundamental capacity difference between auditory and visual recognition memory, with a persistent advantage for the visual domain.

9.
This study examines the effect of musical experience and family handedness background on the categorization of musical intervals (two-note chords). Right-handed subjects, who were divided into four groups on the basis of musical training and presence (or absence) of left-handed family members, categorized musical intervals which were monaurally presented to left or right ear. The results, based on consistency and discreteness of categorization, showed: (1) Musicians' performance is superior to nonmusicians'; (2) musicians and nonmusicians differ significantly on their ear of preference; (3) family handedness background significantly affects ear of preference among musicians but not among nonmusicians.

10.
A matching paradigm was used to evaluate the influence of the spectral characteristics (number, relative height, and density of harmonics) on the perceptibility of the missing fundamental. Fifty-eight musicians and 58 nonmusicians were instructed to adjust mistuned sinusoids to the subjectively perceived fundamental pitches of corresponding overtone spectra. Analyses of variance were used to compare the average of absolute and relative deviations of the tunings from the highest common divisors of the complex tones. The results indicate that musical experience is the most influential single factor determining the assessment of fundamental pitch. Nevertheless, all spectral parameters significantly affect tuning performance. Systematic relative deviations (stretching/compression effects) were observed for all considered variables. An increase of the optimum subjective distance between an overtone spectrum and its corresponding fundamental was characteristic of musicians and unambiguous spectra, whereas the compression effect was typical of nonmusicians and complex tones containing spectral gaps.
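The "highest common divisor of the complex tones" used as the reference fundamental here has a direct computational analogue when partial frequencies are integer Hz values. A minimal sketch, with illustrative frequencies (not the study's stimuli):

```python
from functools import reduce
from math import gcd

# Sketch: the highest common divisor of a complex tone's partials,
# taken over integer frequencies in Hz, gives the (possibly missing)
# fundamental that listeners were asked to match. Frequencies are
# illustrative only; real stimuli would need rational approximation.

def fundamental(partials_hz):
    """Greatest common divisor of integer partial frequencies (Hz)."""
    return reduce(gcd, partials_hz)

# A spectrum with a gap (no 220 Hz component, and 660 Hz missing too)
# still implies F0 = 220 Hz.
f0 = fundamental([440, 880, 1100])  # 220
```

This is also why spectral gaps matter in the study: the sparser the harmonic series, the weaker the evidence each partial contributes toward that common divisor, making the matched fundamental more prone to systematic deviation.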

11.
Two experiments demonstrated the way in which musicians and nonmusicians process realistic music encountered for the first time. A set of tunes whose members were related to each other by a number of specific musical relationships was constructed. In Experiment 1, subjects gave similarity judgments of all pairs of tunes, which were analyzed by the ADDTREE clustering program. Musicians and nonmusicians gave essentially equivalent results: Tunes with different rhythms were rated as being very dissimilar, whereas tunes identical except for being in a major versus a minor mode were rated as being highly similar. In Experiment 2, subjects learned to identify the tunes, and their errors formed a confusion matrix. The matrix was submitted to a clustering analysis. Results from the two experiments corresponded better for the nonmusicians than for the musicians. Musicians presumably exceed nonmusicians in the ability to categorize music in multiple ways, but even nonmusicians extract considerable information from newly heard music.

12.
The aim of this study was to identify the psycho-musical factors that govern time evaluation in Western music from baroque, classic, romantic, and modern repertoires. The excerpts were previously found to represent variability in musical properties and to induce four main categories of emotions. Forty-eight participants (musicians and nonmusicians) freely listened to 16 musical excerpts (lasting 20 sec. each) and grouped those that seemed to have the same duration. Then, participants associated each group of excerpts with one of a set of sine wave tones varying in duration from 16 to 24 sec. Multidimensional scaling analysis generated a two-dimensional solution for these time judgments. Musical excerpts with high arousal produced an overestimation of time, and affective valence had little influence on time perception. The duration was also overestimated when tempo and loudness were higher, and to a lesser extent, timbre density. In contrast, musical tension had little influence.

13.
Several studies have demonstrated that faces are processed differently from other types of objects, implicating a special role that faces have within the human visual system. However, other studies have suggested that faces may be special only in that they constitute a highly familiar category of visual objects with which most humans have expertise. In this study, we tested a group of expert musicians with a musical instrument classification task during which irrelevant images of musical instruments were presented as visual distractors under varying conditions of perceptual load. Unlike nonmusicians (who had been tested in Lavie, Ro, & Russell, 2003, using the same paradigm as in the present study), the musicians processed these irrelevant images of musical instruments even under conditions of high perceptual load. These results suggest that musical instruments are processed automatically and without capacity limits in subjects with musical expertise and implicate a specialized processing mechanism for objects of high familiarity.

14.
Intensive training and the acquisition of expertise are known to bring about structural changes in the brain. Musical training is a particularly interesting model. Previous studies have reported structural brain modifications in the auditory, motor and visuospatial areas of musicians compared with nonmusicians. The main goal of the present study was to go one step further, by exploring the dynamics of those structural brain changes related to musical experience. To this end, we conducted a regression study on 44 nonmusicians and amateur musicians with 0–26 years of musical practice on a variety of instruments. We sought first to highlight brain areas whose grey matter increased with the duration of practice, and secondly to distinguish (via an ANOVA analysis) brain areas that undergo grey matter changes after only a few years of musical practice from those that require longer practice before they exhibit changes. Results revealed that musical training is associated with greater grey matter volume in several brain areas. Changes appear gradually in the left hippocampus and right middle and superior frontal regions, and later also include the right insula and supplementary motor area, and the left superior temporal and posterior cingulate areas. Given that all participants had the same age and that we controlled for age and education level, these results cannot be ascribed to normal brain maturation. Instead, they support the notion that musical training could induce dynamic structural changes.

15.
It has been proposed that time, space, and numbers may be computed by a common magnitude system. Even though several behavioural and neuroanatomical studies have focused on this topic, the debate is still open. To date, nobody has used the individual differences for one of these domains to investigate the existence of a shared cognitive system. Musicians are known to outperform nonmusicians in temporal discrimination tasks. We therefore observed professional musicians and nonmusicians undertaking three different tasks: temporal (participants were required to estimate which of two tones lasted longer), spatial (which line was longer), and numerical discrimination (which group of dots was more numerous). If time, space, and numbers are processed by the same mechanism, it is expected that musicians will have a greater ability, even in nontemporal dimensions. As expected, musicians were more accurate with regard to temporal discrimination. They also gave better performances in both the spatial and the numerical tasks, but only outside the subitizing range. Our data are in accordance with the existence of a common magnitude system. We suggest, however, that this mechanism may not involve the whole numerical range.

16.
Arousal and valence (pleasantness) are considered primary dimensions of emotion. However, the degree to which these dimensions interact in emotional processing across sensory modalities is poorly understood. We addressed this issue by applying a crossmodal priming paradigm in which auditory primes (Romantic piano solo music) varying in arousal and/or pleasantness were sequentially paired with visual targets (IAPS pictures). In Experiment 1, the emotion spaces of 120 primes and 120 targets were explored separately in addition to the effects of musical training and gender. Thirty-two participants rated their felt pleasantness and arousal in response to primes and targets on equivalent rating scales as well as their familiarity with the stimuli. Musical training was associated with elevated familiarity ratings for high-arousing music and a trend for elevated arousal ratings, especially in response to unpleasant musical stimuli. Males reported higher arousal than females for pleasant visual stimuli. In Experiment 2, 40 nonmusicians rated their felt arousal and pleasantness in response to 20 visual targets after listening to 80 musical primes. Arousal associated with the musical primes modulated felt arousal in response to visual targets, yet no such transfer of pleasantness was observed between the two modalities. Experiment 3 sought to rule out the possibility of any order effect of the subjective ratings, and responses of 14 nonmusicians replicated results of Experiment 2. This study demonstrates the effectiveness of the crossmodal priming paradigm in basic research on musical emotions.

17.
This study evaluated the relationship between primitive and scheme-driven grouping (A. S. Bregman, 1990) by comparing the ability of different listeners to detect single note changes in 3-voice musical compositions. Primitive grouping was manipulated by the use of 2 distinctly different compositional styles (homophony and polyphony). The effects of scheme-driven processes were tested by comparing performance of 2 groups of listeners (musicians and nonmusicians) and by varying task demands (integrative and selective listening). Following previous studies, which had tested only musically trained participants, several variables were manipulated within each compositional style. The results indicated that, although musicians demonstrated a higher sensitivity to changes than did nonmusicians, the 2 groups exhibited similar patterns of sensitivity under a variety of conditions.

18.
We study short-term recognition of timbre using familiar recorded tones from acoustic instruments and unfamiliar transformed tones that do not readily evoke sound-source categories. Participants indicated whether the timbre of a probe sound matched one of three previously presented sounds (item recognition). In Exp. 1, musicians recognised familiar acoustic sounds better than unfamiliar synthetic sounds, and this advantage was particularly large in the medial serial position. There was a strong correlation between correct rejection rate and the mean perceptual dissimilarity of the probe to the tones from the sequence. Exp. 2 compared musicians' and non-musicians' performance with concurrent articulatory suppression, visual interference, and with a silent control condition. Both suppression tasks disrupted performance by a similar margin, regardless of musical training of participants or type of sounds. Our results suggest that familiarity with sound source categories and attention play important roles in short-term memory for timbre, which rules out accounts solely based on sensory persistence.

19.
Language-music comparative studies have highlighted the potential for shared resources or neural overlap in auditory short-term memory. However, there is a lack of behavioral methodologies for comparing verbal and musical serial recall. We developed a visual grid response that allowed both musicians and nonmusicians to perform serial recall of letter and tone sequences. The new method was used to compare the phonological similarity effect with the impact of an operationalized musical equivalent: pitch proximity. Over the course of three experiments, we found that short-term memory for tones had several similarities to verbal memory, including limited capacity and a significant effect of pitch proximity in nonmusicians. Despite being vulnerable to phonological similarity when recalling letters, however, musicians showed no effect of pitch proximity, a result that we suggest might reflect strategy differences. Overall, the findings support a limited degree of correspondence in the way that verbal and musical sounds are processed in auditory short-term memory.

20.
Current research has suggested that musical stimuli are processed in the right hemisphere except in musicians, in whom there is an increased involvement of the left hemisphere. The present study hypothesized that the more musical training persons receive, the more they will rely on an analytic/left-hemispheric processing strategy. The subjects were 10 faculty and 10 student musicians, and 10 faculty and 10 student nonmusicians. All subjects listened to a series of melodies (some recurring and some not) and excerpts (some actual and some not) in one ear and, after a rest, to a different series of melodies in the other ear. The task was to identify recurring vs. nonrecurring melodies and actual vs. nonactual excerpts. For student musicians, there was a right-ear/left-hemispheric advantage for melody recognition, while for student nonmusicians, the situation was the reverse. Neither faculty group showed any ear preference. There were no significant differences for excerpt recognition. Two possible explanations of the faculty performance were discussed in terms of physical maturation and a functionally more integrated hemispheric approach to the task.


Copyright © Beijing Qinyun Technology Development Co., Ltd.  京ICP备09084417号