Similar References
20 similar references found
1.
Two experiments demonstrate positional variation in the relative detectability of, respectively, local temporal and dynamic perturbations in an isochronous and isodynamic sequence of melody tones, played on a computer-controlled piano. This variation may reflect listeners' expectations of expressive performance microstructure (the top-down hypothesis), or it may be due to psychoacoustic (pitch-related) stimulus factors (the bottom-up hypothesis). Percent correct scores for increments in tone duration correlated significantly with the average timing profile of pianists' expressive performances of the music, as predicted specifically by the top-down hypothesis. For intensity increments, the analogous perception-performance correlation was weak, and the bottom-up factors of relative pitch height and/or direction of pitch change accounted for some of the perceptual variation. Subjects' musical training increased overall detection accuracy but did not affect the positional variation in accuracy scores in either experiment. These results are consistent with the top-down hypothesis for timing, but they favor the bottom-up hypothesis for dynamics. The perception-performance correlation for timing may also be viewed as being due to complex stimulus properties, such as tonal motion and tension/relaxation, that influence performers and listeners in similar ways.

2.
While there is no universally accepted cause of psychopathy, there are basic biological patterns of brain dysfunction observed in individuals who display psychopathic tendencies. These individuals show significant impairment in specific regions of the brain, particularly the orbital frontal cortex (OFC). Such abnormalities exist in the brain areas most involved in impulse control and behavior inhibition. There are also significant environmental factors that the majority of these individuals have in common. For example, a strong correlation exists between attachment disorder and antisocial personality disorder (ASPD). Finally, the differences between ASPD, psychopathy, and sociopathy are considered. While these terms are often used interchangeably, there are clear differences between these psychopathologies.

3.
Attention-deficit/hyperactivity disorder (ADHD) is a developmental disorder that affects an estimated 3% to 5% of children. Despite estimates that ADHD persists in 30% to 70% of adults who had the disorder in childhood, ADHD in adulthood remains controversial. This report summarizes current thinking on the diagnosis and etiology of adult ADHD. Most theories posit that ADHD is related to anomalies in frontal lobe function and dopaminergic transmission. However, there is debate as to whether ADHD is a unitary disorder with different manifestations, a syndrome, or multiple disorders. The Diagnostic and Statistical Manual of Mental Disorders, Fourth Edition, classifies ADHD into inattention, hyperactivity-impulsivity, and combined subtypes. Although problems with cognition are core ADHD symptoms, self-reporting has not been a reliable predictor of neuropsychological test performance. Nevertheless, we suggest that a performance-based diagnosis, including empirically derived, age-sensitive neuropsychological tests, provides the best hope of dissociating ADHD from psychiatric disorders with similar symptoms. We also describe the promise of new neuroimaging technologies, such as functional magnetic resonance imaging, in elucidating the pathophysiology of ADHD and similar psychiatric disorders.

4.
Patel AD, Daniele JR. Cognition, 2003, 87(1): B35-B45
Musicologists and linguists have often suggested that the prosody of a culture's spoken language can influence the structure of its instrumental music. However, empirical data supporting this idea have been lacking, partly because of the difficulty of developing and applying comparable quantitative measures to melody and rhythm in speech and music. This study uses a recently developed measure of speech rhythm to compare rhythmic patterns in English and French speech and in English and French classical music. We find that English and French musical themes differ significantly on this measure of rhythm, which also differentiates the rhythm of spoken English and French. Thus, there is an empirical basis for the claim that spoken prosody leaves an imprint on the music of a culture.
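
The rhythm measure in question is the normalized Pairwise Variability Index (nPVI), which quantifies the durational contrast between successive events (vocalic intervals in speech, notes in a musical theme). A minimal sketch in Python; the example durations below are hypothetical, not values from the study:

    def npvi(durations):
        """normalized Pairwise Variability Index of a duration sequence.

        Higher values indicate greater contrast between successive
        durations; a perfectly isochronous sequence scores 0.
        """
        if len(durations) < 2:
            raise ValueError("nPVI needs at least two durations")
        pairs = zip(durations[:-1], durations[1:])
        terms = [abs(a - b) / ((a + b) / 2.0) for a, b in pairs]
        return 100.0 * sum(terms) / len(terms)

    print(npvi([1.0, 0.5, 0.5, 1.0]))  # long-short alternation -> 44.4
    print(npvi([1.0, 1.0, 1.0, 1.0]))  # steady pulse -> 0.0

Patel and Daniele found higher nPVI values for English than for French musical themes, mirroring the higher durational variability of spoken English relative to spoken French.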

5.
Individuals tend to rate themselves more positively than they rate strangers or acquaintances (a self-enhancement effect). But such self-enhancement is potentially detrimental to one's intimate relationships. We hypothesized that higher relationship quality would predict (1) partner-enhancement (i.e., rating the partner more positively than the self) and (2) stronger feelings of being understood and validated (FUV), and (3) that partner-enhancement would add to relationship quality's prediction of FUV. These hypotheses were tested in cross-sex friendships (N = 92) and dating relationships (N = 90) among university students and in a married, non-university sample (N = 94). All hypotheses were supported in romantic relationships. For cross-sex friendships, regardless of relationship quality, individuals partner-enhanced on negative traits but neither self- nor partner-enhanced on positive traits. Finally, relationship quality predicted partner-perceptions more strongly than self-perceptions.

6.
Infants can detect information specifying affect in infant- and adult-directed speech, in familiar and unfamiliar facial expressions, and in point-light displays of facial expressions. We examined 3-, 5-, 7-, and 9-month-olds' discrimination of musical excerpts judged by adults and preschoolers as happy and sad. In Experiment 1, using an infant-controlled habituation procedure, 3-, 5-, 7-, and 9-month-olds heard three musical excerpts that were rated as either happy or sad. Following habituation, infants were presented with two new musical excerpts from the other affect group. Nine-month-olds discriminated the musical excerpts rated as affectively different. Five- and seven-month-olds discriminated the happy and sad excerpts when they were habituated to sad excerpts but not when they were habituated to happy excerpts. Three-month-olds showed no evidence of discriminating the sad and happy excerpts. In Experiment 2, 5-, 7-, and 9-month-olds were presented with two new musical excerpts from the same affective group as the habituation excerpts. At no age did infants discriminate these novel, yet affectively similar, musical excerpts. In Experiment 3, we examined 5-, 7-, and 9-month-olds' discrimination of individual excerpts rated as affectively similar. Only the 9-month-olds discriminated the affectively similar individual excerpts. Results are discussed in terms of infants' ability to discriminate affect across a variety of events and its relevance for later social-communicative development.

7.
8.
9.
Fitch WT. Cognition, 2006, 100(1): 173-215
Studies of the biology of music (as of language) are highly interdisciplinary and demand the integration of diverse strands of evidence. In this paper, I present a comparative perspective on the biology and evolution of music, stressing the value of comparisons both with human language and with those animal communication systems traditionally termed "song". A comparison of the "design features" of music with those of language reveals substantial overlap, along with some important differences. Most of these differences appear to stem from semantic, rather than structural, factors, suggesting a shared formal core of music and language. I next review various animal communication systems that appear related to human music, either by analogy (bird and whale "song") or potential homology (great ape bimanual drumming). A crucial comparative distinction is between learned, complex signals (like language, music, and birdsong) and unlearned signals (like laughter, ape calls, or bird calls). While human vocalizations clearly build upon an acoustic and emotional foundation shared with other primates and mammals, vocal learning has evolved independently in our species since our divergence from chimpanzees. The convergent evolution of vocal learning in other species offers a powerful window into psychological and neural constraints influencing the evolution of complex signaling systems (including both song and speech), while ape drumming presents a fascinating potential homology with human instrumental music. I next discuss the archeological data relevant to music evolution, concluding on the basis of prehistoric bone flutes that instrumental music is at least 40,000 years old, and perhaps much older. I end with a brief review of adaptive functions proposed for music, concluding that no one selective force (e.g., sexual selection) is adequate to explain all aspects of human music. I suggest that questions about the past function of music are unlikely to be answered definitively and are thus a poor choice as a research focus for biomusicology. In contrast, a comparative approach to music promises rich dividends for our future understanding of the biology and evolution of music.

10.
The aim of this study was to determine if two dimensions of song, the phonological part of lyrics and the melodic part of tunes, are processed in an independent or integrated way. In a series of five experiments, musically untrained participants classified bi-syllabic nonwords sung on two-tone melodic intervals. Their response had to be based on pitch contour, on nonword identity, or on the combination of pitch and nonword. When participants had to ignore irrelevant variations of the non-attended dimension, patterns of interference and facilitation allowed us to specify the processing interactions between dimensions. Results showed that consonants are processed more independently from melodic information than vowels are (Experiments 1-4). This difference between consonants and vowels was neither related to the sonority of the phoneme (Experiment 3), nor to the acoustical correlates between vowel quality and pitch height (Experiment 5). The implication of these results for our understanding of the functional relationships between musical and linguistic systems is discussed in light of the different evolutionary origins and linguistic functions of consonants and vowels.

11.
Rhythm constancy was investigated in two experiments. In Experiment 1, the first rhythm was presented at one tempo, the second rhythm was presented at a different tempo, and subjects judged whether the relative timing structures were identical (i.e., was the first rhythm merely sped up or slowed down to generate the second rhythm?). For the nonmetric rhythms used here, subjects perceived the rhythm in terms of the figural grouping of the tones, and rhythm constancy broke down between slower and faster tempos. In Experiment 2, the first rhythm was presented in tones of one duration; the second rhythm was presented in tones of a different duration; and subjects judged whether the timing structures of the tone onsets were identical (the two rhythms were presented at the same tempo). These results indicated a high degree of constancy; subjects found it easy to discriminate the timing structures. These results confirm that the onset timing is critical to rhythm perception and suggest that rhythm perception at slower rates (2 elements/sec) differs from rhythm perception at faster rates (3–4 elements/sec).
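
The Experiment 1 judgment can be stated concretely: two rhythms share a relative timing structure exactly when one sequence of inter-onset intervals (IOIs) equals the other multiplied by a single tempo factor. A minimal sketch, assuming rhythms are coded as lists of positive IOIs in milliseconds; the values and tolerance are illustrative choices, not stimuli from the study:

    def same_relative_timing(iois_a, iois_b, tol=1e-6):
        """True if iois_b equals iois_a rescaled by one tempo factor."""
        if len(iois_a) != len(iois_b) or not iois_a:
            return False
        factor = iois_b[0] / iois_a[0]  # candidate tempo change
        return all(abs(b - a * factor) <= tol * b
                   for a, b in zip(iois_a, iois_b))

    slow = [600, 300, 300, 600]              # IOIs in ms at the slow tempo
    fast = [400, 200, 200, 400]              # the same rhythm sped up 1.5x
    print(same_relative_timing(slow, fast))  # True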

12.
This study examined the relationship between preferences for learning modality and the learning and short-term retention of musical rhythm patterns. Fifty-five third graders completed the Swassing-Barbe Modality Index and were also presented with two-measure rhythm patterns through their visual, auditory, and kinesthetic modalities. Analysis indicated that children who, on the modality index, preferred one modality over the others tended to prefer that same modality when learning simple musical rhythms.

13.
Two experiments were performed to examine musicians' and nonmusicians' electroencephalographic (EEG) responses to changes in major dimensions of classical music (tempo, melody, and key). In Exp. 1, the EEGs of 12 nonmusicians and 12 musicians during melody and tempo changes in classical music showed more alpha desynchronization in the left hemisphere (F3) than in the right for changes in tempo. For melody, nonmusicians showed more right-sided (F4) than left-sided activation, whereas musicians showed no left-right differences. In Exp. 2, the EEGs of 18 musicians and 18 nonmusicians after a key change in classical music showed that distant key changes elicited more right frontal (F4) than left alpha desynchronization. Musicians reacted more to key changes than nonmusicians, and instructions to attend to key changes had no significant effect. Classical music, given its well-defined structure, offers a unique set of stimuli for studying the brain. The results support the concept of hierarchical modularity in music processing, which may be automatic.
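
Alpha desynchronization is conventionally read as a drop in 8-12 Hz spectral power at an electrode during an event relative to baseline. A minimal sketch of comparing alpha power at the two frontal sites named above; the sampling rate, band edges, and random placeholder signals are assumptions for illustration, not details from the study:

    import numpy as np
    from scipy.signal import welch

    def alpha_power(signal, fs=250.0, band=(8.0, 12.0)):
        """Mean power spectral density in the alpha band for one channel."""
        freqs, psd = welch(signal, fs=fs, nperseg=int(2 * fs))
        mask = (freqs >= band[0]) & (freqs <= band[1])
        return psd[mask].mean()

    rng = np.random.default_rng(0)
    f3 = rng.standard_normal(5000)  # placeholder for an F3 recording
    f4 = rng.standard_normal(5000)  # placeholder for an F4 recording
    # A lower F3/F4 alpha-power ratio would indicate relatively more
    # left-hemisphere desynchronization, as reported for tempo changes.
    print(alpha_power(f3) / alpha_power(f4))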

14.
Hannon EE. Cognition, 2009, 111(3): 403-409
Recent evidence suggests that the musical rhythm of a particular culture may parallel the speech rhythm of that culture's language (Patel, A. D., & Daniele, J. R. (2003). An empirical comparison of rhythm in language and music. Cognition, 87, B35-B45). The present experiments aimed to determine whether listeners actually perceive such rhythmic differences in a purely musical context (i.e., in instrumental music without words). In Experiment 1a, listeners successfully classified instrumental renditions of French and English songs having highly contrastive rhythmic differences. Experiment 1b replicated this result with the same songs containing rhythmic information only. In Experiments 2a and 2b, listeners successfully classified original and rhythm-only stimuli when language-specific rhythmic differences were less contrastive but more representative of differences found in actual music and speech. These findings indicate that listeners can use rhythmic similarities and differences to classify songs originally composed in two languages having contrasting rhythmic prosody.

15.
Time-shared tasks may conceivably be separable or integral. A case in which the question of separability seems quite relevant is dual-axis tracking. To test the interaction between tracking dimensions, we first studied whether they interfere with each other. Practiced subjects performed tracking on one or two axes, with or without feedback indicators and with or without a requirement to allocate resources unevenly between axes. They also performed with or without a concurrent binary classification of visually presented digits, which appeared within a moving square that served as the target for tracking. Small deficits were found in the performance of both tracking and digit classification when the tasks were performed together. However, the conditions of tracking did not have a discernible effect on either tracking or digit classification. Hence, the introduction of a second tracking axis probably does not have harmful consequences either for tracking itself or for any other task time-shared with tracking. Further studies were conducted to examine whether the absence of an effect of the number of tracking axes is due to their integrality. Ordinary position tracking was paired either with a similar task on the other axis or with a novel sort of tracking in which subjects had to continually match the sizes of moving rectangles. Tasks were paired under both divided-attention and focused-attention instructions. No interference with position tracking was observed even when the types of task on the two axes differed, and no other evidence for the integrality of the homogeneous task pairs was found.

16.
Melodic expectancies among children and adults were examined. In Experiment 1, adults, 11-year-olds, and 8-year-olds rated how well individual test tones continued fragments of melodies. In Experiment 2, 11-, 8-, and 5-year-olds sang continuations to 2-tone stimuli. Response patterns were analyzed using 2 models of melodic expectancy. Despite having fewer predictor variables, the 2-factor model (E. G. Schellenberg, 1997) equaled or surpassed the implication-realization model (E. Narmour, 1990) in predictive accuracy. Listeners of all ages expected the next tone in a melody to be proximate in pitch to the tone heard most recently. Older listeners also expected reversals of pitch direction, specifically for tones that changed direction after a disruption of proximity and for tones that formed symmetric patterns.

17.
Children using cochlear implants (CIs) develop speech perception but have difficulty perceiving complex acoustic signals. Mode and tempo are the two components used to recognize emotion in music. Based on CI limitations, we hypothesized that children using CIs would have impaired perception of mode cues relative to their normal-hearing peers and would rely more heavily on tempo cues to distinguish happy from sad music. Participants were 16 children using CIs (13 right, 3 left; age M = 12.7 years, SD = 2.6) and 16 normal-hearing peers. Participants judged 96 brief piano excerpts from the classical genre as happy or sad in a forced-choice task. Excerpts were presented in random order, either unaltered or with the mode transposed, the tempo altered, or both. When music was presented in its original form, children using CIs discriminated between happy and sad music with accuracy well above chance (87.5%) but significantly below that of children with normal hearing (98%). The CI group relied primarily on tempo cues, whereas normal-hearing children relied more on mode cues. Altering both mode and tempo cues in the same musical excerpt obliterated the cues to emotion for both groups. Children using CIs showed significantly slower response times across all conditions. Children using CIs thus use tempo cues to discriminate happy from sad music, reflecting a very different hearing strategy from that of their normal-hearing peers. Their slower reaction times indicate that they found the task more difficult and support the possibility that they require different strategies than their normal-hearing peers to process emotion in music.

18.
In commonsense experience based on introspection, consciousness is singular: there is only one 'me', and that is the one that is conscious. This means that 'singularity' is a defining aspect of 'consciousness'. However, the three main theories of consciousness (Integrated Information, Global Workspace, and Recurrent Processing theory) are generally not very clear on this issue. These theories have traditionally relied heavily on neuropsychological observations and have interpreted various disorders, such as anosognosia, neglect, and split-brain, as impairments in conscious awareness without any reference to 'the singularity'. In this review, we re-examine the theoretical implications of these impairments in conscious awareness and propose a new way to conceptualize the singularity of consciousness. We argue that the subjective feeling of singularity can coexist with several disunified conscious experiences. Awareness of singularity may only come into existence due to environmental response constraints. That is, perceptual, language, memory, attentional, and motor processes may largely proceed unintegrated in parallel, whereas a sense of unity arises only when organisms need to respond coherently, constrained by the affordances of the environment. Next, we examine psychiatric disorders and psychoactive drugs from this perspective. Finally, we present a first attempt to test this hypothesis with a resting-state imaging experiment in a split-brain patient. The results suggest that there is substantial coherence of activation across the two hemispheres. These data show that a complete lesioning of the corpus callosum does not, in general, alter the resting-state networks of the brain. Thus, we propose that separate systems in the brain generate distributed conscious experiences. The sense of singularity, the experience of a 'Me-ness', emerges in the interaction between the world and response-planning systems, and this leads to coherent activation in the different functional networks across the cortex.

19.
20.
Human newborns discriminate languages from different rhythmic classes, fail to discriminate languages from the same rhythmic class, and fail to discriminate languages when the utterances are played backwards. Recent evidence showing that cotton-top tamarins discriminate Dutch from Japanese, but not when utterances are played backwards, is compatible with the hypothesis that rhythm discrimination is based on a general perceptual mechanism inherited from a primate ancestor. The present study further explores the rhythm hypothesis for language discrimination by testing languages from the same and different rhythmic class. We find that tamarins discriminate Polish from Japanese (different rhythmic classes), fail to discriminate English and Dutch (same rhythmic class), and fail to discriminate backwards utterances from different and same rhythmic classes. These results provide further evidence that language discrimination in tamarins is facilitated by rhythmic differences between languages, and suggest that, in humans, this mechanism is unlikely to have evolved specifically for language.
