Similar documents
A total of 20 similar documents were retrieved (search time: 31 ms)
1.
We have found that mildly mentally retarded adults are impaired in their perception of global stereoscopic forms (Fox & Oross, 1988) in ways that cannot be attributed to peripheral visual deficits or failures to comprehend. To assess the generality of that result, we measured the ability of mentally retarded adults to perceive kinematographic forms. Mentally retarded and nonretarded adults were presented with a two-alternative, forced-choice detection task that required locating a target's spatial position. The discriminability of the forms was varied by systematic reductions in both element density and temporal correlation. We found that, relative to nonretarded adults, mentally retarded adults exhibited large qualitative deficits in their ability to discriminate these kinematographic forms when either density or correlation was reduced. After considering a number of alternative interpretations of these data based on factors such as peripheral visual impairment and a failure to attend, we could find none more compelling than a perceptual interpretation, which posits a deficit within the short-range motion system.
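The stimuli described in this abstract are random-element kinematograms whose discriminability is controlled by element density and by frame-to-frame (temporal) correlation. The Python sketch below shows one common way such a two-frame stimulus can be parameterized; the grid size, target region, displacement, and parameter values are illustrative assumptions rather than the authors' actual stimulus-generation procedure.

```python
import numpy as np

rng = np.random.default_rng(0)

def kinematogram(grid=(64, 64), density=0.10, correlation=0.8,
                 target=(slice(16, 48), slice(28, 36)), shift=2):
    """Two binary frames: the target region is displaced between frames,
    and each element is regenerated at random with probability 1 - correlation."""
    frame1 = rng.random(grid) < density          # frame 1: random elements
    frame2 = frame1.copy()
    region = np.zeros(grid, dtype=bool)
    region[target] = True
    shifted = np.roll(frame1, shift, axis=1)     # coherent horizontal displacement
    frame2[region] = shifted[region]             # move the target's elements
    regen = rng.random(grid) >= correlation      # break temporal correlation
    frame2[regen] = rng.random(grid)[regen] < density
    return frame1, frame2

f1, f2 = kinematogram(density=0.05, correlation=0.5)
print("realized densities:", f1.mean(), f2.mean())
```

Lowering `density` thins out the elements, while lowering `correlation` increases the proportion of elements that are replotted at random between frames, which is the sense in which the abstract's two manipulations degrade the coherently moving form.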

2.
王沛  张蓝心 《心理科学》2013,36(5):1078-1084
Research on the relationship between the neural bases of music and language processing has developed rapidly in recent years and has attracted growing attention. The shared syntactic integration resource hypothesis holds that the syntactic processing of music and of language shares neural resources to a considerable extent. The ERP component ELAN, which reflects syntactic violations in auditory language experiments, is highly similar to the ERAN elicited by violations of musical syntax; the only difference lies in their scalp distributions, with the ERAN resembling a bilaterally symmetric ELAN. Moreover, elicitation of the ERAN does not depend on whether listeners have received musical training, although musicians show larger ERAN amplitudes. Some studies have identified the N400 and N500 as neural correlates of musical semantic processing: the former can be elicited by both musical and linguistic stimuli, whereas the latter is elicited only by the processing of musical meaning. However, whether pitch perception in music and in language shares neural resources remains an open question.

3.
This paper is concerned with the perception of small-scale timing changes in musical sequences. The control and expressive function of these have been studied quite extensively from a production perspective, but not much is known about listeners' ability to detect them. A pilot study and two experiments are reported which investigate the detectability of different amounts of timing change in different sequential positions, different pitch contexts, and against the background of both metronomic and expressive comparisons. The results show that listeners are able to perceive as little as 20 ms lengthening in the context of notes lasting between 100 and 400 ms, and that this threshold appears not to be a function of base duration in this range. Sequential position and pitch structure influence the detectability of timing changes to a limited extent, for which some possible explanations are offered. A case is made for regarding timing in music as both a medium to convey structure and an object in its own right, suggesting that it may be perceptually organized in two different ways — as the consequence of a structural interpretation and as a directly registered quantity.

4.
This study considered a relation between rhythm perception skills and individual differences in phonological awareness and grammar abilities, which are two language skills crucial for academic achievement. Twenty‐five typically developing 6‐year‐old children were given standardized assessments of rhythm perception, phonological awareness, morpho‐syntactic competence, and non‐verbal cognitive ability. Rhythm perception accounted for 48% of the variance in morpho‐syntactic competence after controlling for non‐verbal IQ, socioeconomic status, and prior musical activities. Children with higher phonological awareness scores were better able to discriminate complex rhythms than children with lower scores, but not after controlling for IQ. This study is the first to show a relation between rhythm perception skills and morpho‐syntactic production in children with typical language development. These findings extend the literature showing substantial overlap of neurocognitive resources for processing music and language. A video abstract of this article can be viewed at: http://youtu.be/_lO692qHDNg
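The central statistic here is an increment in explained variance: how much of the variation in morpho-syntactic competence rhythm perception accounts for once non-verbal IQ, socioeconomic status, and prior musical activities are controlled. The sketch below shows, under assumed variable names and simulated data, how such a hierarchical-regression delta R-squared is typically computed; it is an illustration of the analysis type, not the study's actual code or data.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 25  # sample size matching the abstract; scores below are simulated
df = pd.DataFrame({
    "rhythm": rng.normal(size=n),          # rhythm perception score
    "nonverbal_iq": rng.normal(size=n),
    "ses": rng.normal(size=n),
    "music_activity": rng.normal(size=n),
})
df["morphosyntax"] = (0.7 * df["rhythm"] + 0.3 * df["nonverbal_iq"]
                      + rng.normal(scale=0.5, size=n))

# step 1: covariates only; step 2: add rhythm perception
base = smf.ols("morphosyntax ~ nonverbal_iq + ses + music_activity", df).fit()
full = smf.ols("morphosyntax ~ nonverbal_iq + ses + music_activity + rhythm",
               df).fit()

delta_r2 = full.rsquared - base.rsquared   # unique variance due to rhythm
print(f"R2 base = {base.rsquared:.2f}, R2 full = {full.rsquared:.2f}, "
      f"delta R2 = {delta_r2:.2f}")
```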

5.
Aging brings declines in the auditory system and in cognitive function. Older adults show weakened speech comprehension and have difficulty parsing prosodic information. Their perception of linguistic prosody, including stress, intonation, and speech rate, deteriorates, and the processing of emotional prosody is also impaired, with the processing of negative emotional prosody declining especially quickly. Age-related diseases further increase the difficulty of prosodic processing, and prosodic perception shows associations with specific disorders. Future research should examine the prosodic perception and underlying mechanisms of older adults from different language backgrounds, the influence of complex communicative environments, the value of prosodic perception deficits for predicting age-related diseases, and early intervention and rehabilitation for impaired prosodic perception.

6.
The perceptual system for speech is highly organized from early infancy. This organization bootstraps young human learners’ ability to acquire their native speech and language from speech input. Here, we review behavioral and neuroimaging evidence that perceptual systems beyond the auditory modality are also specialized for speech in infancy, and that motor and sensorimotor systems can influence speech perception even in infants too young to produce speech-like vocalizations. These investigations complement existing literature on infant vocal development and on the interplay between speech perception and production systems in adults. We conclude that a multimodal speech and language network is present before speech-like vocalizations emerge.

7.
The present study examined acoustic cue utilisation for perception of vocal emotions. Two sets of vocal-emotional stimuli were presented to 35 German and 30 American listeners: (1) sentences in German spoken with five different vocal emotions; and (2) systematically rate- or pitch-altered versions of the original emotional stimuli. In addition to response frequencies on emotional categories, activity ratings were obtained. For the systematically altered stimuli, slow rate was reliably associated with the “sad” label. In contrast, fast rate was classified as angry, frightened, or neutral. Manipulation of pitch variation was less potent than rate manipulation in influencing vocal emotional category choices. Reduced pitch variation was associated with perception as sad or neutral; greater pitch variation increased frightened, angry, and happy responses. Performance was highly similar for the two samples, although across tasks, German subjects perceived greater variability of activity in the emotional stimuli than did American participants.

8.
Three experiments explored online recognition in a nonspeech domain, using a novel experimental paradigm. Adults learned to associate abstract shapes with particular melodies, and at test they identified a played melody's associated shape. To implicitly measure recognition, visual fixations to the associated shape versus a distractor shape were measured as the melody played. Degree of similarity between associated melodies was varied to assess what types of pitch information adults use in recognition. Fixation and error data suggest that adults naturally recognize music, like language, incrementally, computing matches to representations before melody offset, despite the fact that music, unlike language, provides no pressure to execute recognition rapidly. Further, adults use both absolute and relative pitch information in recognition. The implicit nature of the dependent measure should permit use with a range of populations to evaluate postulated developmental and evolutionary changes in pitch encoding.
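The implicit measure described here is a time course of looks to the target shape versus the distractor while the melody plays. The sketch below shows one simple way a per-sample gaze record can be reduced to binned target-fixation proportions; the sampling rate, trial duration, bin size, and simulated gaze labels are assumptions for illustration, not the study's analysis pipeline.

```python
import numpy as np

rng = np.random.default_rng(2)
sample_hz = 60                       # eye-tracker sampling rate (assumed)
n_samples = 5 * sample_hz            # 5 s of melody (assumed)
# per-sample gaze label: 1 = target shape, 0 = distractor shape
gaze = (rng.random(n_samples) < np.linspace(0.5, 0.9, n_samples)).astype(int)

bin_len = sample_hz // 2             # 500 ms bins
bins = gaze[: n_samples // bin_len * bin_len].reshape(-1, bin_len)
prop_target = bins.mean(axis=1)      # proportion of target looks per bin

for i, p in enumerate(prop_target):
    print(f"{i * 0.5:.1f}-{(i + 1) * 0.5:.1f} s: {p:.2f}")
```

A rise in the proportion of target looks before melody offset is the kind of pattern the abstract describes as evidence for incremental recognition.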

9.
Prosodic attributes of speech, such as intonation, influence our ability to recognize, comprehend, and produce affect, as well as semantic and pragmatic meaning, in vocal utterances. The present study examines associations between auditory perceptual abilities and the perception of prosody, both pragmatic and affective. This association has not been previously examined. Ninety-seven participants (49 female, 48 male) with normal hearing thresholds took part in two experiments, involving both prosody recognition and psychoacoustic tasks. The prosody recognition tasks included a vocal emotion recognition task and a focus perception task requiring recognition of an accented word in a spoken sentence. The psychoacoustic tasks included a task requiring pitch discrimination and three tasks also requiring pitch direction (i.e., high/low, rising/falling, changing/steady pitch). Results demonstrate that psychoacoustic thresholds can predict 31% and 38% of affective and pragmatic prosody recognition scores, respectively. Psychoacoustic tasks requiring pitch direction recognition were the only significant predictors of prosody recognition scores. These findings contribute to a better understanding of the mechanisms underlying prosody recognition and may have an impact on the assessment and rehabilitation of individuals suffering from deficient prosodic perception.

10.
Young children learn multiple cognitive skills concurrently (e.g., language and music). Evidence is limited as to whether and how learning in one domain affects that in another during early development. Here we assessed whether exposure to a tone language benefits musical pitch processing among 3–5‐year‐old children. More specifically, we compared the pitch perception of Chinese children who spoke a tone language (i.e., Mandarin) with English‐speaking American children. We found that Mandarin‐speaking children were more advanced at pitch processing than English‐speaking children but both groups performed similarly on a control music task (timbre discrimination). The findings support the Pitch Generalization Hypothesis that tone languages drive attention to pitch in nonlinguistic contexts, and suggest that language learning benefits aspects of music perception in early development. A video abstract of this article can be viewed at: https://youtu.be/UY0kpGpPNA0

11.
Much of the comparative research on stimulus overselectivity has been flawed by either failure to control for chronological age and language ability of the subjects or reliance on the controversial technique of matching on mental age. The present study investigated the prevalence of overselectivity in autistic, trainable mentally retarded, and non-handicapped children demonstrating some expressive speech. The ages of the children were between 6 years-6 months and 9 years-3 months. Thus, chronological age and language ability were controlled, rather than allowed to vary unsystematically. Results indicated no significant differences between the autistic and TMR samples, but significant differences between the handicapped samples and the non-handicapped group. Some, but not all, of the handicapped children displayed overselectivity.

12.
This paper studied music in 14 children and adolescents with Williams-Beuren syndrome (WBS), a multi-system neurodevelopmental disorder, and 14 age-matched controls. Five aspects of music were tested. There were two tests of core music domains, pitch discrimination and rhythm discrimination. There were two tests of musical expressiveness, melodic imagery and phrasing. There was one test of musical interpretation, the ability to identify the emotional resonance of a musical excerpt. Music scores were analyzed by means of logistic regressions that modeled outcome (higher or lower music scores) as a function of group membership (WBS or Control) and cognitive age. Compared to age peers, children with WBS had similar levels of musical expressiveness, but were less able to discriminate pitch and rhythm, or to attach a semantic interpretation to emotion in music. Music skill did not vary with cognitive age. Musical strength in individuals with WBS involves not so much formal analytic skill in pitch and rhythm discrimination as a strong engagement with music as a means of expression, play, and, perhaps, improvisation.
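The abstract names its analysis explicitly: logistic regressions with a dichotomized music score (higher or lower) as the outcome and group membership plus cognitive age as predictors. The sketch below illustrates that model specification with simulated data; the coefficients, column names, and sample values are hypothetical and serve only to show the form of the model.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(3)
n = 28                                           # 14 WBS + 14 controls
df = pd.DataFrame({
    "group": ["WBS"] * 14 + ["Control"] * 14,
    "cognitive_age": rng.uniform(4, 14, size=n),
})
# simulate a higher/lower pitch-discrimination outcome (1 = higher scorers)
logit = -1.5 * (df["group"] == "WBS") + 0.1 * df["cognitive_age"]
df["high_score"] = (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(int)

# outcome modeled as a function of group membership and cognitive age
model = smf.logit("high_score ~ C(group) + cognitive_age", data=df).fit(disp=0)
print(model.summary())
```

In this framing, a reliable group coefficient alongside a negligible cognitive-age coefficient would correspond to the abstract's report that group, but not cognitive age, was related to music skill.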

14.
Vocal babbling involves production of rhythmic sequences of a mouth close–open alternation giving the perceptual impression of a sequence of consonant–vowel syllables. Petitto and co-workers have argued that vocal babbling rhythm is the same as manual syllabic babbling rhythm, in that it has a frequency of 1 cycle per second. They also assert that adult speech and sign language display the same frequency. However, available evidence suggests that the vocal babbling frequency approximates 3 cycles per second. Both adult spoken language and sign language show higher frequencies than babbling in their respective modalities. No information is currently available on the basic rhythmic parameter of intercyclical variability in either modality. A study of reduplicative babbling by 4 infants and 4 adults producing reduplicated syllables confirms the 3 per second vocal babbling rate, as well as a faster rate in adults, and provides new information on intercyclical variability.

15.
We investigated whether musical competence was associated with the perception of foreign-language phonemes. The sample comprised adult native-speakers of English who varied in music training. The measures included tests of general cognitive abilities, melody and rhythm perception, and the perception of consonantal contrasts that were phonemic in Zulu but not in English. Music training was associated positively with performance on the tests of melody and rhythm perception, but not with performance on the phoneme-perception task. In other words, we found no evidence for transfer of music training to foreign-language speech perception. Rhythm perception was not associated with the perception of Zulu clicks, but such an association was evident when the phonemes sounded more similar to English consonants. Moreover, it persisted after controlling for general cognitive abilities and music training. By contrast, there was no association between melody perception and phoneme perception. The findings are consistent with proposals that music- and speech-perception rely on similar mechanisms of auditory temporal processing, and that this overlap is independent of general cognitive functioning. They provide no support, however, for the idea that music training improves speech perception.

16.
Pitch is an important dimension in both music and speech. Congenital amusia is a disorder of musical pitch processing. Examining how individuals with amusia process pitch in music and in speech can help reveal whether musical and linguistic pitch processing share specific cognitive and neural mechanisms. Existing findings indicate that individuals with amusia show deficits in musical pitch processing, and that these pitch deficits affect speech pitch processing to some extent. Moreover, a tone-language background does not compensate for the pitch deficits of individuals with amusia. These findings support the resource-sharing framework, which holds that music and language share specific cognitive and neural mechanisms (Patel, 2003, 2008, in press), and may to some extent inform the clinical treatment of aphasia.

17.
The ability to process auditory feedback for vocal pitch control is crucial during speaking and singing. Previous studies have suggested that musicians with absolute pitch (AP) develop specialized left-hemisphere mechanisms for pitch processing. The present study adopted an auditory feedback pitch perturbation paradigm combined with ERP recordings to test the hypothesis that left-hemisphere neural mechanisms enhance vocal pitch error detection and control in AP musicians compared with relative pitch (RP) musicians and non-musicians (NM). Results showed a stronger N1 response to pitch-shifted voice feedback in the right hemisphere for both AP and RP musicians compared with the NM group. However, the left-hemisphere P2 component activation was greater in AP and RP musicians compared with NMs and also for the AP compared with RP musicians. The NM group was slower in generating compensatory vocal reactions to feedback pitch perturbation compared with musicians, and they failed to re-adjust their vocal pitch after the feedback perturbation was removed. These findings suggest that in the earlier stages of cortical neural processing, the right hemisphere is more active in musicians for detecting pitch changes in voice feedback. In the later stages, the left hemisphere is more active during the processing of auditory feedback for vocal motor control and seems to involve specialized mechanisms that facilitate pitch processing in the AP compared with RP musicians. These findings indicate that the left-hemisphere mechanisms of AP ability are associated with improved auditory feedback pitch processing during vocal pitch control in tasks such as speaking or singing.

18.
This review examines the cerebral control of musical behaviors. In clinical populations, impairment of related musical and linguistic functions, such as reading, writing, articulation, time sense, and prosody, implies the likely role of the language hemisphere in music. Similarly, for both clinical and normal populations, an investigation of mental abilities common to music and language points to left hemisphere control for certain aspects of temporal order, duration, simultaneity, rhythm, effector motor control, and categorical perception. While clinical studies have revealed deficits in various kinds of music capabilities with both left and right cerebral lesions, normal subjects similarly demonstrate varying degrees of asymmetry for components of music emphasizing pitch, harmony, timbre, intensity, and rhythm. Since differential laterality effects are apparent as a function of subjects' training or adopted strategies, the way musical information is processed may be an important determinant of hemispheric mediation. One hemisphere should not be regarded as “dominant” for music, but rather each interacts with the other, operating according to its own specialization.

19.
Phillips-Silver and Trainor (Phillips-Silver, J., Trainor, L.J., (2005). Feeling the beat: movement influences infants' rhythm perception. Science, 308, 1430) demonstrated an early cross-modal interaction between body movement and auditory encoding of musical rhythm in infants. Here we show that the way adults move their bodies to music influences their auditory perception of the rhythm structure. We trained adults, while listening to an ambiguous rhythm with no accented beats, to bounce by bending their knees to interpret the rhythm either as a march or as a waltz. At test, adults identified as similar an auditory version of the rhythm pattern with accented strong beats that matched their previous bouncing experience in comparison with a version whose accents did not match. In subsequent experiments we showed that this effect does not depend on visual information, but that movement of the body is critical. Parallel results from adults and infants suggest that the movement-sound interaction develops early and is fundamental to music processing throughout life.

20.
Nonlinguistic signals in the voice and musical instruments play a critical role in communicating emotion. Although previous research suggests a common mechanism for emotion processing in music and speech, the precise relationship between the two domains is unclear due to the paucity of direct evidence. By applying the adaptation paradigm developed by Bestelmeyer, Rouger, DeBruine, and Belin [2010. Auditory adaptation in vocal affect perception. Cognition, 117(2), 217–223. doi:10.1016/j.cognition.2010.08.008], this study shows cross-domain aftereffects from vocal to musical sounds. Participants heard an angry or fearful sound four times, followed by a test sound and judged whether the test sound was angry or fearful. Results show cross-domain aftereffects in one direction – vocal utterances to musical sounds, not vice-versa. This effect occurred primarily for angry vocal sounds. It is argued that there is a unidirectional relationship between vocal and musical sounds where emotion processing of vocal sounds encompasses musical sounds but not vice-versa.
