Similar Documents
20 similar documents found (search time: 31 ms)
1.
何昊  张卫东 《心理科学进展》2016,24(8):1175-1184
Poor-pitch singing is one manifestation of what is colloquially called "tone deafness." Whether someone is classified as a poor-pitch singer depends on the specific assessment method, test task, and measurement index used; at present, combining multiple tasks and multiple measures with relative (rather than absolute) criteria appears to be the most reasonable approach. A deficit at any processing stage of singing (perception, sensorimotor translation, vocal motor control, or memory) can produce poor-pitch singing, but deficient sensorimotor translation is regarded as the principal cause. The recent MMIA model builds on the internal-model framework to specify the mechanism underlying such sensorimotor translation deficits. Future research should further test and refine the MMIA model and clarify the cognitive and neural mechanisms of poor-pitch singing, while also applying existing findings to help individuals improve their pitch accuracy.

2.
The ability to process auditory feedback for vocal pitch control is crucial during speaking and singing. Previous studies have suggested that musicians with absolute pitch (AP) develop specialized left-hemisphere mechanisms for pitch processing. The present study adopted an auditory feedback pitch perturbation paradigm combined with ERP recordings to test whether left-hemisphere neural mechanisms enhance vocal pitch error detection and control in AP musicians compared with relative pitch (RP) musicians and non-musicians (NM). Results showed a stronger N1 response to pitch-shifted voice feedback in the right hemisphere for both AP and RP musicians compared with the NM group. However, left-hemisphere P2 activation was greater in AP and RP musicians than in NMs, and greater in AP than in RP musicians. The NM group was slower than musicians in generating compensatory vocal reactions to feedback pitch perturbation, and they failed to re-adjust their vocal pitch after the feedback perturbation was removed. These findings suggest that in the earlier stages of cortical processing, the right hemisphere is more active in musicians for detecting pitch changes in voice feedback. In the later stages, the left hemisphere is more active during the processing of auditory feedback for vocal motor control and appears to involve specialized mechanisms that facilitate pitch processing in AP compared with RP musicians. Together, these results indicate that the left-hemisphere mechanisms underlying AP are associated with improved auditory feedback pitch processing during vocal pitch control in tasks such as speaking or singing.

3.
We report an experiment that tested whether effects of altered auditory feedback (AAF) during piano performance differ from its effects during singing. These effector systems differ with respect to the mapping between motor gestures and the pitch content of auditory feedback. Whereas this action-effect mapping is highly reliable during phonation in any vocal motor task (singing or speaking), the mapping between finger movements and pitch occurs only in limited situations, such as piano playing. Effects of AAF in both tasks replicated results previously found for keyboard performance (Pfordresher, 2003), in that asynchronous (delayed) feedback slowed timing whereas alterations to feedback pitch increased error rates, and the effect of asynchronous feedback was similar in magnitude across tasks. However, manipulations of feedback pitch had larger effects on singing than on keyboard production, suggesting effector-specific differences in sensitivity to action-effect mapping with respect to feedback content. These results support the view that disruption from AAF is based on abstract, effector-independent response-effect associations, but that the strength of those associations differs across effector systems.

4.
Mouse ultrasonic vocalizations (USVs) are often used as behavioral readouts of internal states, to measure effects of social and pharmacological manipulations, and for behavioral phenotyping of mouse models of neuropsychiatric and neurodegenerative disorders. However, little is known about the neurobiological mechanisms of rodent USV production. Here we discuss the available data to assess whether male mouse song behavior and the supporting brain circuits resemble those of known vocal non-learning or vocal learning species. Recent neurobiology studies have demonstrated that the mouse USV brain system includes motor cortex and striatal regions, and that the vocal motor cortex sends a direct, sparse projection to the brainstem vocal motor nucleus ambiguus, a projection previously thought to be unique to humans among mammals. Recent behavioral studies have reported opposing conclusions on mouse vocal plasticity, including ontogenetic changes in USVs over early development that might not be explained by innate maturation processes, evidence for and against a role for auditory feedback in developing and maintaining normal mouse USVs, and evidence for and against limited vocal imitation of song pitch. To reconcile these findings, we suggest that the trait of vocal learning may not be dichotomous but may instead span a broad spectrum of behavioral and neural traits (the continuum hypothesis), and that mice possess some of the traits associated with a capacity for limited vocal learning.

5.
Auditory imagery has attracted growing attention in recent years; research in this area covers three categories: imagery for speech sounds, for musical sounds, and for environmental sounds. This paper reviews cognitive neuroscience findings on the brain regions activated by each of these three types of auditory imagery, compares the similarities and differences between the regions recruited by auditory imagery and by auditory perception, and outlines directions for future research on auditory imagery.

6.
This study investigates vocal imitation of prosodic contour in ongoing spontaneous interaction with 10- to 13-week-old infants. Audio recordings from naturalistic interactions between 20 mothers and infants were analyzed using a vocalization coding system that extracted the pitch and duration of individual vocalizations. Using these data, the authors categorized a sample of 1,359 vocalizations on the basis of 7 predetermined contours. Pairs of identical successive vocalizations were considered to be imitations if they involved both partners or repetitions if they were produced by the same partner. Results show that not only do mothers and infants imitate and repeat prosodic contour types in the course of vocal interaction but they do so selectively. Indeed, different contours are imitated and repeated by each partner. These findings suggest that imitation and repetition of prosodic contours have specific functions for communication and vocal development in the 3rd month of life.

7.
Prosodic attributes of speech, such as intonation, influence our ability to recognize, comprehend, and produce affect, as well as semantic and pragmatic meaning, in vocal utterances. The present study examines associations between auditory perceptual abilities and the perception of prosody, both pragmatic and affective. This association has not been previously examined. Ninety-seven participants (49 female and 48 male participants) with normal hearing thresholds took part in two experiments, involving both prosody recognition and psychoacoustic tasks. The prosody recognition tasks included a vocal emotion recognition task and a focus perception task requiring recognition of an accented word in a spoken sentence. The psychoacoustic tasks included a task requiring pitch discrimination and three tasks also requiring pitch direction (i.e., high/low, rising/falling, changing/steady pitch). Results demonstrate that psychoacoustic thresholds can predict 31% and 38% of affective and pragmatic prosody recognition scores, respectively. Psychoacoustic tasks requiring pitch direction recognition were the only significant predictors of prosody recognition scores. These findings contribute to a better understanding of the mechanisms underlying prosody recognition and may have an impact on the assessment and rehabilitation of individuals suffering from deficient prosodic perception.

8.
Vocal learning is the modification of vocal output by reference to auditory information. It allows for the imitation and improvisation of sounds that otherwise would not occur. The emergence of this skill may have been a primary step in the evolution of human language, but vocal learning is not unique to humans. It also occurs in songbirds, where its biology can be studied with greater ease. What follows is a review of some of the salient anatomical, developmental, and behavioral features of vocal learning, alongside parallels and differences between vocal learning in songbirds and humans.

9.
We examined the pitch and temporal acuity of auditory expectations/images formed under attentional-cuing and imagery task conditions, in order to address whether auditory expectations and auditory images are functionally equivalent. Across three experiments, we observed that pitch acuity was comparable between the two task conditions, whereas temporal acuity deteriorated in the imagery task. A fourth experiment indicated that the observed pitch acuity could not be attributed to implicit influences of the primed context alone. Across the experiments, image acuity in both pitch and time was better in listeners with more musical training. The results support a view that auditory images are multifaceted and that their acuity along any given dimension depends partially on the context in which they are formed.

10.
The research on imitation in the animal kingdom has more than a century-long history. A specific kind of imitation, auditory-vocal imitation, is well known in birds, especially among songbirds and parrots, but data for mammals are limited to elephants, marine mammals, and humans. Cetaceans are reported to imitate various signals, including species-specific calls, artificial sounds, and even vocalizations from other species if they share the same habitat. Here we describe the changes in the vocal repertoire of a beluga whale that was housed with a group of bottlenose dolphins. Two months after the beluga’s introduction into a new facility, we found that it began to imitate whistles of the dolphins, whereas one type of its own calls seemed to disappear. The case reported here may be considered as an interesting phenomenon of vocal accommodation to new social companions and cross-species socialization in cetaceans.

11.
This study investigated the mental representation of music notation. Notational audiation is the ability to internally "hear" the music one is reading before physically hearing it performed on an instrument. In earlier studies, the authors claimed that this process engages music imagery contingent on subvocal silent singing. This study refines the previously developed embedded melody task and further explores the phonatory nature of notational audiation with throat-audio and larynx-electromyography measurement. Experiment 1 corroborates previous findings and confirms that notational audiation is a process engaging kinesthetic-like covert excitation of the vocal folds linked to phonatory resources. Experiment 2 explores whether covert rehearsal with the mind's voice also involves actual motor processing systems and suggests that the mental representation of music notation cues manual motor imagery. Experiment 3 verifies findings of both Experiments 1 and 2 with a sample of professional drummers. The study points to the profound reliance on phonatory and manual motor processing (a dual-route stratagem) used during music reading. Further implications concern the integration of auditory and motor imagery in the brain and cross-modal encoding of a unisensory input.

12.
Maternal vocal imitation of infant vocalizations is highly prevalent during face-to-face interactions of infants and their caregivers. Although maternal vocal imitation has been associated with later verbal development, its potentially reinforcing effect on infant vocalizations has not been explored experimentally. This study examined the reinforcing effect of maternal vocal imitation of infant vocalizations using a reversal probe BAB design. Eleven 3- to 8-month-old infants at high risk for developmental delays experienced contingent maternal vocal imitation during reinforcement conditions. Differential reinforcement of other behavior served as the control condition. The behavior of 10 infants showed evidence of a reinforcement effect. Results indicated that vocal imitations can serve to reinforce early infant vocalizations.

13.
Singing is a cultural universal and an important part of modern society, yet many people fail to sing in tune. Many possible causes have been posited to explain poor singing abilities; foremost among these are poor perceptual ability, poor motor control, and sensorimotor mapping errors. To help discriminate between these causes of poor singing, we conducted 5 experiments testing musicians and nonmusicians in pitch matching and judgment tasks. Experiment 1 introduces a new instrument called a slider, on which participants can match pitches without using their voice. Pitch matching on the slider can be directly compared with vocal pitch matching, and results showed that both musicians and nonmusicians were more accurate using the slider than their voices to match target pitches, arguing against a perceptual explanation of singing deficits. Experiment 2 added a self-matching condition and showed that nonmusicians were better at matching their own voice than a synthesized voice timbre, but were still not as accurate as on the slider. This suggests a timbral translation type of mapping error. Experiments 3 and 4 demonstrated that singers do not improve over multiple sung responses, or with the aid of a visual representation of pitch. Experiment 5 showed that listeners were more accurate at perceiving the pitch of the synthesized tones than actual voice tones. The pattern of results across experiments demonstrates multiple possible causes of poor singing, and attributes most of the problem to poor motor control and timbral-translation errors, rather than a purely perceptual deficit, as other studies have suggested.
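Pitch-matching accuracy in studies like this one is typically scored as the deviation, in cents, between the produced and target fundamental frequencies. The minimal Python sketch below illustrates that standard conversion only; it is not code from the study, the example frequencies are hypothetical, and the ±50 cent "accurate" band mentioned in the comment is a common convention in the singing-accuracy literature rather than a value taken from this abstract.

```python
import math

def cents_error(f_produced: float, f_target: float) -> float:
    """Signed pitch-matching error in cents (100 cents = one semitone).

    Positive values mean the produced pitch is sharp relative to the
    target; negative values mean it is flat.
    """
    return 1200.0 * math.log2(f_produced / f_target)

# Hypothetical example: producing 262 Hz against a 261.63 Hz (C4) target
# gives an error of roughly +2.4 cents, well inside the +/-50 cent band
# often used as an "accurate" criterion in singing research.
print(round(cents_error(262.0, 261.63), 1))
```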

14.
Individuals differ markedly with respect to how well they can imitate pitch through singing and in their ability to perceive pitch differences. We explored whether the use of pitch in one’s native language can account for some of the differences in these abilities. Results from two studies suggest that individuals whose native language is a tone language, in which pitch contributes to word meaning, are better able to imitate (through singing) and perceptually discriminate musical pitch. These findings support the view that language acquisition fine-tunes the processing of critical auditory dimensions in the speech signal and that this fine-tuning can be carried over into nonlinguistic domains.

15.
The nature of the connection between musical and spatial processing is controversial. While pitch may be described in spatial terms such as “high” or “low”, it is unclear whether pitch and space are associated but separate dimensions or whether they share representational and processing resources. In the present study, we asked participants to judge whether a target vocal note was the same as (or different from) a preceding cue note. Importantly, target trials were presented as video clips where a singer sometimes gestured upward or downward while singing that target note, thus providing an alternative, concurrent source of spatial information. Our results show that pitch discrimination was significantly biased by the spatial movement in gesture, such that downward gestures made notes seem lower in pitch than they really were, and upward gestures made notes seem higher in pitch. These effects were eliminated by spatial memory load but preserved under verbal memory load conditions. Together, our findings suggest that pitch and space have a shared representation such that the mental representation of pitch is audiospatial in nature.

16.
Eighteen musicians with absolute pitch (AP) confirmed by screening tests participated in tonal and verbal short-term-retention tasks. In the tonal task, subjects identified three successive piano tones by their letter names. Recall of these note names after 18 sec of counting backwards was near perfect. Recall after an 18-sec delay filled with random piano tones was also near perfect. In contrast, the same subjects demonstrated significant forgetting when required to retain letter trigrams while counting backwards for 18 sec. These results were essentially replicated in a second experiment using longer (27 sec) retention intervals, a more demanding verbal interference task, and an active musical interference task (singing a descending scale). We interpret these results as indicating that retention of note names by possessors of AP is not limited to verbal encoding; rather, multiple codes (e.g., auditory, kinesthetic, and visual imagery) are probably used.

17.
Social skills deficits are commonly reported among children with social phobia (SP) and children with Asperger’s Disorder (AD); however, a lack of direct comparison makes it unclear whether these groups, both of which endorse the presence of social anxiety, have similar or unique skills deficits. In this investigation, the social behaviors of children with SP (n = 30) or AD (n = 30) were compared to a typically developing (TD) peer group (n = 30) during structured role play interactions. Data were analyzed using blinded observers’ ratings of overt behaviors and digital vocal analysis of verbal communication. Compared to children with AD and TD children, children with SP exhibited less overall social skill, an ineffective ability to manage the conversational topic (pragmatic social behavior), and deficient speech production (speech and prosodic social behavior). There were no differences in observer ratings between children with AD and TD children. However, using digital analysis of vocal characteristics (i.e., intensity, pitch), distinct vocal patterns emerged. Specifically, children with AD spoke more softly than TD children, and had lower vocal pitch and less vocal pitch variability than children with SP. This pattern may be subjectively heard as monotonic speech. Consistent with a vocal pattern associated with heightened anxiety, children with SP spoke more softly and had less voice volume variation than TD children, and had higher vocal pitch and more vocal pitch variability (jitteriness) than children with AD. Clinical implications of these findings are discussed.

18.
Members of the KE family who suffer from an inherited developmental speech-and-language disorder and normal, age-matched controls were tested on musical abilities, including perception and production of pitch and rhythm. Affected family members were not deficient in either the perception or production of pitch, whether this involved single notes or familiar melodies. However, they were deficient in both the perception and production of rhythm in both vocal and manual modalities. It is concluded that intonation abilities are not impaired in the affected family members, whereas their timing abilities are impaired. Neither their linguistic nor oral praxic deficits can be at the root of their impairment in timing; rather, the reverse may be true.

19.
In four experiments, we examined whether facial expressions used while singing carry musical information that can be “read” by viewers. In Experiment 1, participants saw silent video recordings of sung melodic intervals and judged the size of the interval they imagined the performers to be singing. Participants discriminated interval sizes on the basis of facial expression and discriminated large from small intervals when only head movements were visible. Experiments 2 and 3 confirmed that facial expressions influenced judgments even when the auditory signal was available. When matched with the facial expressions used to perform a large interval, audio recordings of sung intervals were judged as being larger than when matched with the facial expressions used to perform a small interval. The effect was not diminished when a secondary task was introduced, suggesting that audio-visual integration is not dependent on attention. Experiment 4 confirmed that the secondary task reduced participants’ ability to make judgments that require conscious attention. The results provide the first evidence that facial expressions influence perceived pitch relations.

20.
The present study investigated whether the neural correlates of auditory feedback control of vocal pitch can be shaped by tone language experience. Event-related potentials (P2/N1) were recorded from adult native speakers of Mandarin and Cantonese who heard the auditory feedback of their own voice shifted in pitch by −50, −100, −200, or −500 cents while they sustained the vowel /u/. Cantonese speakers produced larger P2 amplitudes in response to the −200 and −500 cents stimuli than Mandarin speakers, but this language effect did not reach significance for the −50 and −100 cents shifts. Moreover, Mandarin speakers produced shorter N1 latencies over the left hemisphere than the right hemisphere, whereas Cantonese speakers did not. These findings demonstrate that neural processing of auditory pitch feedback in vocal motor control is subject to language-dependent neural plasticity, suggesting that cortical mechanisms of auditory-vocal integration can be shaped by tone language experience.
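The perturbation magnitudes here are given in cents, a logarithmic unit of frequency ratio (100 cents per equal-tempered semitone, 1200 per octave). As a rough illustration of what such shifts mean in hertz, the sketch below converts each cent value into a feedback frequency; the 200 Hz baseline for the sustained /u/ is an assumed figure for illustration only, not a value reported in the study.

```python
def shifted_frequency(f0: float, shift_cents: float) -> float:
    """Feedback frequency after shifting a voice fundamental f0 by a
    given number of cents: a shift of c cents scales frequency by
    2 ** (c / 1200), so -1200 cents is exactly one octave down.
    """
    return f0 * 2.0 ** (shift_cents / 1200.0)

# Assumed 200 Hz baseline for the sustained vowel (illustrative only).
for cents in (-50, -100, -200, -500):
    print(f"{cents:+d} cents -> {shifted_frequency(200.0, cents):.1f} Hz")
```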
