Similar Articles
Found 20 similar articles.
1.
This study investigates vocal imitation of prosodic contour in ongoing spontaneous interaction with 10- to 13-week-old infants. Audio recordings from naturalistic interactions between 20 mothers and infants were analyzed using a vocalization coding system that extracted the pitch and duration of individual vocalizations. Using these data, the authors categorized a sample of 1,359 vocalizations on the basis of 7 predetermined contours. Pairs of identical successive vocalizations were considered to be imitations if they involved both partners or repetitions if they were produced by the same partner. Results show that not only do mothers and infants imitate and repeat prosodic contour types in the course of vocal interaction but they do so selectively. Indeed, different contours are imitated and repeated by each partner. These findings suggest that imitation and repetition of prosodic contours have specific functions for communication and vocal development in the 3rd month of life.
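As an illustration of the kind of vocalization coding described above (pitch and duration per vocalization, then contour classification), the sketch below uses Python with the librosa library. It is a minimal, hypothetical example, not the authors' coding system: the file name is invented, and the rising/falling rule is far cruder than the seven predetermined contours.

    # Minimal sketch: pitch contour and duration of one vocalization.
    # Assumes librosa is installed; "vocalization.wav" is a hypothetical file.
    import librosa
    import numpy as np

    y, sr = librosa.load("vocalization.wav", sr=None)
    duration = librosa.get_duration(y=y, sr=sr)

    # pYIN fundamental-frequency tracking over a broad voice range.
    f0, voiced, _ = librosa.pyin(y, fmin=75, fmax=600, sr=sr)
    contour = f0[voiced]  # keep only voiced frames

    # A crude stand-in for contour classification: compare start and end pitch.
    if contour.size >= 2:
        label = "rising" if contour[-1] > contour[0] else "falling"
        print(f"duration: {duration:.2f} s, contour: {label}")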

2.
Previous research has demonstrated that falling contours predominate in infant utterances as early as 3 months of age. The precocious appearance of falling intonation is usually attributed to “biological tendencies,” that is, the physiological naturalness of descending fundamental frequency patterns. In contrast, other investigations have shown that some children do not use adultlike falling or rising intonation contours until they produce their first words. To resolve these conflicting views of prosodic development, this study acoustically investigated intonation production in the monosyllabic utterances of 10 English-speaking children from 10 to 13 months of age and the utterance-final monosyllables of ten 4-year-olds. Children in both age groups produced a wider accent range in falling contours than in rising contours. Infants produced a narrower accent range than the preschoolers. The findings suggest that biological tendencies are not sufficient to account for children’s acquisition of intonation between the ages of 1 and 4 years.

3.
Prosodic attributes of speech, such as intonation, influence our ability to recognize, comprehend, and produce affect, as well as semantic and pragmatic meaning, in vocal utterances. The present study examines associations between auditory perceptual abilities and the perception of prosody, both pragmatic and affective; this association has not been examined previously. Ninety-seven participants (49 female, 48 male) with normal hearing thresholds took part in two experiments involving both prosody-recognition and psychoacoustic tasks. The prosody-recognition tasks included a vocal emotion recognition task and a focus perception task requiring recognition of an accented word in a spoken sentence. The psychoacoustic tasks included one task requiring pitch discrimination and three tasks that additionally required judgments of pitch direction (i.e., high/low, rising/falling, changing/steady pitch). Results demonstrate that psychoacoustic thresholds can predict 31% and 38% of the variance in affective and pragmatic prosody recognition scores, respectively. The psychoacoustic tasks requiring pitch-direction recognition were the only significant predictors of prosody recognition scores. These findings contribute to a better understanding of the mechanisms underlying prosody recognition and may inform the assessment and rehabilitation of individuals with impaired prosodic perception.
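To make the reported effect sizes concrete: an R² of .31/.38 is what one obtains by regressing prosody-recognition scores on psychoacoustic thresholds. The sketch below does this with simulated data, assuming scikit-learn; the weights and noise level are invented and chosen only to illustrate the computation.

    # Sketch: variance in prosody recognition explained by pitch thresholds.
    # Simulated, illustrative data; scikit-learn assumed.
    import numpy as np
    from sklearn.linear_model import LinearRegression

    rng = np.random.default_rng(0)
    n = 97  # sample size matching the study
    thresholds = rng.normal(size=(n, 4))       # four psychoacoustic tasks
    # Lower (better) thresholds -> higher recognition scores, plus noise.
    scores = thresholds @ np.array([-0.4, -0.3, -0.2, -0.1]) \
             + rng.normal(scale=0.8, size=n)

    model = LinearRegression().fit(thresholds, scores)
    print(f"R^2 = {model.score(thresholds, scores):.2f}")  # cf. .31/.38 above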

4.
Pell MD. Brain and Language, 2006, 96(2): 221-234
Hemispheric contributions to the processing of emotional speech prosody were investigated by comparing adults with a focal lesion involving the right (n = 9) or left (n = 11) hemisphere and adults without brain damage (n = 12). Participants listened to semantically anomalous utterances in three conditions (discrimination, identification, and rating) which assessed their recognition of five prosodic emotions under the influence of different task- and response-selection demands. Findings revealed that right- and left-hemispheric lesions were associated with impaired comprehension of prosody, although possibly for distinct reasons: right-hemisphere compromise produced a more pervasive insensitivity to emotive features of prosodic stimuli, whereas left-hemisphere damage yielded greater difficulties interpreting prosodic representations as a code embedded with language content.

5.
Prosody, or speech melody, subserves linguistic (e.g., question intonation) and emotional functions in speech communication. Findings from lesion studies and imaging experiments suggest that, depending on function or acoustic stimulus structure, prosodic speech components are differentially processed in the right and left hemispheres. This direct current (DC) potential study investigated the linguistic processing of digitally manipulated pitch contours of sentences that carried an emotional or neutral intonation. Discrimination of linguistic prosody was better for neutral stimuli than for happily or fearfully spoken sentences. Brain activation was greater during the processing of happy sentences than during neutral utterances. Neither neutral nor emotional stimuli evoked lateralized processing in the left or right hemisphere, indicating bilateral mechanisms of linguistic processing for pitch direction. Acoustic stimulus analysis suggested that prosodic components related to emotional intonation, such as pitch variability, interfered with the linguistic processing of pitch direction.

6.
Differences in the hemispheric functions underlying speech perception may be related to the size of the temporal integration windows that prosodic features (e.g., pitch) span in the speech signal. Chinese tone and intonation, both signaled by variations in pitch contours, span shorter (local) and longer (global) temporal domains, respectively. This cross-linguistic (Chinese and English) study uses functional magnetic resonance imaging to show that pitch contours associated with tones are processed in the left hemisphere by Chinese listeners only, whereas pitch contours associated with intonation are processed predominantly in the right hemisphere. These findings argue against the view that all aspects of speech prosody are lateralized to the right hemisphere, and promote the idea that varying-sized temporal integration windows reflect a neurobiological adaptation to meet the 'prosodic needs' of a particular language.

7.
Older adults are not as good as younger adults at decoding prosodic emotions. We sought to determine the specificity of this finding. Performance of older and younger adults was compared on a prosodic emotion task, a "pure" prosodic emotion task, a linguistic prosody task, and a "pure" linguistic prosody task. Older adults were less accurate at interpreting prosodic emotion cues and nonemotional contours, concurrent semantic processing worsened interpretation, and performance was further degraded when identifying negative emotions and questions. Older adults display a pervasive problem interpreting prosodic cues, but further study is required to clarify the stage at which performance declines.

8.
Research on prosodic features   Cited by: 1 (self-citations: 0, citations by others: 1)
This paper reviews a series of studies on the prosodic features of Mandarin Chinese from the perspectives of perception, cognition, and corpus analysis. (1) Perception of prosodic features: experimental psychology methods and perceptually annotated corpora were used to study Mandarin intonation, pitch declination, and downstep, as well as the perceptually distinguishable prosodic levels in sentences and discourse and their associated acoustic cues. The results support the double-line model of Mandarin intonation and the existence of sentence-level pitch declination, and show that the perceptually distinguishable prosodic boundaries in discourse are the clause, the sentence, and the paragraph, each with its own perceptual acoustic correlates. (2) Relations between prosodic features and other linguistic structures: on the basis of annotated corpora, conventional statistical methods were used to study the distribution of normal sentence stress and the relation between discourse information structure and stress, and decision-tree methods were used to derive rules that predict prosodic phrase boundaries and focus from textual information. (3) The role of prosodic features in discourse comprehension: experimental psychology methods and event-related potential (EEG) measures were used to examine the influence of prosody on discourse information integration and reference resolution, revealing the underlying cognitive and neural mechanisms. The practical and theoretical implications of these findings for speech engineering, phonetic theory, and psycholinguistics are discussed.
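As a hedged illustration of the decision-tree approach mentioned in (2), the sketch below learns a rule for predicting prosodic phrase boundaries from text-derived features. The features and data are invented, and scikit-learn merely stands in for whatever tool the original studies used.

    # Sketch: predict prosodic phrase boundaries from text features.
    # Invented features/data; scikit-learn stands in for the original tool.
    from sklearn.tree import DecisionTreeClassifier, export_text

    # One row per word juncture: [syllables since last boundary,
    # punctuation follows (0/1), part-of-speech change (0/1)].
    X = [[2, 0, 0], [7, 1, 1], [4, 0, 1], [9, 1, 0], [1, 0, 0], [6, 0, 1]]
    y = [0, 1, 1, 1, 0, 1]  # 1 = prosodic phrase boundary here

    tree = DecisionTreeClassifier(max_depth=2).fit(X, y)
    print(export_text(tree, feature_names=["distance", "punct", "pos_change"]))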

9.
In order to investigate the lateralization of emotional speech, we recorded the brain responses to three emotional intonations in two conditions, i.e., "normal" speech and "prosodic" speech (i.e., speech with no linguistic meaning, but retaining the 'slow prosodic modulations' of speech). Participants listened to semantically neutral sentences spoken with a positive, neutral, or negative intonation in both conditions and judged how positive, negative, or neutral the intonation was on a five-point scale. Core peri-sylvian language areas, as well as some frontal and subcortical areas, were activated bilaterally in the normal speech condition. In contrast, a bilateral fronto-opercular region was active when participants listened to prosodic speech. Positive and negative intonations elicited a bilateral fronto-temporal and subcortical pattern in the normal speech condition, and more frontal activation in the prosodic speech condition. The current results call into question an exclusive right-hemisphere lateralization of emotional prosody and expand patient data on the functional role of the basal ganglia during the perception of emotional prosody.

10.
The ability to interpret vocal (prosodic) cues during social interactions can be disrupted by Parkinson's disease (PD), with notable effects on how emotions are understood from speech. This study investigated whether PD patients who have emotional prosody deficits exhibit further difficulties decoding the attitude of a speaker from prosody. Vocally inflected but semantically nonsensical 'pseudo-utterances' were presented to listener groups with and without PD in two separate rating tasks. Task 1 required participants to rate how confident a speaker sounded from their voice, and Task 2 required listeners to rate how polite the speaker sounded for a comparable set of pseudo-utterances. The results showed that PD patients were significantly less able than healthy control (HC) participants to use prosodic cues to differentiate intended levels of speaker confidence in speech, although the patients could accurately detect the polite/impolite attitude of the speaker from prosody in most cases. Our data suggest that many PD patients fail to use vocal cues to effectively infer a speaker's emotions as well as certain attitudes in speech such as confidence, consistent with the idea that the basal ganglia play a role in the meaningful processing of prosodic sequences in spoken language (Pell & Leonard, 2003).

11.
Musically tone-deaf individuals have psychophysical deficits in detecting pitch changes, yet their discrimination of intonation contours in speech appears to be normal. One hypothesis for this dissociation is that intonation contours use coarse pitch contrasts which exceed the pitch-change detection thresholds of tone-deaf individuals (Peretz & Hyde, 2003). We test this idea by presenting intonation contours for discrimination, both in the context of the original sentences in which they occur and in a "pure" form dissociated from any phonetic context. The pure form consists of gliding-pitch analogs of the original intonation contours which exactly follow their pattern of pitch and timing. If the spared intonation perception of tone-deaf individuals is due to the coarse pitch contrasts of intonation, then such individuals should discriminate the original sentences and the gliding-pitch analogs equally well. In contrast, we find that discrimination of the gliding-pitch analogs is severely degraded. Thus it appears that the dissociation between spoken and musical pitch perception in tone-deaf individuals is due to a deficit at a higher level than simple pitch-change detection.
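A "gliding-pitch analog" of this kind can be understood as a pure tone whose instantaneous frequency follows the sentence's F0 track. The sketch below synthesizes one with numpy alone; the 220-to-150 Hz falling contour is invented for illustration and is not taken from the study's stimuli.

    # Sketch: synthesize a gliding-pitch analog of an intonation contour.
    # The contour values are illustrative, not the study's stimuli.
    import numpy as np

    sr = 22050
    n = sr  # one second of samples: a 1-second "utterance"
    f0 = np.linspace(220.0, 150.0, n)  # falling contour in Hz

    # Integrate instantaneous frequency to get phase, then synthesize a tone
    # that follows the contour's pitch and timing with no phonetic content.
    phase = 2.0 * np.pi * np.cumsum(f0) / sr
    tone = 0.5 * np.sin(phase)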

12.
Although the configurations of psychoacoustic cues signalling emotions in human vocalizations and instrumental music are very similar, cross-domain links in recognition performance have yet to be studied developmentally. Two hundred and twenty 5- to 10-year-old children were asked to identify musical excerpts and vocalizations as happy, sad, or fearful. The results revealed age-related increases in overall recognition performance, with significant correlations across vocal and musical conditions at all developmental stages. Recognition scores were greater for musical than vocal stimuli and were superior in females compared with males. These results confirm that recognition of emotions in vocal and musical stimuli is linked by 5 years and that sensitivity to emotions in auditory stimuli is influenced by age and gender.

13.
Brain and Cognition, 2006, 60(3): 310-313
Musically tone-deaf individuals have psychophysical deficits in detecting pitch changes, yet their discrimination of intonation contours in speech appears to be normal. One hypothesis for this dissociation is that intonation contours use coarse pitch contrasts which exceed the pitch-change detection thresholds of tone-deaf individuals (Peretz & Hyde, 2003). We test this idea by presenting intonation contours for discrimination, both in the context of the original sentences in which they occur and in a “pure” form dissociated from any phonetic context. The pure form consists of gliding-pitch analogs of the original intonation contours which exactly follow their pattern of pitch and timing. If the spared intonation perception of tone-deaf individuals is due to the coarse pitch contrasts of intonation, then such individuals should discriminate the original sentences and the gliding-pitch analogs equally well. In contrast, we find that discrimination of the gliding-pitch analogs is severely degraded. Thus it appears that the dissociation between spoken and musical pitch perception in tone-deaf individuals is due to a deficit at a higher level than simple pitch-change detection.

14.
Evidence from the literature indicates that dogs’ choices can be influenced by human-delivered social cues, such as pointing, and pointing combined with facial expression, intonation (i.e., rising and falling voice pitch), and/or words. The present study used an object choice task to investigate whether intonation conveys unique information in the absence of other salient cues. We removed facial expression cues and speech information by delivering cues with the experimenter’s back to the dog and by using nonword vocalizations. During each trial, the dog was presented with pairs of the following three vocal cues: Positive (happy-sounding), Negative (sad-sounding), and Breath (neutral control). In Experiment 1, where dogs received only these vocal cue pairings, dogs preferred the Positive intonation, and there was no difference in choice behavior between Negative or Breath. In Experiment 2, we included a point cue with one of the two vocal cues in each pairing. Here, dogs preferred containers receiving pointing cues as well as Negative intonation, and preference was greatest when both of these cues were presented together. Taken together, these findings indicate that dogs can indeed extract information from vocal intonation alone, and may use intonation as a social referencing cue. However, the effect of intonation on behavior appears to be strongly influenced by the presence of pointing, which is known to be a highly salient visual cue for dogs. It is possible that in the presence of a point cue, intonation may shift from informative to instructive.

15.
Research on early signs of autism in social interactions often focuses on infants' motor behaviors; few studies have focused on speech characteristics. This study examines the infant-directed speech of mothers of infants later diagnosed with autism (LDA; n = 12) or of typically developing infants (TD; n = 11), as well as the infants' own productions (13 LDA, 13 TD). Because LDA infants appear to behave differently in the first months of life, this can affect the functioning of dyadic interactions, especially the first vocal productions, which are sensitive to expressiveness and the sharing of emotions. We assumed that in the first 6 months of life, prosodic characteristics (mean duration, mean pitch, and intonative contour types) would differ in dyads with autism. We extracted infants' and mothers' vocal productions from family home movies and analyzed their mean duration and pitch, as well as their pitch contours, in interactive episodes. Results show that mothers of LDA infants use relatively shorter productions compared with mothers talking to TD infants. LDA infants' productions do not differ in duration or pitch, but they produce fewer complex modulated productions (i.e., those with more than two melodic modulations) than TD infants do. Further studies should focus on developmental profiles in the first year, analyzing prosody monthly.

16.
Prosody, a salient aspect of speech that includes rhythm and intonation, has been shown to help infants acquire some aspects of syntax. Recent studies have shown that birds of two vocal-learning species are able to categorize human speech stimuli based on prosody. In the current study, we found that rats, a non-vocal-learning species, could also discriminate human speech stimuli based on prosody. Moreover, the rats were able to generalize to novel stimuli they had not been trained with, which suggests that they had not simply memorized the properties of individual stimuli but had learned a prosodic rule. When tested with stimuli with either one or three of the four prosodic cues removed, the rats did poorly, suggesting that all cues were necessary for them to solve the task. This result contrasts with results from humans and budgerigars, both of which had previously been studied using the same paradigm: humans and budgerigars both learned the task and generalized to novel items, but were also able to solve the task with some of the cues removed. In conclusion, rats appear to have some of the perceptual abilities necessary to generalize prosodic patterns, in a similar though not identical way to the vocal-learning species that have been studied.

17.
Pell MD. Brain and Cognition, 2002, 48(2-3): 499-504
This report describes some preliminary attributes of stimuli developed for future evaluation of nonverbal emotion in neurological populations with acquired communication impairments. Facial and vocal exemplars of six target emotions were elicited from four male and four female encoders and then prejudged by 10 young decoders to establish the category membership of each item at an acceptable consensus level. Representative stimuli were then presented to 16 additional decoders to gather indices of how category membership and encoder gender influenced recognition accuracy of emotional meanings in each nonverbal channel. Initial findings pointed to greater facility in recognizing target emotions from facial than vocal stimuli overall and revealed significant accuracy differences among the six emotions in both the vocal and facial channels. The gender of the encoder portraying emotional expressions was also a significant factor in how well decoders recognized specific emotions (disgust, neutral), but only in the facial condition.

18.
Facial emotion-recognition difficulties have been reported in school-aged children with behavior problems; little is known, however, about this association in preschool children, or about vocal emotion recognition. The current study explored the association between facial and vocal emotion recognition and behavior problems in a sample of 3- to 6-year-old children. Fifty-seven children enriched for risk of behavior problems (41 recruited from the general population and 16 referred to local clinics for behavior problems) were each presented with a series of vocal and facial stimuli expressing different emotions (i.e., angry, happy, and sad) at low and high intensity. Parents rated children's externalizing and internalizing behavior problems. Vocal and facial emotion-recognition accuracy was negatively correlated with externalizing, but not internalizing, behavior problems, independent of emotion type. The effects in the externalizing domain were independently associated with hyperactivity rather than conduct problems. The results highlight the importance of using vocal as well as facial stimuli when studying the relationship between emotion recognition and behavior problems. Future studies should test the hypothesis that the difficulties in responding to adult instructions and commands seen in children with attention-deficit/hyperactivity disorder (ADHD) may be due to deficits in the processing of vocal emotions.

19.
Conducting a study of emotional prosody often requires that one have a valid set of stimuli for assessing perceived emotion in vocal intonation. In this study, we created a list of sentences with both affective and neutral content, and then validated them against rater opinion. Participants read sentences with content that implied happiness, sadness, anger, fear, or neutrality and rated how well they could imagine each sentence being expressed in each emotion. Coefficients of variation and intraclass correlations were calculated to narrow the list to affective sentences that had high agreement and neutral sentences that had low agreement. We found that raters could easily identify most emotional content and did not ascribe any unique emotion to most neutral content. We also found differences between the intensity of male and female ratings. The final list of sentences is available on the Internet (www.med.upenn.edu/bbl/) and can be recorded for use as stimuli for prosodic studies.
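For reference, the two agreement statistics mentioned above can be computed as in the sketch below. The ratings matrix is invented, and the ICC shown is the simple one-way ICC(1,1), which may differ from the exact variant the authors used.

    # Sketch: coefficient of variation and one-way ICC(1,1) for rater data.
    # Invented ratings: rows = sentences, columns = raters.
    import numpy as np

    ratings = np.array([[4, 5, 4], [2, 2, 3], [5, 5, 5], [1, 2, 1]], float)
    n, k = ratings.shape

    # Coefficient of variation per sentence: spread relative to the mean.
    cv = ratings.std(axis=1, ddof=1) / ratings.mean(axis=1)

    # One-way ICC(1,1) from the ANOVA mean squares.
    grand = ratings.mean()
    ms_between = k * ((ratings.mean(axis=1) - grand) ** 2).sum() / (n - 1)
    ms_within = ((ratings - ratings.mean(axis=1, keepdims=True)) ** 2).sum() \
                / (n * (k - 1))
    icc = (ms_between - ms_within) / (ms_between + (k - 1) * ms_within)
    print(cv.round(2), round(icc, 2))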

20.
Aging leads to declines in the auditory system and in cognitive function. Older adults show weakened speech comprehension and have difficulty parsing prosodic information. Their perception of linguistic prosody, including stress, intonation, and speech rate, deteriorates, and the processing of emotional prosody is also impaired, with the processing of negative emotional prosody declining especially quickly. Age-related diseases further increase the difficulty of prosodic processing, and prosodic perception shows correlations with specific disorders. Future research should examine the prosodic perception performance and mechanisms of older adults from different language backgrounds, the influence of complex communicative environments, the value of prosodic perception deficits for predicting age-related diseases, and early intervention and rehabilitation for prosodic perception impairments.
