Similar articles
20 similar articles found.
1.
The role of the right hemisphere in the production of linguistic stress
Recent research has proposed a general prosodic disturbance associated with right hemisphere damage (RHD), one encompassing both affective and linguistic functions. The present study explored whether the ability to produce linguistic prosody was impaired in this patient population. Productions of phonemic stress tokens (e.g., Re'dcoat vs. red coa't) as well as examples of contrastive stress, or sentential emphasis (e.g., Sam hated the movie), were elicited from eight male speakers with unilateral right hemisphere CVAs and seven male control subjects. Two types of analyses were conducted on these utterances. Acoustic analysis focused on the correlates associated with word stress, namely changes in amplitude, duration, and fundamental frequency. The perceptual saliency of emerging cues to stress was also examined by presenting test tokens to phonetically trained listeners for identification of stress placement. The patients as a group produced fewer acoustic cues to stress than the normal subjects, but no statistical differences were found between groups for stress at either the phrase level or the sentence level. In the perceptual analysis, stress produced by the patient group was judged to be less salient than that of the normal group, although a high degree of variability was evident in both populations. The data suggest a spared processing mechanism for linguistic prosody in RHD speakers, thus militating against the view of a general dysprosody tied to RHD.
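To make the three stress correlates named above concrete, here is a minimal sketch of how amplitude, duration, and fundamental frequency might be extracted from recorded tokens. It assumes Python with librosa and NumPy; the WAV file names are hypothetical, and this is an illustration rather than the study's actual measurement procedure.

```python
import numpy as np
import librosa

def stress_cues(wav_path):
    """Extract the three classic acoustic correlates of word stress."""
    y, sr = librosa.load(wav_path, sr=None)      # keep native sample rate
    duration = librosa.get_duration(y=y, sr=sr)  # token duration (s)
    rms = librosa.feature.rms(y=y)[0]            # frame-wise RMS amplitude
    f0, voiced, _ = librosa.pyin(y, fmin=60, fmax=400, sr=sr)  # frame-wise F0
    return {
        "duration_s": duration,
        "mean_rms": float(np.mean(rms)),
        "mean_f0_hz": float(np.nanmean(f0)),     # NaN frames are unvoiced
        "f0_range_hz": float(np.nanmax(f0) - np.nanmin(f0)),
    }

# Compare the two members of a phonemic stress pair (hypothetical files),
# e.g. the compound "Re'dcoat" vs. the phrase "red coa't":
print(stress_cues("redcoat_compound.wav"))
print(stress_cues("red_coat_phrase.wav"))
```

A speaker producing fewer acoustic cues to stress would show smaller differences between the paired tokens on these measures.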

2.
The ability to interpret vocal (prosodic) cues during social interactions can be disrupted by Parkinson's disease (PD), with notable effects on how emotions are understood from speech. This study investigated whether PD patients who have emotional prosody deficits exhibit further difficulties decoding the attitude of a speaker from prosody. Vocally inflected but semantically nonsensical 'pseudo-utterances' were presented to listener groups with and without PD in two separate rating tasks. Task 1 required participants to rate how confident a speaker sounded from their voice, and Task 2 required listeners to rate how polite the speaker sounded for a comparable set of pseudo-utterances. The results showed that PD patients were significantly less able than healthy control participants to use prosodic cues to differentiate intended levels of speaker confidence in speech, although the patients could accurately detect the polite/impolite attitude of the speaker from prosody in most cases. Our data suggest that many PD patients fail to use vocal cues to effectively infer a speaker's emotions as well as certain attitudes in speech such as confidence, consistent with the idea that the basal ganglia play a role in the meaningful processing of prosodic sequences in spoken language (Pell & Leonard, 2003).

3.
We investigated how naively produced prosody affects listeners' final interpretations of ambiguous utterances. Non-professional speakers who were unaware of any ambiguity produced ambiguous sentences couched in short, unambiguous passages. In a forced-choice task, listeners could not tell which context the isolated ambiguous sentences came from (Exp. 1). However, listeners were able to correctly paraphrase the least ambiguous subset of these utterances, showing that prosody can be used to resolve ambiguity (Exp. 2). Nonetheless, in everyday language use, both prosody and context are available to interpret speech. When the least ambiguous sentences were cross-spliced into contexts biasing towards their original interpretations or into contexts biasing towards their alternative interpretations, answers to content questions about the ambiguous sentence, confidence ratings, and ratings of naturalness all indicated that prosody is ignored when context is available (Exp. 3). Although listeners can use prosody to interpret ambiguous sentences, they generally do not, and this makes sense in light of the frequent lack of reliable prosodic cues in everyday speech.

4.
The authors' hypotheses were that (a) listeners regard speakers whose global speech rates they judge to be similar to their own as more competent and more socially attractive than speakers whose rates are different from their own and (b) gender influences those perceptions. Participants were 17 male and 28 female listeners; they judged each of 3 male and 3 female speakers in terms of 10 unipolar adjective scales. The authors used 8 of the scales to derive 2 scores describing the extent to which the listener viewed a speaker as competent and socially attractive. The 2 scores were related by trend analyses (a) to the listeners' perceptions of the speakers' speech rates as compared with their own and (b) to comparisons of the actual speech rates of the speakers and listeners. The authors examined trend components of the data by split-plot multiple regression analyses. In general, the results supported both hypotheses. The participants judged speakers with speech rates similar to their own as more competent and socially attractive than speakers with speech rates slower or faster than their own. However, the ratings of competence were significantly influenced by the gender of the listeners, and those of social attractiveness were influenced by the gender of the listeners and the speakers.

5.
6.

7.
In order to recognize banter or sarcasm in social interactions, listeners must integrate verbal and vocal emotional expressions. Here, we investigated event-related potential correlates of this integration in Asian listeners. We presented emotional words spoken with congruous or incongruous emotional prosody. When listeners classified word meaning as positive or negative and ignored prosody, incongruous trials elicited a larger late positivity than congruous trials in women but not in men. Sex differences were absent when listeners evaluated the congruence between word meaning and emotional prosody. The similarity of these results to those obtained in Western listeners suggests that sex differences in emotional speech processing depend on attentional focus and may reflect culturally independent mechanisms.

8.
Erin E. Hannon. Cognition, 2009, 111(3): 403-409
Recent evidence suggests that the musical rhythm of a particular culture may parallel the speech rhythm of that culture’s language (Patel, A. D., & Daniele, J. R. (2003). An empirical comparison of rhythm in language and music. Cognition, 87, B35-B45). The present experiments aimed to determine whether listeners actually perceive such rhythmic differences in a purely musical context (i.e., in instrumental music without words). In Experiment 1a, listeners successfully classified instrumental renditions of French and English songs having highly contrastive rhythmic differences. Experiment 1b replicated this result with the same songs containing rhythmic information only. In Experiments 2a and 2b, listeners successfully classified original and rhythm-only stimuli when language-specific rhythmic differences were less contrastive but more representative of differences found in actual music and speech. These findings indicate that listeners can use rhythmic similarities and differences to classify songs originally composed in two languages having contrasting rhythmic prosody.
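The rhythm measure behind the cited Patel & Daniele (2003) comparison is the normalized Pairwise Variability Index (nPVI), which quantifies how contrastive successive durations are. A minimal sketch, with toy duration values that are purely illustrative:

```python
import numpy as np

def npvi(durations):
    """Normalized Pairwise Variability Index over successive durations (ms)."""
    d = np.asarray(durations, dtype=float)
    pairwise = np.abs(np.diff(d)) / ((d[:-1] + d[1:]) / 2.0)
    return 100.0 * np.mean(pairwise)

# Higher nPVI = more contrastive (alternating) rhythm, as reported for
# English speech and music relative to French.
print(npvi([120, 60, 150, 70, 140]))  # alternating durations -> high nPVI
print(npvi([100, 95, 105, 98, 102]))  # nearly even durations -> low nPVI
```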

9.
Cvejic E, Kim J, Davis C. Cognition, 2012, 122(3): 442-453
Prosody can be expressed not only by modifying the timing, stress and intonation of auditory speech but also by modifying visual speech. Studies have shown that the production of visual cues to prosody is highly variable (both within and across speakers); however, behavioural studies have shown that perceivers can effectively use such visual cues. The latter result suggests that people are sensitive to the type of prosody expressed despite cue variability. The current study investigated the extent to which perceivers can match visual cues to prosody from different speakers and from different face regions. Participants were presented with two pairs of sentences (consisting of the same segmental content) and were required to decide which pair had the same prosody. Experiment 1 tested visual and auditory cues from the same speaker and Experiment 2 from different speakers. Experiment 3 used visual cues from the upper and the lower face of the same talker and Experiment 4 from different speakers. The results showed that perceivers could accurately match prosody even when signals were produced by different speakers. Furthermore, perceivers were able to match the prosodic cues both within and across modalities regardless of the face area presented. This ability to match prosody from very different visual cues suggests that perceivers cope with variation in the production of visual prosody by flexibly mapping specific tokens to abstract prosodic types.

10.
Little is known about the underlying dimensions of impaired recognition of emotional prosody that is frequently observed in patients with Parkinson's disease (PD). Because patients with PD also suffer from working memory deficits and impaired time perception, the present study examined the contribution of (a) working memory (frontal executive functioning) and (b) processing of the acoustic parameter speech rate to the perception of emotional prosody in PD. Two acoustic parameters known to be important for emotional classifications (speech duration and pitch variability) were systematically varied in prosodic utterances. Twenty patients with PD and 16 healthy controls (matched for age, sex, and IQ) participated in the study. The findings imply that (1) working memory dysfunctions and perception of emotional prosody are not independent in PD, (2) PD and healthy control subjects perceived vocal emotions categorically along two acoustic manipulation continua, and (3) patients with PD show impairments in processing of speech rate information.
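As a rough illustration of how two acoustic continua like these can be crossed, here is a sketch using librosa and soundfile with a hypothetical input file. A caveat: the study manipulated pitch variability (F0 range), which strictly requires PSOLA-style resynthesis (e.g., in Praat); a global pitch shift is used below only as a simple stand-in.

```python
import librosa
import soundfile as sf

# Load one prosodic utterance (hypothetical file name).
y, sr = librosa.load("utterance.wav", sr=None)

# Cross a duration continuum (time stretching, pitch unchanged) with a
# pitch continuum. Caveat: a global shift changes pitch level, not pitch
# variability; flattening or expanding the F0 contour itself would need
# PSOLA-style resynthesis (e.g., Praat/parselmouth).
for rate in (0.8, 1.0, 1.2):        # <1 = slower, >1 = faster
    for steps in (-2, 0, 2):        # pitch shift in semitones
        out = librosa.effects.time_stretch(y, rate=rate)
        out = librosa.effects.pitch_shift(out, sr=sr, n_steps=steps)
        sf.write(f"utt_rate{rate}_shift{steps}.wav", out, sr)
```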

11.
黄贤军, 张伟欣. 心理科学 (Psychological Science), 2014, 37(4): 851-856
Using ERP methodology, this study examined the time course of emotional prosody processing under an emotion-judgment task and a gender-judgment task. The results showed that in the 175-275 ms window, processing of emotional prosody was modulated by task: the emotion-judgment task produced a main effect of valence and a negativity bias, with angry prosody eliciting a more positive P2 component than happy and neutral prosody, whereas the gender-judgment task showed no valence effect. In the later evaluative-processing and response-preparation stage (400-800 ms), angry prosody elicited a more positive late component than happy and neutral prosody under both tasks. These results indicate that the recognition of different emotional prosodies involves distinct cognitive mechanisms and is, to some extent, modulated by the processing task.

12.
This investigation examined whether speakers produce reliable prosodic correlates to meaning across semantic domains and whether listeners use these cues to derive word meaning from novel words. Speakers were asked to produce phrases in infant-directed speech in which novel words were used to convey one of two meanings from a set of antonym pairs (e.g., big/small). Acoustic analyses revealed that some acoustic features were correlated with overall valence of the meaning. However, each word meaning also displayed a unique acoustic signature, and semantically related meanings elicited similar acoustic profiles. In two perceptual tests, listeners attempted to identify the novel words either with a matching meaning dimension (picture pair) or with mismatched meaning dimensions. Listeners inferred the meaning of the novel words significantly more often when prosody matched the word meaning choices than when prosody mismatched. These findings suggest that speech contains reliable prosodic markers to word meaning and that listeners use these prosodic cues to differentiate meanings. That prosody is semantic suggests a reconceptualization of traditional distinctions between linguistic and nonlinguistic properties of spoken language.

13.
Pell MD. Brain and Language, 2006, 96(2): 221-234
Hemispheric contributions to the processing of emotional speech prosody were investigated by comparing adults with a focal lesion involving the right (n = 9) or left (n = 11) hemisphere and adults without brain damage (n = 12). Participants listened to semantically anomalous utterances in three conditions (discrimination, identification, and rating) which assessed their recognition of five prosodic emotions under the influence of different task- and response-selection demands. Findings revealed that right- and left-hemispheric lesions were associated with impaired comprehension of prosody, although possibly for distinct reasons: right-hemisphere compromise produced a more pervasive insensitivity to emotive features of prosodic stimuli, whereas left-hemisphere damage yielded greater difficulties interpreting prosodic representations as a code embedded with language content.

14.
郑茜, 张亭亭, 李量, 范宁, 杨志刚. 心理学报 (Acta Psychologica Sinica), 2023, 55(2): 177-191
Emotional information in speech (emotional prosody and emotional semantics) can release speech from auditory masking, but the specific mechanisms of this unmasking remain unclear. Across two experiments using a subjective spatial-separation paradigm, and by manipulating the type of masker, the present study examined the mechanisms by which emotional prosody and emotional semantics release speech from informational masking. The results showed that emotional prosody produced an unmasking effect under both perceptual informational masking and combined perceptual-plus-cognitive informational masking. Emotional semantics produced no unmasking effect under perceptual informational masking, but did so under combined perceptual-plus-cognitive informational masking. These results indicate that emotional prosody and emotional semantics unmask speech through different mechanisms. Emotional prosody preferentially captures more of the listener's attention and can overcome the perceptual interference caused by the masker, but does little against interference from the masker's content. Emotional semantics preferentially recruits more of the listener's cognitive-processing resources, releasing speech from cognitive informational masking but not from perceptual informational masking.

15.
The current study assessed the extent to which the use of referential prosody varies with communicative demand. Speaker–listener dyads completed a referential communication task during which speakers attempted to indicate one of two color swatches (one bright, one dark) to listeners. Speakers' bright sentences were reliably higher pitched than dark sentences for ambiguous (e.g., bright red versus dark red) but not unambiguous (e.g., bright red versus dark purple) trials, suggesting that speakers produced meaningful acoustic cues to brightness when the accompanying linguistic content was underspecified (e.g., “Can you get the red one?”). Listening partners reliably chose the correct corresponding swatch for ambiguous trials when lexical information was insufficient to identify the target, suggesting that listeners recruited prosody to resolve lexical ambiguity. Prosody can thus be conceptualized as a type of vocal gesture that can be recruited to resolve referential ambiguity when there is communicative demand to do so.
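A minimal sketch of how the bright-versus-dark pitch comparison could be checked, assuming Python with librosa and SciPy; the speaker count and file names are hypothetical, and this is an illustration rather than the authors' analysis pipeline.

```python
import numpy as np
import librosa
from scipy import stats

def mean_f0(wav_path):
    """Mean F0 (Hz) over the voiced frames of one recorded sentence."""
    y, sr = librosa.load(wav_path, sr=None)
    f0, _, _ = librosa.pyin(y, fmin=60, fmax=400, sr=sr)
    return float(np.nanmean(f0))  # pyin marks unvoiced frames as NaN

# Hypothetical recordings: each speaker's "bright" and "dark" sentence
# from the ambiguous trials (e.g., bright red vs. dark red).
bright = [mean_f0(f"speaker{i}_bright.wav") for i in range(1, 11)]
dark = [mean_f0(f"speaker{i}_dark.wav") for i in range(1, 11)]

# Paired comparison: were bright sentences reliably higher pitched?
t, p = stats.ttest_rel(bright, dark)
print(f"bright mean = {np.mean(bright):.1f} Hz, "
      f"dark mean = {np.mean(dark):.1f} Hz, t = {t:.2f}, p = {p:.3f}")
```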

16.
Background: Facial expressions, prosody, and speech content constitute channels by which information is exchanged. Little is known about the simultaneous and differential contributions of these channels to empathy when each conveys emotionality or neutrality. Neutralised speech content, in particular, has received little attention with regard to how it influences the perception of other emotional cues. Methods: Participants were presented with video clips of actors telling short stories. One condition conveyed emotionality in all channels, while in the other conditions either the speech content, the facial expression, or the prosody was neutral. Participants judged the emotion and intensity presented, as well as their own emotional state and intensity. Skin conductance served as a physiological measure of emotional reactivity. Results: Neutralising a channel significantly reduced empathic responses, and electrodermal recordings confirmed these findings. The channels affected the prerequisites of empathy differentially: recognition of the target's emotion decreased most when the face was neutral, whereas diminished emotional responses to the target's emotion were most evident with neutral speech content. Conclusion: Multichannel integration supports both conscious and autonomous measures of empathy and emotional reactivity. Emotional facial expressions influence emotion recognition, whereas speech content is important for responding with an adequate emotional state of one's own, possibly reflecting contextual emotion appraisal.

17.
18.
This study examined how 42 university students (21 Chinese, 21 Polish) judged the emotion category and intensity of semantically neutral sentences spoken by male and female speakers in five vocal emotions (happiness, anger, fear, sadness, and neutral), in order to analyze differences in emotion perception from vocal cues between Chinese and Polish cultural backgrounds. The results showed that: (1) Chinese participants were more accurate in judging vocal emotion category and gave higher intensity ratings than Polish participants, indicating an in-group advantage in vocal emotion perception; (2) all participants recognized emotion categories more accurately, and rated intensity higher, for female voices than for male voices; (3) in category judgments, fear was recognized more accurately than happiness, sadness, and neutral, with neutral recognized least accurately; and (4) in intensity ratings, fear was rated as more intense than sadness, and happiness was rated least intense.

19.
The purpose of this investigation was to determine if the speech of “successfully therapeutized” stutterers and a group of partially treated stutterers was perceptually different from the speech of normal speakers when judged by unsophisticated listeners. Tape-recorded speech samples of treated stutterers were obtained from leading proponents of (1) Van Riperian, (2) metronome-conditioned speech retraining, (3) delayed auditory feedback, (4) operant conditioning, (5) precision fluency shaping, and (6) “holistic” therapy programs. Fluent speech samples from these groups of stutterers were paired with matched fluent samples of normal talkers and presented to a group of 20 unsophisticated judges. The judges were instructed to select from each paired speech sample presented to them the one produced by the stuttering subject. The results of the analyses showed that five of seven experimental groups were identified at levels significantly above chance. It can be concluded that the fluent speech of the partially and successfully treated stutterers was perceptibly different from the utterances of the normal speakers and that the perceptual disparity can be detected, even by unsophisticated listeners.

20.
Accurately recognizing emotional prosody in speech is essential for social interaction. Using functional near-infrared spectroscopy (fNIRS), the present study explored cortical neural activity during the processing of angry, fearful, and happy prosody under explicit and implicit emotion-processing conditions. The results showed that angry, fearful, and happy prosody were preferentially processed in the left frontal pole/orbitofrontal cortex, the right supramarginal gyrus, and the left inferior frontal gyrus, respectively, with the right supramarginal gyrus modulated by both emotion and task. In addition, the right middle temporal gyrus, inferior temporal gyrus, and temporal pole were activated significantly more strongly in the explicit emotion task than in the implicit task. These findings partly support the hierarchical model of emotional prosody, while calling into question its third level, namely the claim that fine-grained frontal processing of vocal emotional information requires an explicit emotion-processing task.
