1.
On the lateralization of emotional prosody: an event-related functional MR investigation  Cited by: 11 (self-citations: 0, citations by others: 11)
In order to investigate the lateralization of emotional speech, we recorded the brain responses to three emotional intonations in two conditions: "normal" speech and "prosodic" speech (speech with no linguistic meaning but retaining the slow prosodic modulations of speech). Participants listened to semantically neutral sentences spoken with a positive, neutral, or negative intonation in both conditions and judged how positive, negative, or neutral the intonation was on a five-point scale. Core peri-sylvian language areas, as well as some frontal and subcortical areas, were activated bilaterally in the normal speech condition. In contrast, a bilateral fronto-opercular region was active when participants listened to prosodic speech. Positive and negative intonations elicited a bilateral fronto-temporal and subcortical pattern in the normal speech condition, and more frontal activation in the prosodic speech condition. The current results call into question an exclusive right-hemisphere lateralization of emotional prosody and extend patient data on the functional role of the basal ganglia during the perception of emotional prosody.
2.
Despite considerable speculation in the research literature regarding the complementarity of functional lateralization of prosodic and linguistic processes in the normal intact brain, few studies have directly addressed this issue. In the present study, behavioral laterality indices of emotional prosodic and traditional linguistic speech functions were obtained for a sample of healthy young adults, using the dichotic listening method. After screening for adequate emotional prosody and linguistic recognition abilities, participants completed the Fused Rhymed Words Test (FRWT; Wexler & Halwes, 1983) and the Dichotic Emotion Recognition Test (DERT; McNeely & Netley, 1998). Examination of the difference in ear asymmetries for these measures within individuals revealed a complementary pattern in 78% of the sample. However, the correlation between laterality quotients for the FRWT and DERT was near zero, supporting Bryden's model of "statistical" complementarity (e.g., Bryden, 1990).
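As an illustration of how such dichotic-listening laterality measures are typically computed, the following is a minimal sketch using the standard (R − L)/(R + L) × 100 laterality-quotient formula and made-up ear scores; the authors' actual scoring procedure is not given in the abstract.

```python
import numpy as np

def laterality_quotient(right_correct: int, left_correct: int) -> float:
    """Standard laterality quotient: -100 (complete left-ear advantage)
    to +100 (complete right-ear advantage)."""
    total = right_correct + left_correct
    if total == 0:
        return 0.0
    return 100.0 * (right_correct - left_correct) / total

# Hypothetical per-participant ear scores for a linguistic (FRWT-like) and an
# emotional-prosody (DERT-like) dichotic task; real scoring details differ.
rng = np.random.default_rng(0)
n = 20
ling_lq = np.array([laterality_quotient(r, l)
                    for r, l in rng.integers(5, 30, size=(n, 2))])
pros_lq = np.array([laterality_quotient(r, l)
                    for r, l in rng.integers(5, 30, size=(n, 2))])

# "Complementary" pattern within an individual: opposite-signed ear advantages.
complementary = np.mean(np.sign(ling_lq) != np.sign(pros_lq))
# "Statistical" complementarity predicts a near-zero correlation across people.
r = np.corrcoef(ling_lq, pros_lq)[0, 1]
print(f"complementary pattern in {complementary:.0%} of sample, r = {r:.2f}")
```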
3.
Sixty-one single Japanese-speaking women between the ages of 18 and 26 years were recorded as they read picture books aloud to a young child and as they conversed with another Japanese-speaking woman. When their utterances in the two settings were compared acoustically with regard to prosodic features, both the average pitch and the pitch excursions showed a significant increase when interacting with the child in 17 of the 61 women. In 36 of the remaining 44 subjects, neither of these parameters showed such changes. This individual variability was not related to the subjects' liking for picture books, previous experience with reading picture books aloud or being read to, or experience with baby-sitting. The only variable that could explain the results was whether the subjects had grown up with one or more siblings or as only children: if they were only children, the prosodic modification was significantly less likely to occur.
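A minimal sketch of the kind of acoustic comparison described here, computing mean F0 and pitch excursion per recording with librosa's pYIN pitch tracker; the file names are placeholders and the study's actual analysis software is not specified in the abstract.

```python
import numpy as np
import librosa

def pitch_stats(path: str, fmin: float = 75.0, fmax: float = 500.0):
    """Return mean F0 (Hz) and pitch excursion (semitone range) for one recording."""
    y, sr = librosa.load(path, sr=None)
    f0, voiced, _ = librosa.pyin(y, fmin=fmin, fmax=fmax, sr=sr)
    f0 = f0[voiced & np.isfinite(f0)]                      # keep voiced frames only
    mean_f0 = float(np.mean(f0))
    excursion_st = 12 * np.log2(np.max(f0) / np.min(f0))   # pitch range in semitones
    return mean_f0, excursion_st

# Hypothetical file names for the two settings (child-directed vs. adult-directed speech).
child_mean, child_exc = pitch_stats("subject01_childdirected.wav")
adult_mean, adult_exc = pitch_stats("subject01_adultdirected.wav")
print(f"mean F0: {child_mean:.0f} vs {adult_mean:.0f} Hz; "
      f"excursion: {child_exc:.1f} vs {adult_exc:.1f} semitones")
```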
4.
5.
6.
Toshiaki Muramoto. Japanese Psychological Research, 2014, 56(3): 275–287
This study explored the effects of comma insertion on the processing of garden-path sentences in Japanese. In two experiments, participants read relative-clause sentences containing two ambiguities: a single-clause versus relative-clause structure, and early-opening (EO) versus late-opening (LO) left clause boundaries. EO sentences were presented with or without a comma compatible with an EO boundary in Experiment 1 and with a comma compatible with an LO boundary in Experiment 2. The results showed that the comma, whether compatible or incompatible with the correct clause boundary, decreased reading time for the relative clause's head noun, indicating that a comma helps readers avoid or recover from garden paths caused by relative-clause structures. Conversely, a comma incompatible with a clause boundary increased the processing cost of resolving the second ambiguity (EO vs. LO). We conclude that punctuation affects the processing of temporary ambiguity in Japanese as it does in languages with stricter punctuation rules, and that readers depend strongly on punctuation for online processing of whole sentence structures. We also discuss the relationship between punctuation and (implicit) prosody.
7.
8.
Oliver H. Turnbull, Sara L. Rhys-Jones, A. Lyn Jackson. The Journal of Genetic Psychology, 2013, 174(2): 178–186
Popular theory on the tendency to cradle an infant to the left side points to the specialization of the right hemisphere for the perception and expression of emotion. J. S. Sieratzki and B. Woll (1996) recently suggested that more emphasis be placed on the auditory modality, specifically focusing on the role of prosodic information. In this study, the direction of the lateral cradling bias in a group of profoundly deaf children, a group of deaf adults, and a control group of adults with no hearing impairment was investigated. The authors found a strong leftward cradling bias in all groups, a bias that was, if anything, stronger in the deaf participants. Given that people who are profoundly deaf, especially those who have been deaf from birth, have not been exposed to auditory prosody, the data do not support the suggestion that such prosodic information is the basis for the leftward cradling bias.
9.
There is strong evidence of shared acoustic profiles common to the expression of emotions in music and speech, yet relatively limited understanding of the specific psychoacoustic features involved. This study combined a controlled experiment and computational modelling to investigate the perceptual codes associated with the expression of emotion in the acoustic domain. The empirical stage of the study collected continuous human ratings of emotions perceived in excerpts of film music and natural speech samples. The computational stage created a computer model that extracts the relevant information from the acoustic stimuli and predicts the emotional expressiveness of speech and music, closely approximating the responses of human subjects. We show that a significant part of the listeners' second-by-second reported emotions to music and speech prosody can be predicted from a set of seven psychoacoustic features: loudness, tempo/speech rate, melody/prosody contour, spectral centroid, spectral flux, sharpness, and roughness. The implications of these results are discussed in the context of cross-modal similarities in the communication of emotion in the acoustic domain.
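A rough sketch of such a feature-based prediction pipeline: extract a subset of the listed psychoacoustic features with librosa and fit a linear regression to continuous ratings. librosa, scikit-learn, the file name, and the ratings array are assumptions rather than the authors' actual model, and sharpness and roughness are omitted because they have no standard librosa implementation.

```python
import numpy as np
import librosa
from sklearn.linear_model import LinearRegression

def frame_features(path: str):
    """Frame-wise proxies for some of the psychoacoustic features listed above:
    loudness (RMS energy), spectral centroid, spectral flux (onset strength),
    and prosody contour (fundamental frequency)."""
    y, sr = librosa.load(path, sr=None)
    rms = librosa.feature.rms(y=y)[0]
    centroid = librosa.feature.spectral_centroid(y=y, sr=sr)[0]
    flux = librosa.onset.onset_strength(y=y, sr=sr)
    f0, _, _ = librosa.pyin(y, fmin=75, fmax=500, sr=sr)
    n = min(len(rms), len(centroid), len(flux), len(f0))   # align frame counts
    f0 = np.nan_to_num(f0[:n], nan=0.0)                    # unvoiced frames -> 0
    return np.column_stack([rms[:n], centroid[:n], flux[:n], f0])

# Hypothetical stimulus and continuous emotion ratings (one value per frame);
# the study used second-by-second human ratings instead of this placeholder.
X = frame_features("film_excerpt.wav")
ratings = np.random.default_rng(1).uniform(-1, 1, size=len(X))

model = LinearRegression().fit(X, ratings)
print("R^2 on the (placeholder) ratings:", model.score(X, ratings))
```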
10.
Talar M. Hopyan-Misakyan Karen A. Gordon Maureen Dennis Blake C. Papsin 《Child neuropsychology》2013,19(2):136-146
Cochlear implant (CI) devices provide the opportunity for children who are deaf to perceive sound by electrical stimulation of the auditory nerve, with the goal of optimizing oral communication. One part of oral communication concerns meaning, while another part concerns emotion: affective speech prosody, in the auditory domain, and facial affect, in the visual domain. It is not known whether childhood CI users can identify emotion in speech and faces, so we investigated speech prosody and facial affect in children who had been deaf from infancy and experienced CI users. METHOD: Study participants were 18 CI users (ages 7–13 years) who received right unilateral CIs and 18 age- and gender-matched controls. Emotion recognition in speech prosody and faces was measured by the Diagnostic Analysis of Nonverbal Accuracy. RESULTS: Compared to controls, children with right CIs could identify facial affect but not affective speech prosody. Age at test and time since CI activation were uncorrelated with overall outcome measures. CONCLUSION: Children with right CIs recognize emotion in faces but have limited perception of affective speech prosody. 相似文献