Similar Documents
 20 similar documents found (search time: 15 ms)
1.
There is strong evidence of shared acoustic profiles common to the expression of emotions in music and speech, yet relatively limited understanding of the specific psychoacoustic features involved. This study combined a controlled experiment and computational modelling to investigate the perceptual codes associated with the expression of emotion in the acoustic domain. The empirical stage of the study provided continuous human ratings of emotions perceived in excerpts of film music and natural speech samples. The computational stage created a computer model that retrieves the relevant information from the acoustic stimuli and makes predictions about the emotional expressiveness of speech and music close to the responses of human subjects. We show that a significant part of the listeners’ second-by-second reported emotions to music and speech prosody can be predicted from a set of seven psychoacoustic features: loudness, tempo/speech rate, melody/prosody contour, spectral centroid, spectral flux, sharpness, and roughness. The implications of these results are discussed in the context of cross-modal similarities in the communication of emotion in the acoustic domain.
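The abstract does not detail the study's own feature-extraction pipeline, but one of the seven features it names, the spectral centroid (the magnitude-weighted mean frequency of the spectrum), is straightforward to compute from a waveform. A minimal illustrative sketch, assuming a mono signal array and a fixed frame size; the function name and parameters are hypothetical, not the authors' implementation:

```python
import numpy as np

def spectral_centroid(signal: np.ndarray, sample_rate: int,
                      frame_size: int = 1024) -> np.ndarray:
    """Per-frame spectral centroid: the magnitude-weighted mean frequency (Hz)."""
    n_frames = len(signal) // frame_size
    freqs = np.fft.rfftfreq(frame_size, d=1.0 / sample_rate)
    centroids = []
    for i in range(n_frames):
        frame = signal[i * frame_size:(i + 1) * frame_size]
        # Hann window reduces spectral leakage before the FFT.
        mag = np.abs(np.fft.rfft(frame * np.hanning(frame_size)))
        total = mag.sum()
        centroids.append(freqs @ mag / total if total > 0 else 0.0)
    return np.array(centroids)

# Sanity check: a pure 440 Hz tone should yield centroids near 440 Hz.
sr = 16000
t = np.arange(sr) / sr
tone = np.sin(2 * np.pi * 440 * t)
print(spectral_centroid(tone, sr).mean())
```

The remaining features (spectral flux, sharpness, roughness, etc.) each have analogous frame-level definitions, but which exact formulations the study used is not stated in the abstract.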

2.
Erin E. Hannon, Cognition, 2009, 111(3): 403-409
Recent evidence suggests that the musical rhythm of a particular culture may parallel the speech rhythm of that culture’s language (Patel, A. D., & Daniele, J. R. (2003). An empirical comparison of rhythm in language and music. Cognition, 87, B35-B45). The present experiments aimed to determine whether listeners actually perceive such rhythmic differences in a purely musical context (i.e., in instrumental music without words). In Experiment 1a, listeners successfully classified instrumental renditions of French and English songs having highly contrastive rhythmic differences. Experiment 1b replicated this result with the same songs containing rhythmic information only. In Experiments 2a and 2b, listeners successfully classified original and rhythm-only stimuli when language-specific rhythmic differences were less contrastive but more representative of differences found in actual music and speech. These findings indicate that listeners can use rhythmic similarities and differences to classify songs originally composed in two languages having contrasting rhythmic prosody.

3.
Young and old adults’ ability to recognize emotions from vocal expressions and music performances was compared. The stimuli consisted of (a) acted speech (anger, disgust, fear, happiness, and sadness; each posed with both weak and strong emotion intensity), (b) synthesized speech (anger, fear, happiness, and sadness), and (c) short melodies played on the electric guitar (anger, fear, happiness, and sadness; each played with both weak and strong emotion intensity). The listeners’ recognition of discrete emotions and emotion intensity was assessed and the recognition rates were controlled for various response biases. Results showed emotion-specific age-related differences in recognition accuracy. Old adults consistently showed significantly lower recognition rates for negative, but not for positive, emotions for both speech and music stimuli. Some age-related differences were also evident in the listeners’ ratings of emotion intensity. The results show the importance of considering individual emotions in studies on age-related differences in emotion recognition.

4.
Infant-directed maternal speech is an important component of infants’ linguistic input. However, speech from other speakers and speech directed to others constitute a large amount of the linguistic environment. What are the properties of infant-directed speech that differentiate it from other components of infants’ speech environment? To what extent should these other aspects be considered as part of the linguistic input? This review examines the characteristics of the speech input to preverbal infants, including phonological, morphological, and syntactic characteristics, specifically how these properties might support language development. While maternal, infant-directed speech is privileged in the input, other aspects of the environment, such as adult-directed speech, may also play a role. Furthermore, the input is variable in nature, dependent on the age and linguistic development of the infant, the social context, and the interaction between the infant and speakers in the environment.

5.
Successful mindreading entails both the ability to think about what others know or believe, and to use this knowledge to generate predictions about how mental states will influence behavior. While previous studies have demonstrated that young infants are sensitive to others’ mental states, there continues to be much debate concerning how to characterize early theory of mind abilities. In the current study, we asked whether 6-month-old infants appreciate the causal role that beliefs play in action. Specifically, we tested whether infants generate action predictions that are appropriate given an agent’s current belief. We exploited a novel, neural indication of action prediction: motor cortex activation as measured by sensorimotor alpha suppression, to ask whether infants would generate differential predictions depending on an agent’s belief. After first verifying our paradigm and measure with a group of adult participants, we found that when an agent had a false belief that a ball was in the box, motor activity indicated that infants predicted she would reach for the box, but when the agent had a false belief that a ball was not in the box, infants did not predict that she would act. In both cases, infants based their predictions on what the agent, rather than the infant, believed to be the case, suggesting that by 6 months of age, infants can exploit their sensitivity to other minds for action prediction.

6.
Two studies using novel extensions of the conditioned head-turning method examined contributions of rhythmic and distributional properties of syllable strings to 8-month-old infants' speech segmentation. The two techniques introduced exploit fundamental, but complementary, properties of representational units. The first involved assessment of discriminative response maintenance when simple training stimuli were embedded in more complex speech contexts; the second involved measurement of infants' latencies in detecting extraneous signals superimposed on speech stimuli. A complex pattern of results is predicted if infants succeed in grouping syllables into higher-order units. Across the two studies, the predicted pattern of results emerged, indicating that rhythmic properties of speech play an important role in guiding infants toward potential linguistically relevant units and simultaneously demonstrating that the techniques proposed here provide valid, converging measures of infants' auditory representational units.

7.
The aim of the present mixed cross-sectional and longitudinal study was to observe and describe some aspects of vocal imitation in natural mother-infant interaction. Specifically, maternal imitation of infant utterances was observed in relation to the imitative modeling, mirrored equivalence, and social guided learning models of infant speech development. Nine mother-infant dyads were audio-video recorded. Infants were recruited at different ages between 6 and 11 months and followed for 3 months, providing a quasi-longitudinal series of data from 6 through 14 months of age. It was observed that maternal imitation was more frequent than infant imitation even though vocal imitation was a rare maternal response. Importantly, mothers used a range of contingent and noncontingent vocal responses in interaction with their infants. Mothers responded to three-quarters of their infant's vocalizations, including speech-like and less mature vocalization types. The infants’ phonetic repertoire expanded with age. Overall, the findings are most consistent with the social guided learning approach. Infants rarely imitated their mothers, suggesting a creative, self-motivated learning mechanism that requires further investigation.

8.
In recent years, functional near-infrared spectroscopy (fNIRS) has substantially contributed to the understanding of language and its neural correlates. In contrast to other imaging techniques, fNIRS is well suited to studying language function in healthy and psychiatric populations due to its inexpensive and easy application in a quiet and natural measurement setting. Its relative insensitivity to motion artifacts allows the use of overt speech tasks and the investigation of verbal conversation. The present review focuses on the numerous contributions of fNIRS to the field of language, its development, and related psychiatric disorders, but also on its limitations and future prospects.

9.
Prior studies have observed selective neural responses in the adult human auditory cortex to music and speech that cannot be explained by the differing lower-level acoustic properties of these stimuli. Does infant cortex exhibit similarly selective responses to music and speech shortly after birth? To answer this question, we attempted to collect functional magnetic resonance imaging (fMRI) data from 45 sleeping infants (2.0- to 11.9-weeks-old) while they listened to monophonic instrumental lullabies and infant-directed speech produced by a mother. To match acoustic variation between music and speech sounds we (1) recorded music from instruments that had a similar spectral range as female infant-directed speech, (2) used a novel excitation-matching algorithm to match the cochleagrams of music and speech stimuli, and (3) synthesized “model-matched” stimuli that were matched in spectrotemporal modulation statistics to (yet perceptually distinct from) music or speech. Of the 36 infants we collected usable data from, 19 had significant activations to sounds overall compared to scanner noise. From these infants, we observed a set of voxels in non-primary auditory cortex (NPAC) but not in Heschl's Gyrus that responded significantly more to music than to each of the other three stimulus types (but not significantly more strongly than to the background scanner noise). In contrast, our planned analyses did not reveal voxels in NPAC that responded more to speech than to model-matched speech, although other unplanned analyses did. These preliminary findings suggest that music selectivity arises within the first month of life. A video abstract of this article can be viewed at https://youtu.be/c8IGFvzxudk.

Research Highlights

  • Responses to music, speech, and control sounds matched for the spectrotemporal modulation statistics of each sound were measured from 2- to 11-week-old sleeping infants using fMRI.
  • Auditory cortex was significantly activated by these stimuli in 19 out of 36 sleeping infants.
  • Selective responses to music compared to the three other stimulus classes were found in non-primary auditory cortex but not in nearby Heschl's Gyrus.
  • Selective responses to speech were not observed in planned analyses but were observed in unplanned, exploratory analyses.

10.
Batchelder, E. O., Cognition, 2002, 83(2): 167-206
Prelinguistic infants must find a way to isolate meaningful chunks from the continuous streams of speech that they hear. BootLex, a new model which uses distributional cues to build a lexicon, demonstrates how much can be accomplished using this single source of information. This conceptually simple probabilistic algorithm achieves significant segmentation results on various kinds of language corpora (English, Japanese, and Spanish; child- and adult-directed speech and written texts; and several variations in coding structure) and reveals which statistical characteristics of the input have an influence on segmentation performance. BootLex is then compared, quantitatively and qualitatively, with three other groups of computational models of the same infant segmentation process, paying particular attention to functional characteristics of the models and their similarity to human cognition. Commonalities and contrasts among the models are discussed, as well as their implications both for theories of the cognitive problem of segmentation itself, and for the general enterprise of computational cognitive modeling.
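BootLex's own algorithm is not given in the abstract, but the general idea of segmenting speech from distributional cues alone can be sketched with the classic transitional-probability statistic: posit a word boundary wherever P(next syllable | current syllable) dips below a threshold. This is a hypothetical illustration of distributional segmentation in general, not Batchelder's model; the toy corpus and threshold are assumptions made for the example:

```python
from collections import Counter

def segment_by_transitional_probability(utterances, threshold=0.6):
    """Posit a word boundary between adjacent syllables whose transitional
    probability P(next | current) falls below `threshold`."""
    # Estimate P(B | A) = count(A B) / count(A) from the corpus.
    unigrams, bigrams = Counter(), Counter()
    for utt in utterances:
        unigrams.update(utt)
        bigrams.update(zip(utt, utt[1:]))

    words = []
    for utt in utterances:
        current = [utt[0]]
        for a, b in zip(utt, utt[1:]):
            tp = bigrams[(a, b)] / unigrams[a]
            if tp < threshold:  # low transitional probability -> likely boundary
                words.append("".join(current))
                current = []
            current.append(b)
        words.append("".join(current))
    return words

# Toy corpus: "baby" and "dog" recombine freely, so within-word transitional
# probabilities are high while across-word ones are low.
corpus = [["ba", "by", "dog"], ["dog", "ba", "by"], ["ba", "by", "ba", "by"]]
print(segment_by_transitional_probability(corpus))
# → ['baby', 'dog', 'dog', 'baby', 'baby', 'baby']
```

BootLex builds an explicit lexicon bootstrapped from such distributional statistics rather than thresholding a single bigram statistic, so this sketch only conveys the family of approach.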

11.
Discriminating temporal relationships in speech is crucial for speech and language development. However, temporal variation of vowels is difficult to perceive for young infants when it is determined by surrounding speech sounds. Using a familiarization-discrimination paradigm, we show that English-learning 6- to 9-month-olds are capable of discriminating non-native acoustic vowel duration differences that systematically vary with subsequent consonantal durations. Furthermore, temporal regularity of stimulus presentation potentially makes the task easier for infants. These findings show that young infants can process fine-grained temporal aspects of speech sounds, a capacity that lays the foundation for building a phonological system of their ambient language(s).

12.
13.
Music is a stimulus capable of triggering an array of basic and complex emotions. We investigated whether and how individuals employ music to induce specific emotional states in everyday situations for the purpose of emotion regulation. Furthermore, we wanted to examine whether specific emotion-regulation styles influence music selection in specific situations. Participants indicated how likely it would be that they would want to listen to various pieces of music (which are known to elicit specific emotions) in various emotional situations. Data analyses by means of non-metric multidimensional scaling revealed a clear preference for pieces of music that were emotionally congruent with an emotional situation. In addition, we found that specific emotion-regulation styles might influence the selection of pieces of music characterised by specific emotions. Our findings demonstrate emotion-congruent music selection and highlight the important role of specific emotion-regulation styles in the selection of music in everyday situations.

14.
Adults and children 5, 8, and 11 years of age listened to short excerpts of unfamiliar music that sounded happy, scary, peaceful, or sad. Listeners initially rated how much they liked each excerpt. They subsequently made a forced-choice judgment about the emotion that each excerpt conveyed. Identification accuracy was higher for young girls than for young boys, but both genders reached adult-like levels by age 11. High-arousal emotions (happiness and fear) were better identified than low-arousal emotions (peacefulness and sadness), and this advantage was exaggerated among younger children. Whereas children of all ages preferred excerpts depicting high-arousal emotions, adults favored excerpts depicting positive emotions (happiness and peacefulness). A preference for positive emotions over negative emotions was also evident among females of all ages. As identification accuracy improved, liking for positively valenced music increased among 5- and 8-year-olds but decreased among 11-year-olds.

15.
16.
Existing evidence indicates that maternal responses to infant distress, specifically more sensitive and less inconsistent/rejecting responses, are associated with lower infant negative affect (NA). However, due to ethical and methodological constraints, most existing studies do not employ methods that guarantee each mother will be observed responding to infant distress. To address such limitations, in the current study, a distressed infant simulator (SIM), programmed to be inconsolable, was employed to ensure that mothers (N = 150; 4 months postpartum) were observed responding to infant distress. Subsequently, maternal report of infant NA and an early aspect of regulatory capacity, soothability, were collected at eight months postpartum, and observational assessments of infant fear and frustration, fine-grained aspects of NA, were collected at 12 months of age. After controlling for infant sex, the proportion of time mothers spent using soothing touch during the SIM task was related to less overall maternally reported NA and sadness at eight months postpartum. Similarly, greater use of touch was associated with less fear reactivity, and greater maternal use of vocalizations was related to lower infant frustration, at 12 months postpartum. Specific maternal soothing behaviors were not related to infant soothability at 8 months postpartum. Total time spent interacting with the SIM was not related to infant temperament, suggesting that type of soothing, not quantity of interactions with distressed infants, is important for reducing infant NA. The implications of these findings and important future directions are discussed.

17.
Bradlow, A. R., & Bent, T., Cognition, 2008, 106(2): 707-729
This study investigated talker-dependent and talker-independent perceptual adaptation to foreign-accented English. Experiment 1 investigated talker-dependent adaptation by comparing native English listeners' recognition accuracy for Chinese-accented English across single and multiple talker presentation conditions. Results showed that the native listeners adapted to the foreign-accented speech over the course of the single talker presentation condition with some variation in the rate and extent of this adaptation depending on the baseline sentence intelligibility of the foreign-accented talker. Experiment 2 investigated talker-independent perceptual adaptation to Chinese-accented English by exposing native English listeners to Chinese-accented English and then testing their perception of English produced by a novel Chinese-accented talker. Results showed that, if exposed to multiple talkers of Chinese-accented English during training, native English listeners could achieve talker-independent adaptation to Chinese-accented English. Taken together, these findings provide evidence for highly flexible speech perception processes that can adapt to speech that deviates substantially from the pronunciation norms in the native talker community along multiple acoustic-phonetic dimensions.

18.
Recent findings have revealed that very preterm neonates already show the typical brain responses to place of articulation changes in stop consonants, but data on their sensitivity to other types of phonetic changes remain scarce. Here, we examined the impact of 7-8 weeks of extra-uterine life on the automatic processing of syllables in 20 healthy moderate preterm infants (mean gestational age at birth 33 weeks) matched in maturational age with 20 full-term neonates, thus differing in their previous auditory experience. This design allows elucidating the contribution of extra-uterine auditory experience in the immature brain on the encoding of linguistically relevant speech features. Specifically, we collected brain responses to natural CV syllables differing in three dimensions using a multi-feature mismatch paradigm, with the syllable /ba/ as the standard and three deviants: a pitch change, a vowel change to /bo/ and a consonant voice-onset time (VOT) change to /pa/. No significant between-group differences were found for pitch and consonant VOT deviants. However, moderate preterm infants showed attenuated responses to vowel deviants compared to full terms. These results suggest that moderate preterm infants' limited experience with low-pass filtered speech prenatally can hinder vowel change detection and that exposure to natural speech after birth does not seem to contribute to improve this capacity. These data are in line with recent evidence suggesting a sequential development of a hierarchical functional architecture of speech processing that is highly sensitive to early auditory experience.

19.
Infants and adults are well able to match auditory and visual speech, but the cues on which they rely (viz. temporal, phonetic and energetic correspondence in the auditory and visual speech streams) may differ. Here we assessed the relative contribution of the different cues using sine-wave speech (SWS). Adults (N = 52) and infants (N = 34, ages 5 to 15 months) matched two trisyllabic speech sounds (‘kalisu’ and ‘mufapi’), either natural or SWS, with visual speech information. On each trial, adults saw two articulating faces and matched a sound to one of these, while infants were presented the same stimuli in a preferential looking paradigm. Adults’ performance was almost flawless with natural speech, but was significantly less accurate with SWS. In contrast, infants matched the sound to the articulating face equally well for natural speech and SWS. These results suggest that infants rely to a lesser extent on phonetic cues than adults do to match audio to visual speech. This is in line with the notion that the ability to extract phonetic information from the visual signal increases during development, and suggests that phonetic knowledge might not be the basis for early audiovisual correspondence detection in speech.

20.
Infants’ responsiveness to maternal speech and singing
Infants who were 6 months of age were presented with extended audiovisual episodes of their mother's infant-directed speech or singing. Cumulative visual fixation and initial fixation of the mother's image were longer for maternal singing than for maternal speech. Moreover, movement reduction, which may signal intense engagement, accompanied visual fixation more frequently for maternal singing than for maternal speech. The stereotypy and repetitiveness of maternal singing may promote moderate arousal levels, which sustain infant attention, in contrast to the greater variability of speech, which may result in cycles of heightened arousal, gaze aversion, and re-engagement. The regular pulse of music may also enhance emotional coordination between mother and infant.


Copyright © Beijing Qinyun Technology Development Co., Ltd.  京ICP备09084417号