Similar Literature
20 similar documents found.
1.
Female singers were recorded singing a song in a high and/or a low range. Infants preferred to listen to the higher-pitched versions, suggesting that infants' preference for infant-directed singing and speech is mediated in part by a preference for higher pitch.

2.
In this study, we investigated the impact of congenital amusia, a disorder of musical processing, on speech and song imitation in speakers of a tone language, Mandarin. A group of 13 Mandarin-speaking individuals with congenital amusia and 13 matched controls were recorded while imitating a set of speech and two sets of song stimuli with varying pitch and rhythm patterns. The results indicated that individuals with congenital amusia were worse than controls in both speech and song imitation, in terms of both pitch matching (absolute and relative) and rhythm matching (relative time and number of time errors). Like the controls, individuals with congenital amusia achieved better absolute and relative pitch matching and made fewer pitch interval and contour errors in song than in speech imitation. These findings point toward domain-general pitch (and time) production deficits in congenital amusia, suggesting the presence of shared pitch production mechanisms but distinct requirements for pitch-matching accuracy in language and music processing.
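The pitch-matching measures above (absolute and relative) are, in general terms, note-by-note comparisons between an imitation and its target on a logarithmic pitch scale. The sketch below is only an illustrative assumption of how such metrics might be computed in cents, with hypothetical function names and toy data; it is not the authors' analysis pipeline.

```python
import numpy as np

def hz_to_cents(f0_hz, ref_hz=55.0):
    """Convert frequencies in Hz to cents above a reference (A1 = 55 Hz here)."""
    return 1200.0 * np.log2(np.asarray(f0_hz, dtype=float) / ref_hz)

def pitch_matching_errors(target_hz, imitation_hz):
    """Toy pitch-matching metrics for note-by-note imitation data.

    Both arguments are equal-length sequences of per-note F0 values (Hz).
    Returns mean absolute pitch error (cents), mean interval error (cents),
    and the number of contour errors (interval direction mismatches).
    """
    t = hz_to_cents(target_hz)
    m = hz_to_cents(imitation_hz)
    abs_error = np.mean(np.abs(m - t))               # absolute pitch matching
    t_int, m_int = np.diff(t), np.diff(m)            # successive intervals
    interval_error = np.mean(np.abs(m_int - t_int))  # relative pitch matching
    contour_errors = int(np.sum(np.sign(m_int) != np.sign(t_int)))
    return abs_error, interval_error, contour_errors

# Hypothetical 4-note target and a slightly sharp, contour-preserving imitation
target = [220.0, 246.9, 220.0, 196.0]
imitation = [225.0, 252.0, 226.0, 200.0]
print(pitch_matching_errors(target, imitation))
```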

3.
Infants attend more to infant-directed speech (IDS) than to adult-directed speech (ADS), but infants also prefer speech judged to be high in positive emotion over less emotional speech, regardless of whether it is IDS or ADS. Emotion in voices is often conveyed by absolute pitch, pitch contours, and tempo (or duration). The purpose of our study was to explore how perceived emotion in speech is enhanced or attenuated by duration. We tested 18- and 32-week-old infants for attention to IDS that was either high or low in emotion (as judged by adults) and at two different durations (normal vs. slow). The results showed that 18-week-olds attended more to slow IDS (with affect constant), attended more to high affect (with duration constant), and showed equal attention when affect and duration were juxtaposed. In contrast, 32-week-olds showed greater attention to normal IDS regardless of its emotional level. Slower IDS may enhance younger infants' perception of vocal emotion but does not increase attention in older infants, perhaps because they no longer rely on this acoustic cue for emotion. We suggest future studies to help tease apart these interpretations.

4.
To better understand how infants process complex auditory input, this study investigated whether 11-month-old infants perceive the pitch (melodic) or the phonetic (lyric) components within songs as more salient, and whether melody facilitates phonetic recognition. Using a preferential looking paradigm, uni-dimensional and multi-dimensional songs were tested; either the pitch or syllable order of the stimuli varied. As a group, infants detected a change in pitch order in a 4-note sequence when the syllables were redundant (experiment 1), but did not detect the identical pitch change with variegated syllables (experiment 2). Infants were better able to detect a change in syllable order in a sung sequence (experiment 2) than the identical syllable change in a spoken sequence (experiment 1). These results suggest that by 11 months, infants cannot “ignore” phonetic information in the context of perceptually salient pitch variation. Moreover, the increased phonetic recognition in song contexts mirrors findings that demonstrate advantages of infant-directed speech. Findings are discussed in terms of how stimulus complexity interacts with the perception of sung speech in infancy.

5.
Many studies have found that infant-directed (ID) speech has higher pitch, has more exaggerated pitch contours, has a larger pitch range, has a slower tempo, and is more rhythmic than typical adult-directed (AD) speech. We show that the ID speech style reflects free vocal expression of emotion to infants, in comparison with more inhibited expression of emotion in typical AD speech. When AD speech does express emotion, the same acoustic features are used as in ID speech. We recorded ID and AD samples of speech expressing love-comfort, fear, and surprise. The emotions were equally discriminable in the ID and AD samples. Acoustic analyses showed few differences between the ID and AD samples, but robust differences across the emotions. We conclude that ID prosody itself is not special. What is special is the widespread expression of emotion to infants in comparison with the more inhibited expression of emotion in typical adult interactions.

6.
In two experiments, we investigated context effects on tempo judgments for familiar and unfamiliar songs performed by popular artists. In Experiment 1, participants made comparative tempo judgments to a remembered standard for song clips drawn from either a slow or a fast context, created by manipulating the tempos of the same songs. Although both familiar and unfamiliar songs showed significant shifts in their points of subjective equality toward the tempo context values, more-familiar songs showed significantly reduced contextual bias. In Experiment 2, tempo pleasantness ratings showed significant context effects in which the ordering of tempos on the pleasantness scale differed across contexts, with the most pleasant tempo shifting toward the contextual values, an assimilation of ideal points. Once again, these effects were significant but reduced for the more-familiar songs. The moderating effects of song familiarity support a weak version of the absolute-tempo hypothesis, in which long-term memory for tempo reduces but does not eliminate contextual effects. Thus, although both relative and absolute tempo information appear to be encoded in memory, the absolute representation may be subject to rapid revision by recently experienced tempo-altered versions of the same song.
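A point of subjective equality (PSE) for tempo is commonly estimated by fitting a psychometric function to the proportion of "faster than the remembered standard" responses across comparison tempos; a shift of the PSE toward the context tempo would index the contextual bias described here. The following sketch, using made-up responses and a simple logistic fit, is an assumption about how such an estimate might be obtained, not the authors' procedure.

```python
import numpy as np
from scipy.optimize import curve_fit

def logistic(x, pse, slope):
    """Psychometric function: P('faster') as a function of comparison tempo (BPM)."""
    return 1.0 / (1.0 + np.exp(-slope * (x - pse)))

# Hypothetical data: comparison tempos (BPM) and proportion of "faster" judgments
tempos = np.array([80, 90, 100, 110, 120, 130], dtype=float)
p_faster = np.array([0.05, 0.15, 0.40, 0.70, 0.90, 0.97])

# Fit the curve; the PSE is the tempo judged "faster" 50% of the time
(pse, slope), _ = curve_fit(logistic, tempos, p_faster, p0=[105.0, 0.1])
print(f"Estimated PSE: {pse:.1f} BPM")
# A fast-tempo context would be expected to shift this PSE upward (assimilation).
```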

7.
The purpose of this study was to explore the relationship between mothers’ depressive symptoms and the acoustic parameters of infant-directed (ID) singing. Participants included 80 mothers and their 3- to 9-month-old infants. Each mother's voice was digitally recorded while she sang to her infant. Extraction and analysis of the vocal data revealed a main effect of tempo: as mothers reported more depressive symptoms, they tended to sing faster to their infants. Additionally, an interaction effect indicated that mothers with depressive symptoms were more likely to sing with tonal key clarity to their male infants. These findings suggest that as mothers experience depressive symptoms, their ID singing may lack the sensitivity and emotional expression that infants need for affect regulation. An intervention that combines interaction coaching and ID singing may help mothers with depressive symptoms to engage in sensitive and emotionally synchronized interactions with their infants.

8.
Parents' infant-directed vocalizations are highly dynamic and emotive compared to their adult-directed counterparts, and correspondingly capture infants' attention more effectively. Infant-directed singing is a specific type of vocalization that is common throughout the world. Parents tend to sing a small handful of songs in a stereotyped way, and a number of recent studies have highlighted the significance of familiar songs in young children's social behaviors and evaluations. To date, no studies have examined whether infants' responses to familiar versus unfamiliar songs are modulated by singer identity (i.e., whether the singer is their own parent). In the present study, we investigated 9- to 12-month-old infants' (N = 29) behavioral and electrodermal responses to relatively familiar and unfamiliar songs sung by either their own mother or another infant's mother. Familiar songs recruited more attention and rhythmic movement, and lower electrodermal levels, relative to unfamiliar songs. Moreover, these responses were robust regardless of whether the singer was the infant's mother or a stranger, even when the stranger's rendition differed greatly from the mother's in mean fundamental frequency and tempo. Results indicate that infants' interest in familiar songs is not limited to the idiosyncratic characteristics of their parents' song renditions, and point to the potential for song as an effective early signifier of group membership.

9.
Seventy 6- to 9-month-old infants were videotaped during six interactions: mother sings an assigned song, a "stranger" sings the assigned song, mother sings a song of her choice, mother reads a book, mother plays with a toy, and mother and infant listen to recorded music. Infant-directed (ID) singing conditions elicited moderately positive cognitive behavior, low levels of positive physical behavior, and minimal vocal behavior, mostly negative. Across all conditions, cognitive scores remained positive at low to moderate levels. Physical responses were most positive during the book and toy conditions, most negative during recorded music, and differed by gender, especially during ID singing. Vocally, infants responded positively to the toy, and 8-month-old infants vocalized more than younger infants, particularly during ID singing conditions. ID singing appears just as effective as book reading or toy play in sustaining infant attention and far more effective than listening to recorded music, while interactions involving objects may provide opportunities for shared attention.

10.
To what extent do infants represent the absolute pitches of complex auditory stimuli? Two experiments with 8-month-old infants examined the use of absolute and relative pitch cues in a tone-sequence statistical learning task. The results suggest that, given unsegmented stimuli that do not conform to the rules of musical composition, infants are more likely to track patterns of absolute pitches than of relative pitches. A 3rd experiment tested adults with or without musical training on the same statistical learning tasks used in the infant experiments. Unlike the infants, adult listeners relied primarily on relative pitch cues. These results suggest a shift from an initial focus on absolute pitch to the eventual dominance of relative pitch, which, it is argued, is more useful for both music and speech processing.
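In tone-sequence statistical learning tasks of this kind, "tracking patterns" typically means learning transition probabilities between successive elements; the absolute-pitch and relative-pitch accounts differ only in whether those elements are the pitches themselves or the intervals between them. The sketch below illustrates that contrast with hypothetical MIDI note numbers; it is not the stimulus set or analysis from this study.

```python
from collections import Counter, defaultdict

def transition_probabilities(sequence):
    """Estimate P(next | current) from a sequence of discrete elements."""
    pair_counts = Counter(zip(sequence[:-1], sequence[1:]))
    first_counts = Counter(sequence[:-1])
    probs = defaultdict(dict)
    for (a, b), n in pair_counts.items():
        probs[a][b] = n / first_counts[a]
    return dict(probs)

# Hypothetical continuous tone stream, coded as MIDI note numbers
stream = [60, 64, 67, 62, 65, 69, 60, 64, 67, 62, 65, 69]

# Absolute-pitch representation: transitions between the pitches themselves
abs_tp = transition_probabilities(stream)

# Relative-pitch representation: transitions between successive intervals (semitones)
intervals = [b - a for a, b in zip(stream[:-1], stream[1:])]
rel_tp = transition_probabilities(intervals)

print("absolute:", abs_tp)
print("relative:", rel_tp)
```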

11.
Speech and song are universal forms of vocalization that may share aspects of emotional expression. Research has focused on parallels in acoustic features, overlooking facial cues to emotion. In three experiments, we compared moving facial expressions in speech and song. In Experiment 1, vocalists spoke and sang statements, each with five emotions. Vocalists exhibited emotion-dependent movements of the eyebrows and lip corners that transcended speech–song differences. Vocalists' jaw movements were coupled to their acoustic intensity, exhibiting differences across emotion and speech–song. Vocalists' emotional movements extended beyond the vocal sound to include large sustained expressions, suggesting a communicative function. In Experiment 2, viewers judged silent videos of vocalists' facial expressions prior to, during, and following vocalization. Emotional intentions were identified accurately for movements during and after vocalization, suggesting that these movements support the acoustic message. Experiment 3 compared emotion identification in voice-only, face-only, and face-and-voice recordings. Emotions in voice-only singing were identified poorly, yet were identified accurately in all other conditions, confirming that facial expressions conveyed emotion more accurately than the voice in song but were equivalent to the voice in speech. Collectively, these findings highlight broad commonalities in the facial cues to emotion in speech and song, yet point to differences in perception and in acoustic-motor production.

12.
Pitch perception is fundamental to melody in music and prosody in speech. Unlike many animals, the vast majority of human adults store melodic information primarily in terms of relative, not absolute, pitch, and readily recognize a melody whether rendered in a high or a low pitch range. We show that at 6 months infants are also primarily relative pitch processors. Infants familiarized with a melody for 7 days preferred, on the eighth day, to listen to a novel melody rather than the familiarized one, regardless of whether the melodies at test were presented at the same pitch as during familiarization or transposed up or down by a perfect fifth (7/12 of an octave) or a tritone (1/2 octave). On the other hand, infants showed no preference for a transposed over an original-pitch version of the familiarized melody, indicating that either they did not remember the absolute pitch, or it was not as salient to them as the relative pitch.
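For reference, transposition changes absolute pitch while preserving relative pitch: shifting a melody up a perfect fifth multiplies every frequency by 2^(7/12) ≈ 1.498, and a tritone by 2^(6/12) ≈ 1.414, leaving all successive frequency ratios (the intervals) intact. A minimal sketch of this arithmetic, with a hypothetical three-note melody, follows.

```python
import numpy as np

def transpose(freqs_hz, semitones):
    """Shift every frequency by the same number of equal-tempered semitones."""
    return np.asarray(freqs_hz, dtype=float) * 2.0 ** (semitones / 12.0)

# Hypothetical melody fragment (Hz): C4, E4, G4
melody = np.array([261.63, 329.63, 392.00])

up_fifth = transpose(melody, 7)    # perfect fifth: ratio 2**(7/12) ≈ 1.498
up_tritone = transpose(melody, 6)  # tritone: ratio 2**(6/12) ≈ 1.414

# Relative pitch (the ratios between successive notes) is unchanged by transposition
print(np.round(melody[1:] / melody[:-1], 3))
print(np.round(up_fifth[1:] / up_fifth[:-1], 3))
```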

13.
Five‐month‐old infants selectively attend to novel people who sing melodies originally learned from a parent, but not melodies learned from a musical toy or from an unfamiliar singing adult, suggesting that music conveys social information to infant listeners. Here, we test this interpretation further in older infants with a more direct measure of social preferences. We randomly assigned 64 11‐month‐old infants to 1–2 weeks’ exposure to one of two novel play songs that a parent either sang or produced by activating a recording inside a toy. Infants then viewed videos of two new people, each singing one song. When the people, now silent, each presented the infant with an object, infants in both conditions preferentially chose the object endorsed by the singer of the familiar song. Nevertheless, infants’ visual attention to that object was predicted by the degree of song exposure only for infants who learned from the singing of a parent. Eleven‐month‐olds thus garner social information from songs, whether learned from singing people or from social play with musical toys, but parental singing has distinctive effects on infants’ responses to new singers. Both findings support the hypothesis that infants endow music with social meaning. These findings raise questions concerning the types of music and behavioral contexts that elicit infants’ social responses to those who share music with them, and they support suggestions concerning the psychological functions of music both in contemporary environments and in the environments in which humans evolved.

14.
The Puzzle of Absolute Pitch
Absolute pitch—the ability to name or produce a note of particular pitch in the absence of a reference note—is generally considered to be extremely rare. However, it has been found that native speakers of two different tone languages—Mandarin and Vietnamese—display a remarkably precise form of absolute pitch in enunciating words. Given these findings, it is proposed that absolute pitch may have evolved as a feature of speech, analogous to other features such as vowel quality. It is also conjectured that tone-language speakers generally acquire this feature during the 1st year of life, in the critical period when infants acquire other features of their native language. For speakers of nontone languages, the acquisition of absolute pitch by rare individuals may be associated with a critical period of unusually long duration, so that it extends to the age at which the child can begin taking music lessons. According to this line of reasoning, the potential for acquiring absolute pitch is universal at birth, and can be realized by giving the infant the opportunity to associate pitches with verbal labels during the 1st year or so of life.

15.
Individuals differ markedly with respect to how well they can imitate pitch through singing and in their ability to perceive pitch differences. We explored whether the use of pitch in one’s native language can account for some of the differences in these abilities. Results from two studies suggest that individuals whose native language is a tone language, in which pitch contributes to word meaning, are better able to imitate (through singing) and perceptually discriminate musical pitch. These findings support the view that language acquisition fine-tunes the processing of critical auditory dimensions in the speech signal and that this fine-tuning can be carried over into nonlinguistic domains.

16.
L. J. Trainor (1996) reported preferences for infant-directed versus infant-absent singing in English in 4- to 7-month-old hearing infants of English-speaking hearing parents. In this experiment, the author tested preferences for infant-directed singing versus adult-directed singing in 15 two-day-old hearing infants of deaf parents, for a Japanese and an English play song. Using a modified visual-fixation-based auditory-preference procedure, the author found that infants looked longer at a visual stimulus when looking produced infant-directed singing as opposed to adult-directed singing. These results suggest that infants prefer infant-directed singing over adult-directed singing and that the preference is present from birth and is not dependent on any specific prenatal or postnatal experience.

17.
Full-term newborns vocalize with short response latencies during the neonatal period, but the contingent response time of preterm infants is still unknown. Preterm infants have also been observed to vocalize more following exposure to parental speech, and mothers and infants in preterm dyads co-modulate their vocalizations. Purpose: To examine temporal features of maternal and infant vocalizations in speaking and singing conditions in preterm dyads. Methods: In a NICU, mothers (N = 36) were invited to speak and to sing to their preterm infants during Kangaroo Care. Microanalysis of temporal units was performed with ELAN software. Results and conclusions: Preterm infants vocalized less often while their mothers spoke and sang than during baseline, and their vocalizations tended to alternate more with maternal speech and to overlap more with maternal singing. Preterm infants also took longer to respond to maternal speaking than to maternal singing.
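The alternating/overlapping distinction reduces to whether an infant vocalization's time interval intersects a maternal one. As a hedged sketch only (hypothetical onset/offset data, not the authors' ELAN export or coding scheme), overlap can be decided directly from annotated interval boundaries:

```python
def overlaps(a, b):
    """True if two (onset, offset) intervals in seconds intersect."""
    return a[0] < b[1] and b[0] < a[1]

def classify_infant_vocalizations(infant, mother):
    """Label each infant vocalization as 'overlapping' or 'alternating'
    relative to a list of maternal vocalization intervals."""
    return ["overlapping" if any(overlaps(i, m) for m in mother) else "alternating"
            for i in infant]

# Hypothetical annotation intervals (seconds)
mother_vocalizations = [(0.0, 2.5), (4.0, 6.0)]
infant_vocalizations = [(1.0, 1.8), (2.8, 3.5), (5.5, 6.2)]
print(classify_infant_vocalizations(infant_vocalizations, mother_vocalizations))
# -> ['overlapping', 'alternating', 'overlapping']
```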

18.
Mothers were recorded singing a song of their choice in both a lullaby style and a play-song style to their 6-month-olds. Adult raters identified the play-song-style and lullaby-style versions with 100% accuracy. Play-song-style renditions were rated as being more brilliant, clipped, and rhythmic and as having more smiling and more prominent consonants. Lullaby-style renditions were characterized as being more airy, smooth, and soothing. Adults observed videotapes (without sound) of 6-month-olds listening to alternating lullaby-style and play-song-style trials and performed at above chance levels when determining which music the infants were hearing. Coding analyses revealed that infants focused their attention more toward themselves during lullaby-style trials and more toward the external world during play-song-style trials. These results suggest that singing may be used to regulate infants' states and to communicate emotional information.

19.
To learn to produce speech, infants must effectively monitor and assess their own speech output. Yet very little is known about how infants perceive speech produced by an infant, which has higher voice pitch and formant frequencies compared to adult or child speech. Here, we tested whether pre‐babbling infants (at 4–6 months) prefer listening to vowel sounds with infant vocal properties over vowel sounds with adult vocal properties. A listening preference favoring infant vowels may derive from their higher voice pitch, which has been shown to attract infant attention in infant‐directed speech (IDS). In addition, infants' nascent articulatory abilities may induce a bias favoring infant speech given that 4‐ to 6‐month‐olds are beginning to produce vowel sounds. We created infant and adult /i/ (‘ee’) vowels using a production‐based synthesizer that simulates the act of speaking in talkers at different ages and then tested infants across four experiments using a sequential preferential listening task. The findings provide the first evidence that infants preferentially attend to vowel sounds with infant voice pitch and/or formants over vowel sounds with no infant‐like vocal properties, supporting the view that infants' production abilities influence how they process infant speech. The findings with respect to voice pitch also reveal parallels between IDS and infant speech, raising new questions about the role of this speech register in infant development. Research exploring the underpinnings and impact of this perceptual bias can expand our understanding of infant language development.
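The study's stimuli came from a production-based synthesizer that simulates talkers of different ages; that tool is not reproduced here. Purely as a hedged illustration of the acoustic point that infant vowels have both higher voice pitch (f0) and higher formant frequencies, the sketch below uses a generic source-filter approach with hypothetical parameter values.

```python
import numpy as np
from scipy.signal import lfilter

def synth_vowel(f0_hz, formants_hz, bandwidths_hz=(80, 100, 120),
                fs=16000, dur_s=0.5):
    """Very simple source-filter vowel sketch: impulse-train glottal source at f0,
    passed through cascaded two-pole resonators at the given formant frequencies."""
    n = int(fs * dur_s)
    source = np.zeros(n)
    period = int(round(fs / f0_hz))
    source[::period] = 1.0                      # crude glottal pulse train
    y = source
    for f, bw in zip(formants_hz, bandwidths_hz):
        r = np.exp(-np.pi * bw / fs)            # pole radius from bandwidth
        theta = 2 * np.pi * f / fs              # pole angle from formant frequency
        a = [1.0, -2 * r * np.cos(theta), r * r]
        y = lfilter([1.0], a, y)                # one resonator per formant
    return 0.9 * y / np.max(np.abs(y))

# Hypothetical parameter sets: infant vowels have both higher f0 and higher formants
adult_i = synth_vowel(f0_hz=120, formants_hz=(300, 2300, 3000))
infant_i = synth_vowel(f0_hz=400, formants_hz=(500, 3300, 4500))
```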

20.
We show that infants' long-term memory representations for melodies are not reduced to structural features such as relative pitches and durations, but also contain surface, performance-specific information about tempo and timbre. Using a head-turn preference procedure, we found that after a one-week exposure to an old English folk song, infants preferred to listen to a novel folk song, indicating that they remembered the familiarized melody. However, if the tempo (25% faster or slower) or instrument timbre (harp vs. piano) of the familiarized melody was changed at test, infants showed no preference, indicating that they remembered the specific tempo and timbre of the melodies. The results are consistent with an exemplar-based model of memory in infancy rather than one in which structural features are extracted and performance features are forgotten.
