Child-directed language can support language learning, but how? We addressed two questions: (1) how caregivers prosodically modulated their speech as a function of word familiarity (known or unknown to the child) and accessibility of the referent (visually present or absent from the immediate environment); and (2) whether such modulations affected children's learning of unknown words and their vocabulary development. We used data from 38 English-speaking caregivers (from the ECOLANG corpus) talking about toys, both known and unknown to their 3- to 4-year-old children, both when the toys were present and when they were absent. We analyzed prosodic dimensions (speaking rate, pitch, and intensity) of caregivers' productions of 6529 toy labels. We found that unknown labels were spoken with a significantly slower speaking rate and wider pitch and intensity ranges than known labels, especially on first mentions, suggesting that caregivers adjust their prosody based on children's lexical knowledge. Moreover, caregivers used a slower speaking rate and a larger intensity range to mark the first mentions of toys that were physically absent. After the first mentions, they talked about the referents more loudly and with a higher mean pitch when the toys were present than when they were absent. Crucially, caregivers' mean pitch on unknown words, and the degree of mean pitch modulation for unknown relative to known words (pitch ratio), predicted children's immediate word learning and their vocabulary size 1 year later. In conclusion, caregivers modify their prosody when the learning situation is more demanding for children, and these modulations assist children in word learning.
Research Highlights
In naturalistic interactions, caregivers use a slower speaking rate and wider pitch and intensity ranges when introducing new labels to 3- to 4-year-old children, especially on first mentions.
Compared to when toys are present, caregivers speak more slowly and with a larger intensity range to mark the first mentions of toys that are physically absent.
Caregivers' mean pitch marking word familiarity predicts children's immediate word learning and future vocabulary size.
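To make the abstract's prosodic measures concrete, the sketch below shows one way the three dimensions and the pitch ratio could be computed for a single recorded word token, using Praat's algorithms via the parselmouth Python library. The file names, syllable counts, and helper function are illustrative assumptions, not the ECOLANG analysis pipeline.

```python
# Sketch: prosodic measures for one word token, via Praat algorithms
# (parselmouth). Illustrative only; not the authors' analysis pipeline.
import parselmouth

def prosody_measures(wav_path, n_syllables):
    snd = parselmouth.Sound(wav_path)
    duration = snd.get_total_duration()              # seconds
    speaking_rate = n_syllables / duration           # syllables per second

    pitch = snd.to_pitch()
    f0 = pitch.selected_array['frequency']           # Hz per analysis frame
    f0 = f0[f0 > 0]                                  # drop unvoiced frames (coded as 0)
    mean_pitch = f0.mean()
    pitch_range = f0.max() - f0.min()                # Hz

    intensity = snd.to_intensity()
    db = intensity.values.flatten()                  # dB per analysis frame
    intensity_range = db.max() - db.min()

    return speaking_rate, mean_pitch, pitch_range, intensity_range

# "Pitch ratio": mean pitch on an unknown label relative to a known one.
_, f0_unknown, _, _ = prosody_measures("label_unknown.wav", 2)  # hypothetical files
_, f0_known, _, _ = prosody_measures("label_known.wav", 1)
pitch_ratio = f0_unknown / f0_known
```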
Newborns are able to extract and learn repetition-based regularities from the speech input; that is, they show greater brain activation in the bilateral temporal and left inferior frontal regions to trisyllabic pseudowords of the form AAB (e.g., "babamu") than to random ABC sequences (e.g., "bamuge"). Whether this ability is specific to speech or also applies to other auditory stimuli remains unexplored. To investigate this, we tested whether newborns are sensitive to regularities in musical tones. Neonates listened to AAB and ABC tone sequences while their brain activity was recorded using functional near-infrared spectroscopy (fNIRS). The paradigm, the frequency of occurrence, and the distribution of the tones were identical to those of the syllables used in previous studies with speech. We observed a greater inverted (negative) hemodynamic response to AAB than to ABC sequences in bilateral temporal and fronto-parietal areas. This inverted response was caused by a decrease in response amplitude over the course of the experiment, attributable to habituation, in the left fronto-temporal region for the ABC condition and in the right fronto-temporal region for both conditions. These findings show that newborns' ability to discriminate AAB from ABC sequences is not specific to speech. However, the neural responses to musical tones and to spoken language are markedly different. Tones gave rise to habituation, whereas speech has been shown to trigger increasing responses over the course of a study. Relatedly, the repetition regularity gave rise to an inverted hemodynamic response when carried by tones, whereas it was canonical for speech. Thus, newborns' ability to detect repetition is not speech-specific, but it engages distinct brain mechanisms for speech and music.
Research Highlights
Newborns' ability to detect repetition-based regularities is not specific to speech, but extends to other auditory stimuli such as musical tones.
The brain mechanisms underlying speech and music processing are markedly different.
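As a concrete illustration of the stimulus design, the sketch below builds AAB and ABC tone triplets analogous to the trisyllabic speech items ("babamu" vs. "bamuge"). The specific tone frequencies, durations, and sampling rate are assumptions for illustration, not the study's stimulus parameters.

```python
# Sketch: AAB vs. ABC tone triplets analogous to the trisyllabic speech items.
# Frequencies and durations are illustrative, not the study's parameters.
import numpy as np

SR = 44100          # sampling rate (Hz)
TONE_DUR = 0.27     # per-tone duration (s), roughly syllable-length

def tone(freq_hz):
    t = np.arange(int(SR * TONE_DUR)) / SR
    return np.sin(2 * np.pi * freq_hz * t)

tones = {"A": tone(440.0), "B": tone(554.4), "C": tone(659.3)}  # A4, C#5, E5

aab = np.concatenate([tones["A"], tones["A"], tones["B"]])   # repetition sequence
abc = np.concatenate([tones["A"], tones["B"], tones["C"]])   # random control
```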
Fetal hearing experiences shape the linguistic and musical preferences of neonates. From the very first moments after birth, newborns prefer their native language, recognize their mother's voice, and show greater responsiveness to lullabies presented during pregnancy. Yet the neural underpinnings of this experience-induced plasticity have remained elusive. Here we recorded the frequency-following response (FFR), an auditory evoked potential elicited by periodic complex sounds, to show that prenatal music exposure is associated with enhanced neural encoding of the periodicity of speech stimuli, which relates to the perceptual experience of pitch. FFRs were recorded in a sample of 60 healthy neonates born at term and aged 12–72 hours. The sample was divided into two groups according to prenatal musical exposure (29 exposed to music daily; 31 not exposed daily). Prenatal exposure was assessed retrospectively by a questionnaire in which mothers reported how often they sang or listened to music through loudspeakers during the last trimester of pregnancy. The FFR was recorded to either a /da/ or an /oa/ speech-syllable stimulus. Analyses were centered on stimulus sections of identical duration (113 ms) and fundamental frequency (F0 = 113 Hz). Neural encoding of stimulus periodicity was quantified as the FFR spectral amplitude at the stimulus F0. The data revealed that newborns exposed to music daily exhibited larger spectral amplitudes at F0 than newborns not exposed daily, regardless of the eliciting stimulus. Our results suggest that prenatal music exposure facilitates tuning to the fundamental frequency of human speech, which may support early language processing and acquisition.
Research Highlights
Frequency-following responses to speech were collected from a sample of neonates exposed to music daily during pregnancy and compared to those of neonates without daily exposure.
Neonates who experienced daily prenatal music exposure exhibit enhanced frequency-following responses to the periodicity of speech sounds.
Prenatal music exposure is associated with fine-tuned encoding of the fundamental frequency of human speech, which may facilitate early language processing and acquisition.
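The dependent measure here, spectral amplitude at the stimulus F0, is a simple FFT quantity. Below is a minimal sketch, assuming a placeholder averaged FFR segment and an assumed sampling rate rather than real recordings.

```python
# Sketch: FFR periodicity encoding as spectral amplitude at the stimulus F0
# (113 Hz), over a 113 ms window. The signal here is a random stand-in for
# a real averaged FFR; the sampling rate is an assumption.
import numpy as np

FS = 20000                                   # assumed EEG sampling rate (Hz)
ffr = np.random.randn(int(FS * 0.113))       # stand-in for the averaged FFR segment

spectrum = np.abs(np.fft.rfft(ffr)) / len(ffr)
freqs = np.fft.rfftfreq(len(ffr), d=1 / FS)
amp_at_f0 = spectrum[np.argmin(np.abs(freqs - 113.0))]  # amplitude at ~113 Hz
```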
Prior studies have observed selective neural responses in the adult human auditory cortex to music and speech that cannot be explained by the differing lower-level acoustic properties of these stimuli. Does infant cortex exhibit similarly selective responses to music and speech shortly after birth? To answer this question, we attempted to collect functional magnetic resonance imaging (fMRI) data from 45 sleeping infants (2.0 to 11.9 weeks old) while they listened to monophonic instrumental lullabies and infant-directed speech produced by a mother. To match acoustic variation between music and speech sounds, we (1) recorded music from instruments that had a similar spectral range to female infant-directed speech, (2) used a novel excitation-matching algorithm to match the cochleagrams of the music and speech stimuli, and (3) synthesized "model-matched" stimuli that were matched in spectrotemporal modulation statistics to (yet perceptually distinct from) the music or speech. Of the 36 infants from whom we collected usable data, 19 showed significant activations to sounds overall compared to scanner noise. In these infants, we observed a set of voxels in non-primary auditory cortex (NPAC), but not in Heschl's Gyrus, that responded significantly more to music than to each of the other three stimulus types (though not significantly more strongly than to the background scanner noise). In contrast, our planned analyses did not reveal voxels in NPAC that responded more to speech than to model-matched speech, although other unplanned analyses did. These preliminary findings suggest that music selectivity arises within the first month of life. A video abstract of this article can be viewed at https://youtu.be/c8IGFvzxudk.
Research Highlights
Responses to music, speech, and control sounds matched for the spectrotemporal modulation statistics of each sound were measured in sleeping 2- to 11-week-old infants using fMRI.
Auditory cortex was significantly activated by these stimuli in 19 out of 36 sleeping infants.
Selective responses to music compared to the three other stimulus classes were found in non-primary auditory cortex but not in nearby Heschl's Gyrus.
Selective responses to speech were not observed in planned analyses but were observed in unplanned, exploratory analyses.
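The paper's excitation-matching and modulation-statistics methods are specialized; as a loose and much simpler illustration of the general idea of an acoustically matched control, the sketch below phase-scrambles a signal, preserving its long-term magnitude spectrum while destroying its temporal structure. This is explicitly not the authors' algorithm, only a conceptual stand-in.

```python
# Sketch: a spectrally matched control via phase scrambling. Preserves the
# magnitude spectrum, scrambles temporal structure. NOT the paper's
# model-matching algorithm; a simpler stand-in for illustration.
import numpy as np

def phase_scramble(signal, seed=0):
    rng = np.random.default_rng(seed)
    spec = np.fft.rfft(signal)
    phases = np.exp(1j * rng.uniform(0, 2 * np.pi, len(spec)))
    phases[0] = 1.0                        # keep the DC component real
    return np.fft.irfft(np.abs(spec) * phases, n=len(signal))
```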
Prosody is the fundamental organizing principle of spoken language, carrying lexical, morphosyntactic, and pragmatic information. It therefore provides highly relevant input for language development. Are infants sensitive to this important aspect of spoken language early on? In this study, we asked whether infants are able to discriminate well-formed utterance-level prosodic contours from ill-formed, backward prosodic contours at birth. The deviant prosodic contour was obtained by time-reversing the original one and superimposing it on the otherwise intact segmental information. The resulting backward prosodic contour was thus unfamiliar to the infants and ill-formed in French. We used near-infrared spectroscopy (NIRS) in 1- to 3-day-old French newborns (n = 25) to measure their brain responses to well-formed contours as standards and their backward-prosody counterparts as deviants in the frontal, temporal, and parietal areas bilaterally. A cluster-based permutation test revealed greater responses to the Deviant than to the Standard condition in right temporal areas. These results suggest that newborns are already capable of detecting utterance-level prosodic violations at birth, a key ability for breaking into the native language, and that this ability is supported by brain areas similar to those in adults.
Research Highlights
At birth, infants have sophisticated speech perception abilities.
Prosody may be particularly important for early language development.
We show that newborns are already capable of discriminating utterance-level prosodic contours.
This discrimination can be localized to the right hemisphere of the neonate brain.
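To illustrate the deviant-stimulus construction, the sketch below extracts an utterance's F0 contour with the parselmouth library and time-reverses it. Re-imposing the reversed contour on the intact segmental material requires pitch resynthesis (e.g., PSOLA in Praat), which is omitted here; the file name is a placeholder.

```python
# Sketch: extract and time-reverse an utterance-level F0 contour.
# Resynthesis onto intact segments (e.g., PSOLA) is not shown.
import parselmouth

snd = parselmouth.Sound("utterance.wav")   # placeholder file
pitch = snd.to_pitch()
f0 = pitch.selected_array['frequency']     # Hz per frame; 0 = unvoiced
backward_f0 = f0[::-1]                     # ill-formed, time-reversed contour
```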
The temporal organization of sounds used in social contexts can provide information about signal function and evoke varying responses in listeners (receivers). For example, music is a universal and learned human behavior that is characterized by different rhythms and tempos that can evoke disparate responses in listeners. Similarly, birdsong is a social behavior in songbirds that is learned during critical periods in development and used to evoke physiological and behavioral responses in receivers. Recent investigations have begun to reveal the breadth of universal patterns in birdsong and their similarities to common patterns in speech and music, but relatively little is known about the degree to which biological predispositions and developmental experiences interact to shape the temporal patterning of birdsong. Here, we investigated how biological predispositions modulate the acquisition and production of an important temporal feature of birdsong, namely the duration of silent pauses (“gaps”) between vocal elements (“syllables”). Through analyses of semi-naturally raised and experimentally tutored zebra finches, we observed that juvenile zebra finches imitate the durations of the silent gaps in their tutor's song. Further, when juveniles were experimentally tutored with stimuli containing a wide range of gap durations, we observed biases in the prevalence and stereotypy of gap durations. Together, these studies demonstrate how biological predispositions and developmental experiences differently affect distinct temporal features of birdsong and highlight similarities in developmental plasticity across birdsong, speech, and music.
Research Highlights
The temporal organization of learned acoustic patterns can be similar across human cultures and across species, suggesting biological predispositions in acquisition.
We studied how biological predispositions and developmental experiences affect an important temporal feature of birdsong, namely the duration of silent intervals between vocal elements (“gaps”).
Semi-naturally and experimentally tutored zebra finches imitated the durations of gaps in their tutor's song and displayed some biases in the learning and production of gap durations and in gap variability.
These findings in the zebra finch provide parallels with the acquisition of temporal features of speech and music in humans.
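As a rough illustration of how gap durations might be measured, the sketch below thresholds the amplitude envelope of a song recording and reads off runs of silence. The threshold, smoothing window, and file name are assumptions for illustration, not the study's settings.

```python
# Sketch: silent-gap durations between song syllables via envelope
# thresholding. Assumes a mono recording; threshold and smoothing are
# illustrative, not the study's parameters.
import numpy as np
from scipy.io import wavfile
from scipy.ndimage import uniform_filter1d

fs, song = wavfile.read("zebra_finch_song.wav")          # placeholder file
envelope = uniform_filter1d(np.abs(song.astype(float)), size=int(0.002 * fs))
silent = envelope < 0.05 * envelope.max()                # below 5% of peak

edges = np.diff(silent.astype(int))
onsets = np.where(edges == 1)[0]                         # silence begins
offsets = np.where(edges == -1)[0]                       # silence ends
if len(onsets) and len(offsets):
    offsets = offsets[offsets > onsets[0]]               # pair onsets with later offsets
    n = min(len(onsets), len(offsets))
    gap_durations_ms = (offsets[:n] - onsets[:n]) / fs * 1000
```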
Previous accounts of International Relations research have focused extensively on deontological ethics in analysing the Responsibility to Protect (R2P). At the same time, discourse ethics, along with Jürgen Habermas' theory of the ideal speech situation, has been overlooked. This article argues that the R2P process has gradually moved toward the Habermasian ideal speech situation. The Habermasian approach also provides a useful theoretical framework for understanding the new, more inclusive and critical forums of communication and initiatives set in motion by emerging non-Western norm-entrepreneurs in the R2P process, notably the Responsibility while Protecting (RwP) initiative launched by Brazil in 2011. From the perspective of discourse ethics, RwP can be understood as a cosmopolitan harm principle designed to manage the potentially harmful side effects of the application of R2P. The article further argues that, despite the current paradigm shift in norm-entrepreneurship on R2P from deontological ethics to discourse ethics, the process has thus far only partially fulfilled the criteria of an ideal speech situation.
Background: One of the most influential factors affecting the quality of life of transgender individuals is whether others perceive them to "pass" in their felt gender. Voice and communication style are two important identifying dimensions of gender, and many transgender individuals wish to acquire a voice that matches their gender. Evidence shows that few transgender individuals access voice therapy, and that this is caused by concerns about stigmatization or negative past experiences within healthcare services. In order to address the negative experiences faced by transgender populations, we need a better understanding of healthcare services' current levels of knowledge and LGBT awareness. Some studies of Speech–Language Therapists' (SLTs') experience and confidence in working with transgender individuals have recently been undertaken in the United States (US). However, little research has been carried out in Asia.
Aims: To investigate Taiwanese SLTs’ knowledge, attitudes and experiences of providing transgender individuals with relevant therapy.
Method: A cross-sectional self-administered web-based survey hosted on the Qualtrics platform was delivered to 140 Taiwanese SLTs.
Results: Taiwanese SLTs were (i) more familiar with the terminology used to address "lesbian, gay, and bisexual groups" than with "transgender" terminology; (ii) generally positive in their attitudes toward transgender individuals; and (iii) comfortable providing clinical services to transgender clients. However, the majority of participants did not feel sufficiently skilled in working with transgender individuals, even though most believed that providing them with voice and communication services fell within the SLT scope of practice.
Conclusion: It is important for clinicians both to be skilled in transgender voice and communication therapy and to be culturally competent when providing services to transgender individuals. This study recommends that cultural competence relating to gender and sexual minority groups be addressed in SLTs' university education as well as in their continuing education programs.