Similar Articles
20 similar articles retrieved (search time: 31 ms)
1.
Cognitive Development, 1996, 11(2): 181–196
The ability to remember sequential order information is an important component in the learning and mastery of many complex cognitive skills. Notably, it is critical for learning language. This study investigated whether infants are capable of remembering the order of words in an English sentence and, especially, whether the structure afforded by natural sentential prosody enhances their ability to do so. It compares 2-month-olds' abilities to detect changes in word order, after a 2-min delay, for sequences spoken as a well-formed sentence versus as two unrelated, but well-formed, sentential fragments. The results indicate that infants exposed to the single sentences were able to detect changes in word order. By comparison, infants exposed to the sentential fragments showed no tendency to detect the same word order changes. Thus, even at two months of age, infants are able to remember the order of spoken words when they are embedded within the coherent prosodic structure of a single well-formed sentence.

2.
Research on Prosodic Features   (Cited: 1; self-citations: 0; other citations: 1)
This paper reviews a series of studies on the prosodic features of Mandarin Chinese from the perspectives of perception, cognition, and corpus analysis. (1) Perception of prosodic features: using experimental psychology methods and perceptually annotated corpora, we studied Mandarin intonation, pitch declination and downstep, and the perceptually distinguishable prosodic levels in sentences and discourse together with their acoustic correlates. The results support the dual-line model of Mandarin intonation and the existence of sentence-level pitch declination, and show that the perceptually distinguishable prosodic boundaries in discourse are the clause, the sentence, and the paragraph, each with its own perceptually relevant acoustic cues. (2) The relationship between prosodic features and other linguistic structures: based on annotated corpora, conventional statistical methods were used to examine the distribution of normal sentence stress and the relationship between discourse information structure and stress, and decision-tree methods were used to derive rules for locating prosodic phrase boundaries and focus from textual information. (3) The role of prosodic features in discourse comprehension: experimental psychology methods and EEG measures were used to investigate how prosody affects discourse information integration and reference resolution, revealing the underlying cognitive and neural mechanisms. The practical and theoretical implications of these findings for speech engineering, phonetics, and psycholinguistics are discussed.
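Point (2) mentions using decision trees to predict prosodic phrase boundaries from textual information. The Python sketch below, using scikit-learn, illustrates that general approach only; the feature set, data, and labels are invented here for illustration, as the abstract does not specify them.

```python
# A hedged sketch of decision-tree prediction of prosodic phrase
# boundaries from text-derived features (all features hypothetical).
from sklearn.tree import DecisionTreeClassifier

# Each row: [word length, part-of-speech id, distance to punctuation],
# invented for illustration; a real system would use richer features.
X = [[2, 0, 5], [1, 1, 4], [3, 2, 0], [2, 0, 3], [1, 1, 1], [4, 2, 0]]
y = [0, 0, 1, 0, 0, 1]   # 1 = prosodic phrase boundary after this word

clf = DecisionTreeClassifier(max_depth=3, random_state=0)
clf.fit(X, y)
print(clf.predict([[3, 2, 0]]))  # boundary predicted near punctuation
```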

3.
Newborns are able to extract and learn repetition-based regularities from the speech input, that is, they show greater brain activation in the bilateral temporal and left inferior frontal regions to trisyllabic pseudowords of the form AAB (e.g., “babamu”) than to random ABC sequences (e.g., “bamuge”). Whether this ability is specific to speech or also applies to other auditory stimuli remains unexplored. To investigate this, we tested whether newborns are sensitive to regularities in musical tones. Neonates listened to AAB and ABC tones sequences, while their brain activity was recorded using functional Near-Infrared Spectroscopy (fNIRS). The paradigm, the frequency of occurrence and the distribution of the tones were identical to those of the syllables used in previous studies with speech. We observed a greater inverted (negative) hemodynamic response to AAB than to ABC sequences in the bilateral temporal and fronto-parietal areas. This inverted response was caused by a decrease in response amplitude, attributed to habituation, over the course of the experiment in the left fronto-temporal region for the ABC condition and in the right fronto-temporal region for both conditions. These findings show that newborns’ ability to discriminate AAB from ABC sequences is not specific to speech. However, the neural response to musical tones and spoken language is markedly different. Tones gave rise to habituation, whereas speech was shown to trigger increasing responses over the time course of the study. Relatedly, the repetition regularity gave rise to an inverted hemodynamic response when carried by tones, while it was canonical for speech. Thus, newborns’ ability to detect repetition is not speech-specific, but it engages distinct brain mechanisms for speech and music.

Research Highlights

  • Newborns’ ability to detect repetition-based regularities is not specific to speech, but also extends to other auditory stimuli.
  • The brain mechanisms underlying speech and music processing are markedly different.

4.
In this work we ask whether at birth, the human brain responds uniquely to speech, or if similar activation also occurs to a non‐speech surrogate ‘language’. We compare neural activation in newborn infants to the language heard in utero (English), to an unfamiliar language (Spanish), and to a whistled surrogate language (Silbo Gomero) that, while used by humans to communicate, is not speech. Anterior temporal areas of the neonate cortex are activated in response to both familiar and unfamiliar spoken language, but these classic language areas are not activated to the whistled surrogate form. These results suggest that at the time human infants emerge from the womb, the neural preparation for language is specialized to speech.

5.
This study investigates vocal imitation of prosodic contour in ongoing spontaneous interaction with 10- to 13-week-old infants. Audio recordings from naturalistic interactions between 20 mothers and infants were analyzed using a vocalization coding system that extracted the pitch and duration of individual vocalizations. Using these data, the authors categorized a sample of 1,359 vocalizations on the basis of 7 predetermined contours. Pairs of identical successive vocalizations were considered to be imitations if they involved both partners or repetitions if they were produced by the same partner. Results show that not only do mothers and infants imitate and repeat prosodic contour types in the course of vocal interaction but they do so selectively. Indeed, different contours are imitated and repeated by each partner. These findings suggest that imitation and repetition of prosodic contours have specific functions for communication and vocal development in the 3rd month of life.

6.
Before infants can learn words, they must identify those words in continuous speech. Yet, the speech signal lacks obvious boundary markers, which poses a potential problem for language acquisition (Swingley, Philos Trans R Soc Lond B Biol Sci, 364(1536), 3617–3632, 2009). By the middle of the first year, infants seem to have solved this problem (Bergelson & Swingley, Proc Natl Acad Sci, 109(9), 3253–3258, 2012; Jusczyk & Aslin, Cogn Psychol, 29, 1–23, 1995), but it is unknown if segmentation abilities are present from birth, or if they only emerge after sufficient language exposure and/or brain maturation. Here, in two independent experiments, we looked at two cues known to be crucial for the segmentation of human speech: the computation of statistical co‐occurrences between syllables and the use of the language's prosody. After a brief familiarization of about 3 min with continuous speech, using functional near‐infrared spectroscopy, neonates showed differential brain responses on a recognition test to words that violated either the statistical (Experiment 1) or prosodic (Experiment 2) boundaries of the familiarization, compared to words that conformed to those boundaries. Importantly, word recognition in Experiment 2 occurred even in the absence of prosodic information at test, meaning that newborns encoded the phonological content independently of its prosody. These data indicate that humans are born with operational language processing and memory capacities and can use at least two types of cues to segment otherwise continuous speech, a key first step in language acquisition.
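The first cue, statistical co-occurrence between syllables, is standardly formalized as a transitional probability, TP(A→B) = count(AB) / count(A): syllable pairs inside words have high TP, while pairs spanning word boundaries have low TP. Below is a minimal Python sketch of that computation; the syllable stream is invented for illustration and is not the study's actual material.

```python
# Transitional probabilities over an unsegmented syllable stream.
from collections import Counter

def transitional_probabilities(syllables):
    """Return TP(first -> second) for every adjacent syllable pair."""
    pair_counts = Counter(zip(syllables, syllables[1:]))
    first_counts = Counter(syllables[:-1])
    return {pair: n / first_counts[pair[0]] for pair, n in pair_counts.items()}

# Hypothetical familiarization stream built from two nonsense "words",
# tu-pi-ro and go-la-bu, concatenated without pauses (invented example).
stream = ["tu", "pi", "ro", "go", "la", "bu", "tu", "pi", "ro"]
tps = transitional_probabilities(stream)
print(tps[("tu", "pi")])  # 1.0 -> within-word transition
print(tps[("ro", "go")])  # 0.5 -> candidate word boundary
```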

7.
Prosody or speech melody subserves linguistic (e.g., question intonation) and emotional functions in speech communication. Findings from lesion studies and imaging experiments suggest that, depending on function or acoustic stimulus structure, prosodic speech components are differentially processed in the right and left hemispheres. This direct current (DC) potential study investigated the linguistic processing of digitally manipulated pitch contours of sentences that carried an emotional or neutral intonation. Discrimination of linguistic prosody was better for neutral stimuli as compared to happily as well as fearfully spoken sentences. Brain activation was increased during the processing of happy sentences as compared to neutral utterances. Neither neutral nor emotional stimuli evoked lateralized processing in the left or right hemisphere, indicating bilateral mechanisms of linguistic processing for pitch direction. Acoustic stimulus analysis suggested that prosodic components related to emotional intonation, such as pitch variability, interfered with linguistic processing of pitch course direction.

8.
Familial risk for developmental dyslexia can compromise auditory and speech processing and subsequent language and literacy development. According to the phonological deficit theory, supporting phonological development during the sensitive infancy period could prevent or ameliorate future dyslexic symptoms. Music is an established method for supporting auditory and speech processing and even language and literacy, but no previous studies have investigated its benefits for infants at risk for developmental language and reading disorders. We pseudo-randomized N∼150 infants at risk for dyslexia to vocal or instrumental music listening interventions at 0–6 months, or to a no-intervention control group. Music listening was used as an easy-to-administer, cost-effective intervention in early infancy. Mismatch responses (MMRs) elicited by speech-sound changes were recorded with electroencephalography (EEG) before (at birth) and after (at 6 months) the intervention and at a 28-month follow-up. We expected the vocal intervention in particular to promote phonological development, evidenced by enhanced speech-sound MMRs and their fast maturation. We found enhanced positive MMR amplitudes in the vocal music listening intervention group after but not prior to the intervention. Other music activities reported by parents did not differ between the three groups, indicating that the group effects were attributable to the intervention. The results support the use of vocal music in early infancy to promote speech processing and subsequent language development in infants at developmental risk.

Research Highlights

  • Dyslexia-risk infants were pseudo-randomly assigned to a vocal or instrumental music listening intervention at home from birth to 6 months of age.
  • Neural mismatch responses (MMRs) to speech-sound changes were enhanced in the vocal music intervention group after but not prior to the intervention.
  • Even passive vocal music listening in early infancy can support phonological development, which is known to be deficient in infants at risk for dyslexia.
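As a toy illustration of the mismatch-response measure reported in study 8, the Python sketch below computes a deviant-minus-standard difference wave and averages it over a post-stimulus window. The epoch counts, sampling rate, and analysis window are all assumptions for illustration, not the study's actual pipeline.

```python
# Hedged illustration of MMR quantification from EEG epochs:
# deviant-minus-standard difference wave, averaged over a time window.
import numpy as np

rng = np.random.default_rng(0)
sfreq = 250                                # assumed sampling rate (Hz)
standard = rng.normal(size=(200, sfreq))   # 200 epochs x 1 s, standard sound
deviant = rng.normal(size=(50, sfreq))     # 50 epochs, speech-sound change

# The MMR is the difference between the averaged deviant and standard
# responses; a positive mean in a post-stimulus window indicates a
# positive MMR amplitude, as reported for the vocal-intervention group.
difference_wave = deviant.mean(axis=0) - standard.mean(axis=0)
window = slice(int(0.100 * sfreq), int(0.400 * sfreq))  # assumed 100-400 ms
mmr_amplitude = difference_wave[window].mean()
print(f"MMR amplitude: {mmr_amplitude:.3f} (arbitrary units)")
```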

9.
Human infants use prosodic cues present in speech to extract language regularities, and it has been suggested that this capacity is anchored in more general mechanisms that are shared across mammals. This study explores the extent to which rats can generalize prosodic cues that have been extracted from a training corpus to new sentences and how this discrimination process is affected by the normalization of the sentences when multiple speakers are introduced. Conditions 1 and 2 show rats' abilities to use prosodic cues present in speech, allowing them to discriminate between sentences not previously heard. But this discrimination is not possible when sentences are played backward. Conditions 3 and 4 show that language discrimination by rats is also taxed by the process of speaker normalization. These findings have remarkable parallels with data from human adults, human newborns, and cotton-top tamarins. Implications for speech perception by humans are discussed.

10.
Fourteen native speakers of German heard normal sentences, sentences lacking dynamic pitch variation (flattened speech), or sentences consisting exclusively of the intonation contour (degraded speech). Participants were to listen carefully to the sentences and to perform a rehearsal task. Passive listening to flattened speech compared to normal speech produced strong brain responses in right cortical areas, particularly in the posterior superior temporal gyrus (pSTG). Passive listening to degraded speech compared to either normal or flattened speech particularly involved fronto-opercular and subcortical (putamen, caudate nucleus) regions bilaterally. Additionally, the Rolandic operculum (premotor cortex) in the right hemisphere subserved the processing of intact sentence intonation. When participants explicitly rehearsed sentence intonation, we found several activation foci in the left inferior frontal gyrus (Broca's area), the left inferior precentral sulcus, and the left Rolandic fissure. The data allow several suggestions: First, both flattened and degraded speech evoked differential brain responses in the pSTG, particularly in the planum temporale (PT) bilaterally, indicating that this region mediates the integration of slowly and rapidly changing acoustic cues during comprehension of spoken language. Second, the bilateral circuit active while participants receive degraded speech reflects general effort allocation. Third, the differential finding for passive perception and explicit rehearsal of intonation contour suggests a right fronto-lateral network for processing and a left fronto-lateral network for producing prosodic information. Finally, it appears that brain areas which subserve speech (frontal operculum) and premotor functions (Rolandic operculum) jointly support the processing of intonation contour in spoken sentence comprehension.
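The flattened-speech manipulation described here (removing dynamic pitch variation) is commonly produced with Praat-style pitch resynthesis. The Python sketch below uses the praat-parselmouth library to replace a sentence's pitch contour with its median F0; the file name and parameters are assumptions, and this is a generic recipe rather than the authors' documented procedure.

```python
# A hedged sketch of pitch flattening via Praat manipulation/resynthesis.
import parselmouth
from parselmouth.praat import call

snd = parselmouth.Sound("sentence.wav")   # hypothetical input file

# Praat's Manipulation object exposes the pitch tier for editing.
manipulation = call(snd, "To Manipulation", 0.01, 75, 600)
pitch_tier = call(manipulation, "Extract pitch tier")

# Replace the natural contour with a single constant value (the
# utterance's median F0), removing all dynamic pitch variation.
median_f0 = call(snd.to_pitch(), "Get quantile", 0, 0, 0.5, "Hertz")
call(pitch_tier, "Remove points between", 0, snd.duration)
call(pitch_tier, "Add point", snd.duration / 2, median_f0)

# Write the flattened contour back and resynthesize the audio.
call([pitch_tier, manipulation], "Replace pitch tier")
flattened = call(manipulation, "Get resynthesis (overlap-add)")
flattened.save("sentence_flat.wav", "WAV")
```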

11.
The neural network supporting aspects of syntactic, prosodic, and semantic information processing is specified on the basis of two experiments using functional magnetic resonance imaging (fMRI). In these two studies, the presence/absence of lexical-semantic and syntactic information is systematically varied in spoken language stimuli. Inferior frontal and temporal brain areas in the left and the right hemisphere are identified to support different aspects of auditory language processing. Two additional experiments using event-related brain potentials investigate the possible interaction of syntactic and prosodic information, on the one hand, and syntactic and semantic information, on the other. While the first two information types were shown to interact early during processing, the latter two information types do not. Implications for models of auditory language comprehension are discussed.

12.
We propose that infants may learn about the relative order of heads and complements in their language before they know many words, on the basis of prosodic information (relative prominence within phonological phrases). We present experimental evidence that 6–12‐week‐old infants can discriminate two languages that differ in their head direction and its prosodic correlate, but have otherwise similar phonological properties (i.e. French and Turkish). This result supports the hypothesis that infants may use this kind of prosodic information to bootstrap their acquisition of word order.

13.
Fetal hearing experiences shape the linguistic and musical preferences of neonates. From the very first moments after birth, newborns prefer their native language, recognize their mother's voice, and show greater responsiveness to lullabies presented during pregnancy. Yet, the neural underpinnings of this experience-induced plasticity have remained elusive. Here we recorded the frequency-following response (FFR), an auditory evoked potential elicited by periodic complex sounds, to show that prenatal music exposure is associated with enhanced neural encoding of the periodicity of speech stimuli, which relates to the perceptual experience of pitch. FFRs were recorded in a sample of 60 healthy neonates born at term and aged 12–72 hours. The sample was divided into two groups according to their prenatal musical exposure (29 daily musically exposed; 31 not-daily musically exposed). Prenatal exposure was assessed retrospectively by a questionnaire in which mothers reported how often they sang or listened to music through loudspeakers during the last trimester of pregnancy. The FFR was recorded to either a /da/ or an /oa/ speech-syllable stimulus. Analyses were centered on stimulus sections of identical duration (113 ms) and fundamental frequency (F0 = 113 Hz). Neural encoding of stimulus periodicity was quantified as the FFR spectral amplitude at the stimulus F0. The data revealed that newborns exposed daily to music exhibit larger spectral amplitudes at F0 compared to not-daily musically exposed newborns, regardless of the eliciting stimulus. Our results suggest that prenatal music exposure facilitates tuning to the fundamental frequency of human speech, which may support early language processing and acquisition.

Research Highlights

  • Frequency-following responses to speech were collected from a sample of neonates prenatally exposed to music daily and compared to neonates not-daily exposed to music.
  • Neonates who experienced daily prenatal music exposure exhibit enhanced frequency-following responses to the periodicity of speech sounds.
  • Prenatal music exposure is associated with a fine-tuned encoding of human speech fundamental frequency, which may facilitate early language processing and acquisition.
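The quantification used in study 13, the FFR spectral amplitude at the stimulus F0 (113 Hz over a 113-ms section), amounts to reading one bin of the Fourier spectrum of the averaged response. A minimal Python sketch follows, on a simulated response; the sampling rate and signal are assumptions for illustration only.

```python
# Hedged sketch: spectral amplitude at the stimulus F0 from a simulated
# averaged frequency-following response (FFR).
import numpy as np

fs = 10_000                         # assumed EEG sampling rate (Hz)
f0 = 113.0                          # stimulus fundamental frequency (Hz)
t = np.arange(0, 0.113, 1 / fs)     # 113-ms section matched across stimuli

# Simulated averaged FFR: phase-locked activity at F0 plus noise.
rng = np.random.default_rng(1)
ffr = 0.5 * np.sin(2 * np.pi * f0 * t) + rng.normal(scale=0.3, size=t.size)

# Amplitude spectrum of the averaged response; read out the bin nearest F0.
spectrum = np.abs(np.fft.rfft(ffr)) / t.size
freqs = np.fft.rfftfreq(t.size, 1 / fs)
amp_at_f0 = spectrum[np.argmin(np.abs(freqs - f0))]
print(f"FFR spectral amplitude near {f0} Hz: {amp_at_f0:.3f} (simulated)")
```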

14.
This study focuses on how the body schema develops during the first months of life, by investigating infants’ motor responses to localized vibrotactile stimulation on their limbs. Vibrotactile stimulation was provided by small buzzers that were attached to the infants’ four limbs one at a time. Four age groups were compared cross‐sectionally (3‐, 4‐, 5‐, and 6‐month‐olds). We show that before they actually reach for the buzzer, which, according to previous studies, occurs around 7–8 months of age, infants demonstrate emerging knowledge about their body's configuration by producing specific movement patterns associated with the stimulated body area. At 3 months, infants responded with an increase in general activity when the buzzer was placed on the body, independently of the vibrator's location. Differentiated topographical awareness of the body seemed to appear around 5 months, with specific responses resulting from stimulation of the hands emerging first, followed by the differentiation of movement patterns associated with the stimulation of the feet. Qualitative analyses revealed specific movement types reliably associated with each stimulated location by 6 months of age, possibly preparing infants’ ability to actually reach for the vibrating target. We discuss this result in relation to newborns’ ability to learn specific movement patterns through intersensory contingency.

Statement of contribution

What is already known about infants’ sensorimotor knowledge of their own bodies
  • 3‐month‐olds readily learn to produce specific limb movements to obtain a desired effect (movement of a mobile).
  • Infants detect temporal and spatial correspondences between events involving their own body and visual events.
What the present study adds
  • Until 4–5 months of age, infants mostly produce general motor responses to localized touch.
  • This is because, in the present study, infants could not rely on immediate contingent feedback.
  • We propose a cephalocaudal developmental trend in the topographic differentiation of body areas.

15.
The processing of prosodic boundaries is closely tied to speech comprehension and has become a research focus in psychology and linguistics over the past decade or so. The prosodic system comprises a hierarchy of units from small to large; prosodic constituents at different levels differ in boundary strength, which is reflected in different parameter values on three acoustic cues: pitch, pre-boundary lengthening, and pause. During sentence listening comprehension, listeners process the acoustic cues to prosodic boundaries using a cue-weighting strategy. At the neural level, the brain exhibits independent and specific mechanisms for processing prosodic boundaries. The ability to process prosodic boundaries develops with age after birth, gradually declines in old age, and appears to transfer to a second language. Future work should broaden the range of acoustic manifestations of prosodic boundaries under investigation, further specify the processing of prosodic boundaries, further clarify the relationship between prosodic boundary processing and syntactic processing, and pay closer attention to the development of second-language learners' prosodic boundary processing.

16.
In order to investigate the lateralization of emotional speech we recorded the brain responses to three emotional intonations in two conditions, i.e., "normal" speech and "prosodic" speech (i.e., speech with no linguistic meaning, but retaining the 'slow prosodic modulations' of speech). Participants listened to semantically neutral sentences spoken with a positive, neutral, or negative intonation in both conditions and judged how positive, negative, or neutral the intonation was on a five-point scale. Core peri-sylvian language areas, as well as some frontal and subcortical areas were activated bilaterally in the normal speech condition. In contrast, a bilateral fronto-opercular region was active when participants listened to prosodic speech. Positive and negative intonations elicited a bilateral fronto-temporal and subcortical pattern in the normal speech condition, and more frontal activation in the prosodic speech condition. The current results call into question an exclusive right hemisphere lateralization of emotional prosody and expand patient data on the functional role of the basal ganglia during the perception of emotional prosody.

17.
Child-directed language can support language learning, but how? We addressed two questions: (1) how caregivers prosodically modulated their speech as a function of word familiarity (known or unknown to the child) and accessibility of referent (visually present or absent from the immediate environment); (2) whether such modulations affect children's unknown word learning and vocabulary development. We used data from 38 English-speaking caregivers (from the ECOLANG corpus) talking about toys (both known and unknown to their children aged 3–4 years) both when the toys are present and when absent. We analyzed prosodic dimensions (i.e., speaking rate, pitch and intensity) of caregivers’ productions of 6529 toy labels. We found that unknown labels were spoken with significantly slower speaking rate, wider pitch and intensity range than known labels, especially in the first mentions, suggesting that caregivers adjust their prosody based on children's lexical knowledge. Moreover, caregivers used slower speaking rate and larger intensity range to mark the first mentions of toys that were physically absent. After the first mentions, they talked about the referents louder with higher mean pitch when toys were present than when toys were absent. Crucially, caregivers’ mean pitch of unknown words and the degree of mean pitch modulation for unknown words relative to known words (pitch ratio) predicted children's immediate word learning and vocabulary size 1 year later. In conclusion, caregivers modify their prosody when the learning situation is more demanding for children, and these helpful modulations assist children in word learning.

Research Highlights

  • In naturalistic interactions, caregivers use slower speaking rate, wider pitch and intensity range when introducing new labels to 3–4-year-old children, especially in first mentions.
  • Compared to when toys are present, caregivers speak more slowly with larger intensity range to mark the first mentions of toys that are physically absent.
  • Mean pitch to mark word familiarity predicts children's immediate word learning and future vocabulary size.
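The "pitch ratio" measure in study 17, the degree of mean-pitch modulation for unknown relative to known words, reduces to a simple per-caregiver ratio. A toy Python illustration with invented F0 values:

```python
# Toy illustration of the pitch-ratio measure (values invented).
import numpy as np

mean_f0_unknown = np.mean([265.0, 280.0, 271.0])  # Hz, unknown-label tokens
mean_f0_known = np.mean([240.0, 235.0, 251.0])    # Hz, known-label tokens

pitch_ratio = mean_f0_unknown / mean_f0_known
print(f"pitch ratio = {pitch_ratio:.2f}")  # > 1: higher pitch on unknown words
```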

18.
There is converging evidence that infants are sensitive to prosodic cues from birth onwards and use this kind of information in their earliest steps into the acquisition of words and syntactic regularities of their target language. Regarding word segmentation, it has been found that English-learning infants segment trochaic words by 7.5 months of age, and iambic words only by 10.5 months of age [Jusczyk, P. W., Houston, D. M., & Newsome, M. (1999). The beginnings of word segmentation in English-learning infants. Cognitive Psychology, 39, 159–207]. The question remains how to interpret this finding in relation to results showing that English-learning infants develop a preference for trochaic over iambic words between 6 and 9 months of age [Jusczyk, P. W., Cutler, A., & Redanz, N. (1993). Preference for the predominant stress patterns of English words. Child Development, 64, 675–687]. In the following, we report the results of four experiments using the headturn preference procedure (HPP) to explore the trochaic bias issue in German- and French-learning infants. For German, a trochaic preference was found at 6 but not at 4 months, suggesting an emergence of this preference between both ages (Experiments 1 and 2). For French, 6-month-old infants did not show a preference for either stress pattern (Experiment 3) while they were found to discriminate between the two stress patterns (Experiment 4). Our findings are the first to demonstrate that the trochaic bias is acquired by 6 months of age, is language specific and can be predicted by the rhythmic properties of the language in acquisition. We discuss the implications of this very early acquisition for our understanding of the emergence of segmentation abilities.

19.
Research on early signs of autism in social interactions often focuses on infants’ motor behaviors; few studies have focused on speech characteristics. This study examines infant‐directed speech of mothers of infants later diagnosed with autism (LDA; n = 12) or of typically developing infants (TD; n = 11) as well as infants’ productions (13 LDA, 13 TD). Since LDA infants appear to behave differently in the first months of life, this can affect the functioning of dyadic interactions, especially the first vocal productions, which are sensitive to expressiveness and emotion sharing. We assumed that in the first 6 months of life, prosodic characteristics (mean duration, mean pitch, and intonative contour types) will be different in dyads with autism. We extracted infants’ and mothers’ vocal productions from family home movies and analyzed the mean duration and pitch as well as the pitch contours in interactive episodes. Results show that mothers of LDA infants use relatively shorter productions as compared to mothers talking to TD infants. LDA infants’ productions are not different in duration or pitch, but they use less complex modulated productions (i.e., those with more than two melodic modulations) than do TD. Further studies should focus on developmental profiles in the first year, analyzing prosody monthly.

20.
Brain processes underlying spoken language comprehension comprise auditory encoding, prosodic analysis, and linguistic evaluation. Auditory encoding usually activates both hemispheres, while the language-specific stages are lateralized: analysis of prosodic cues is right-lateralized, while linguistic evaluation is left-lateralized. Here, we investigated to what extent the absence of prosodic information influences lateralization. MEG brain responses indicated that syntactic violations elicited early bilateral brain responses. When the pitch of the sentences was flattened to diminish prosodic cues, the brain's syntax response lateralized to the right hemisphere, indicating that the brain automatically generated the missing pitch information. This represents a Gestalt phenomenon, since we perceive more than is actually presented.
