Similar articles
1.
Prosodic cues drive speech segmentation and guide syllable discrimination. However, less is known about the attentional mechanisms underlying an infant's ability to benefit from prosodic cues. This study investigated how 6- to 8-month-old Italian infants allocate their attention to strong vs. weak syllables after familiarization with four repeats of a single CV sequence with alternating strong and weak syllables (different syllables on each trial). In the discrimination test phase, either the strong or the weak syllable was replaced by a pure tone matching the suprasegmental characteristics of the segmental syllable, i.e., duration, loudness and pitch, whereas the familiarized stimulus was presented as a control. Using an eye-tracker, attention deployment (fixation times) and cognitive resource allocation (pupil dilation) were measured under conditions of high and low saliency, corresponding to the strong and weak syllabic changes, respectively. Italian-learning infants were found to look longer, and to show greater attention through pupil dilation, to strong syllable replacement than to weak syllable replacement, compared to the control condition. These data offer insights into the strategies used by infants to deploy their attention towards segmental units guided by salient prosodic cues, like the stress pattern of syllables, during speech segmentation.

2.
Infant perception often deals with audiovisual speech input, and a first step in processing this input is to perceive both visual and auditory information. The speech directed to infants has special characteristics and may enhance visual aspects of speech. The current study was designed to explore the impact of visual enhancement in infant-directed speech (IDS) on audiovisual mismatch detection in a naturalistic setting. Twenty infants participated in an experiment with a visual fixation task conducted in participants’ homes. Stimuli consisted of IDS and adult-directed speech (ADS) syllables with a plosive and the vowel /a:/, /i:/ or /u:/. These were either audiovisually congruent or incongruent. Infants looked longer at incongruent than congruent syllables and longer at IDS than ADS syllables, indicating that IDS and incongruent stimuli contain cues that can make audiovisual perception challenging and thereby attract infants’ gaze.

3.
Vocal babbling involves production of rhythmic sequences of a mouth close–open alternation giving the perceptual impression of a sequence of consonant–vowel syllables. Petitto and co-workers have argued that vocal babbling rhythm is the same as manual syllabic babbling rhythm, in that it has a frequency of 1 cycle per second. They also assert that adult speech and sign language display the same frequency. However, available evidence suggests that the vocal babbling frequency approximates 3 cycles per second. Both adult spoken language and sign language show higher frequencies than babbling in their respective modalities. No information is currently available on the basic rhythmic parameter of intercyclical variability in either modality. A study of reduplicated syllables produced by 4 infants and 4 adults confirms the 3-per-second vocal babbling rate, as well as a faster rate in adults, and provides new information on intercyclical variability.

4.
Empirical evidence for a functional role of syllables in visual word processing is abundant; however, it remains rather heterogeneous. The present study aims to further specify the role of syllables and the cognitive accessibility of syllabic information in word processing. The first experiment compared performance across naming and lexical decision tasks by manipulating the number of syllables in words and non-words. Results showed a syllable number effect in both the naming task and the lexical decision task. The second experiment introduced a stimulus set consisting of isolated syllabic and non-syllabic trigrams. Syllable frequency was manipulated in a naming task and in a decision task requiring participants to decide on the syllabic status of letter strings. Results showed faster responses for syllables than for non-syllables in both tasks. Syllable frequency effects were observed in the decision task. In summary, the results from these manipulations of different types of syllable information confirm an important role of syllabic units in both recognition and production.

5.
In order to acquire language, infants must extract its building blocks—words—and master the rules governing their legal combinations from speech. These two problems are not independent, however: words also have internal structure. Thus, infants must extract two kinds of information from the same speech input. They must find the actual words of their language. Furthermore, they must identify its possible words, that is, the sequences of sounds that, being morphologically well formed, could be words. Here, we show that infants’ sensitivity to possible words appears to be more primitive and fundamental than their ability to find actual words. We expose 12- and 18-month-old infants to an artificial language containing a conflict between statistically coherent and structurally coherent items. We show that 18-month-olds can extract possible words when the familiarization stream contains marks of segmentation, but cannot do so when the stream is continuous. Yet, they can find actual words from a continuous stream by computing statistical relationships among syllables. By contrast, 12-month-olds can find possible words when familiarized with a segmented stream, but seem unable to extract statistically coherent items from a continuous stream that contains minimal conflicts between statistical and structural information. These results suggest that sensitivity to word structure is in place earlier than the ability to analyze distributional information. The ability to compute nontrivial statistical relationships becomes fully effective relatively late in development, when infants have already acquired a considerable amount of linguistic knowledge. Thus, mechanisms for structure extraction that do not rely on extensive sampling of the input are likely to have a much larger role in language acquisition than general-purpose statistical abilities.

6.
To better understand how infants process complex auditory input, this study investigated whether 11-month-old infants perceive the pitch (melodic) or the phonetic (lyric) components within songs as more salient, and whether melody facilitates phonetic recognition. Using a preferential looking paradigm, uni-dimensional and multi-dimensional songs were tested; either the pitch or syllable order of the stimuli varied. As a group, infants detected a change in pitch order in a 4-note sequence when the syllables were redundant (experiment 1), but did not detect the identical pitch change with variegated syllables (experiment 2). Infants were better able to detect a change in syllable order in a sung sequence (experiment 2) than the identical syllable change in a spoken sequence (experiment 1). These results suggest that by 11 months, infants cannot “ignore” phonetic information in the context of perceptually salient pitch variation. Moreover, the increased phonetic recognition in song contexts mirrors findings that demonstrate advantages of infant-directed speech. Findings are discussed in terms of how stimulus complexity interacts with the perception of sung speech in infancy.

7.
Comparisons between infant-directed and adult-directed speech were conducted to determine whether word-final syllables are highlighted in infant-directed speech. Samples of adult-directed and infant-directed speech were collected from 8 mothers of 6-month-old and 8 mothers of 9-month-old infants. Mothers were asked to label seven objects both to an experimenter and to their infant. Duration, pitch, and amplitude were measured for whole words and for each of the target word syllables. As in prior research, the infant-directed targets were higher pitched and longer than adult-directed targets. The results also extend beyond previous findings in showing that lengthening of final syllables in infant-directed speech is particularly exaggerated. Analyses comparing word-final versus nonfinal unstressed syllables in utterance-medial position in infant-directed speech showed that lengthening of unstressed word-final syllables occurs even in utterance-internal positions. These results suggest a mechanism for proposals that word-final syllables are perceptually salient to young children.

8.
Recent evidence suggests a division of labor in the phonological analysis underlying speech recognition. Adults and children appear to decompose the speech stream into phoneme-relevant information and into syllable stress. Here we investigate whether both speech processing streams develop from a common path in infancy, or whether there are two separate streams from early on. We presented stressed and unstressed syllables (spoken primes) followed by initially stressed, early-learned disyllabic German words (spoken targets). Stress overlap and phoneme overlap between the primes and the initial syllable of the targets varied orthogonally. We tested infants 3, 6 and 9 months after birth. Event-related potentials (ERPs) revealed stress priming without phoneme priming in the 3-month-olds; phoneme priming without stress priming in the 6-month-olds; and phoneme priming, stress priming, as well as an interaction of both in 9-month-olds. In general, the present findings reveal that infants start with separate processing streams related to syllable stress and to phoneme-relevant information, and that they need to learn to merge both aspects of speech processing. In particular, the present results suggest (i) that phoneme-free prosodic processing dominates in early infancy; (ii) that prosody-free phoneme processing dominates in middle infancy; and (iii) that both types of processing operate in parallel and can be merged in late infancy.

9.
Levy J  Yovel G  Bean M 《Brain and language》2003,87(3):432-440
The influence of lateralized unattended stimuli on the processing of attended stimuli in the opposite visual field can shed light on the nature of information that is transferred between hemispheres. On a cued bilateral task, participants tried to identify a syllable in the attended visual field, which elicits a left hemisphere (LH) advantage and different processing strategies by the two hemispheres. The same or a different syllable or a neutral stimulus appeared in the unattended field. Transmission of unattended syllable codes between hemispheres is symmetric, as revealed by equal interference for the two visual fields. The LH is more accurate than the RH in encoding unattended syllables, as indicated by facilitation in the left but not right visual field and a greater frequency of identifiable intrusions into the left than right field. However, asymmetric encoding strategies are different for attended and unattended syllables.

10.
Before infants can learn words, they must identify those words in continuous speech. Yet, the speech signal lacks obvious boundary markers, which poses a potential problem for language acquisition (Swingley, Philos Trans R Soc Lond Series B, Biol Sci 364 (1536), 3617–3632, 2009). By the middle of the first year, infants seem to have solved this problem (Bergelson & Swingley, Proc Natl Acad Sci 109 (9), 3253–3258, 2012; Jusczyk & Aslin, Cogn Psychol 29, 1–23, 1995), but it is unknown if segmentation abilities are present from birth, or if they only emerge after sufficient language exposure and/or brain maturation. Here, in two independent experiments, we looked at two cues known to be crucial for the segmentation of human speech: the computation of statistical co-occurrences between syllables and the use of the language's prosody. After a brief familiarization of about 3 min with continuous speech, using functional near-infrared spectroscopy, neonates showed differential brain responses on a recognition test to words that violated either the statistical (Experiment 1) or prosodic (Experiment 2) boundaries of the familiarization, compared to words that conformed to those boundaries. Importantly, word recognition in Experiment 2 occurred even in the absence of prosodic information at test, meaning that newborns encoded the phonological content independently of its prosody. These data indicate that humans are born with operational language processing and memory capacities and can use at least two types of cues to segment otherwise continuous speech, a key first step in language acquisition.

11.
We examined whether the orientation of the face influences speech perception in face-to-face communication. Participants identified auditory syllables, visible syllables, and bimodal syllables presented in an expanded factorial design. The syllables were /ba/, /va/, /ða/, or /da/. The auditory syllables were taken from natural speech, whereas the visible syllables were produced by computer animation of a realistic talking face. The animated face was presented either in normal upright orientation or inverted orientation (180° frontal rotation). The central intent of the study was to determine whether an inverted view of the face would change the nature of processing bimodal speech or simply influence the information available in visible speech. The results with both the upright and inverted face views were adequately described by the fuzzy logical model of perception (FLMP). The observed differences in the FLMP’s parameter values corresponding to the visual information indicate that inverting the view of the face influences the amount of visible information but does not change the nature of the information processing in bimodal speech perception.

12.
Recent findings have revealed that very preterm neonates already show the typical brain responses to place of articulation changes in stop consonants, but data on their sensitivity to other types of phonetic changes remain scarce. Here, we examined the impact of 7–8 weeks of extra-uterine life on the automatic processing of syllables in 20 healthy moderate preterm infants (mean gestational age at birth 33 weeks) matched in maturational age with 20 full-term neonates, thus differing in their previous auditory experience. This design allows elucidating the contribution of extra-uterine auditory experience in the immature brain to the encoding of linguistically relevant speech features. Specifically, we collected brain responses to natural CV syllables differing in three dimensions using a multi-feature mismatch paradigm, with the syllable /ba/ as the standard and three deviants: a pitch change, a vowel change to /bo/ and a consonant voice-onset time (VOT) change to /pa/. No significant between-group differences were found for pitch and consonant VOT deviants. However, moderate preterm infants showed attenuated responses to vowel deviants compared to full terms. These results suggest that moderate preterm infants' limited prenatal experience with low-pass filtered speech can hinder vowel change detection, and that exposure to natural speech after birth does not seem to improve this capacity. These data are in line with recent evidence suggesting a sequential development of a hierarchical functional architecture of speech processing that is highly sensitive to early auditory experience.

13.
Cholin J  Levelt WJ  Schiller NO 《Cognition》2006,99(2):205-235
In the speech production model proposed by Levelt, Roelofs, and Meyer (1999) [A theory of lexical access in speech production. Behavioral and Brain Sciences, 22, 1-75], syllables play a crucial role at the interface of phonological and phonetic encoding. At this interface, abstract phonological syllables are translated into phonetic syllables. It is assumed that this translation process is mediated by a so-called mental syllabary. Rather than constructing the motor programs for each syllable on-line, the mental syllabary is hypothesized to provide pre-compiled gestural scores for the articulators. In order to find evidence for such a repository, we investigated syllable-frequency effects: if the mental syllabary consists of retrievable representations corresponding to syllables, then the retrieval process should be sensitive to frequency differences. In a series of experiments using a symbol-position association learning task, we tested whether high-frequency syllables are retrieved and produced faster than low-frequency syllables. We found significant syllable frequency effects with monosyllabic pseudo-words and disyllabic pseudo-words in which the first syllable bore the frequency manipulation; no effect was found when the frequency manipulation was on the second syllable. The implications of these results for the theory of word form encoding at the interface of phonological and phonetic encoding, especially with respect to the access mechanisms to the mental syllabary in the speech production model of Levelt et al. (1999), are discussed.

14.
New advances in research on the neural mechanisms of Chinese language processing
刘丽虹  张积家  谭力海 《心理科学》2004,27(5):1165-1167
With improvements in brain research methods, our understanding of the neural mechanisms underlying Chinese language processing has become broader and deeper. A large body of research shows that Chinese is processed by brain mechanisms different from those of alphabetic languages: the brain regions activated when processing Chinese differ from those activated when processing alphabetic languages, and native speakers of Chinese and native speakers of alphabetic languages also show significant differences in brain morphology. Research on the brain mechanisms of Chinese processing helps to dissociate the cortical representations of phoneme and syllable processing, and to reveal what is universal and what is language-specific in the brain mechanisms of language processing.

15.
Newborns are able to extract and learn repetition-based regularities from the speech input, that is, they show greater brain activation in the bilateral temporal and left inferior frontal regions to trisyllabic pseudowords of the form AAB (e.g., “babamu”) than to random ABC sequences (e.g., “bamuge”). Whether this ability is specific to speech or also applies to other auditory stimuli remains unexplored. To investigate this, we tested whether newborns are sensitive to regularities in musical tones. Neonates listened to AAB and ABC tones sequences, while their brain activity was recorded using functional Near-Infrared Spectroscopy (fNIRS). The paradigm, the frequency of occurrence and the distribution of the tones were identical to those of the syllables used in previous studies with speech. We observed a greater inverted (negative) hemodynamic response to AAB than to ABC sequences in the bilateral temporal and fronto-parietal areas. This inverted response was caused by a decrease in response amplitude, attributed to habituation, over the course of the experiment in the left fronto-temporal region for the ABC condition and in the right fronto-temporal region for both conditions. These findings show that newborns’ ability to discriminate AAB from ABC sequences is not specific to speech. However, the neural response to musical tones and spoken language is markedly different. Tones gave rise to habituation, whereas speech was shown to trigger increasing responses over the time course of the study. Relatedly, the repetition regularity gave rise to an inverted hemodynamic response when carried by tones, while it was canonical for speech. Thus, newborns’ ability to detect repetition is not speech-specific, but it engages distinct brain mechanisms for speech and music.

Research Highlights

  • Newborns' ability to detect repetition-based regularities is not specific to speech, but extends to other auditory stimuli.
  • The brain mechanisms underlying speech and music processing are markedly different.

16.
Fourteen native speakers of German heard normal sentences, sentences which were either lacking dynamic pitch variation (flattened speech), or comprised of intonation contour exclusively (degraded speech). Participants were to listen carefully to the sentences and to perform a rehearsal task. Passive listening to flattened speech compared to normal speech produced strong brain responses in right cortical areas, particularly in the posterior superior temporal gyrus (pSTG). Passive listening to degraded speech compared to either normal or flattened speech particularly involved fronto-opercular and subcortical (putamen, caudate nucleus) regions bilaterally. Additionally, the Rolandic operculum (premotor cortex) in the right hemisphere subserved processing of neat sentence intonation. As a function of explicitly rehearsing sentence intonation we found several activation foci in the left inferior frontal gyrus (Broca's area), the left inferior precentral sulcus, and the left Rolandic fissure. The data allow several suggestions: First, both flattened and degraded speech evoked differential brain responses in the pSTG, particularly in the planum temporale (PT) bilaterally, indicating that this region mediates integration of slowly and rapidly changing acoustic cues during comprehension of spoken language. Second, the bilateral circuit active whilst participants receive degraded speech reflects general effort allocation. Third, the differential finding for passive perception and explicit rehearsal of intonation contour suggests a right fronto-lateral network for processing and a left fronto-lateral network for producing prosodic information. Finally, it appears that brain areas which subserve speech (frontal operculum) and premotor functions (Rolandic operculum) coincidently support the processing of intonation contour in spoken sentence comprehension.

17.
This paper reports a finding of hemispheric brain asymmetry for speech in short-gestation infants (mean gestational age = 36 weeks). Using a new measure—degree of reduction in limb tremors following exposure to speech stimuli compared to two control groups, one hearing orchestral music, the other no patterned stimuli—we found that speech disproportionately affected right limb movements. It is not clear whether the effect is due to asymmetries in cortical or subcortical processing. This provides evidence against the notion that brain specialization for language functions necessarily appears over time; rather, specialization for some functions (e.g., speech reception) must be present at birth.

18.
Infants as young as 2 months can integrate audio and visual aspects of speech articulation. A shift of attention from the eyes towards the mouth of talking faces occurs around 6 months of age in monolingual infants. However, it is unknown whether this pattern of attention during audiovisual speech processing is influenced by speech and language experience in infancy. The present study investigated this question by analysing audiovisual speech processing in three groups of 4- to 8-month-old infants who differed in their language experience: monolinguals, unimodal bilinguals (infants exposed to two or more spoken languages) and bimodal bilinguals (hearing infants with Deaf mothers). Eye-tracking was used to study patterns of face scanning while infants were viewing faces articulating syllables with congruent, incongruent and silent auditory tracks. Monolinguals and unimodal bilinguals increased their attention to the mouth of talking faces between 4 and 8 months, while bimodal bilinguals did not show any age difference in their scanning patterns. Moreover, older (6.6 to 8 months), but not younger, monolinguals (4 to 6.5 months) showed increased visual attention to the mouth of faces articulating audiovisually incongruent rather than congruent syllables, indicating surprise or novelty. In contrast, no audiovisual congruency effect was found in unimodal or bimodal bilinguals. Results suggest that speech and language experience influences audiovisual integration in infancy. Specifically, reduced or more variable experience of audiovisual speech from the primary caregiver may lead to less sensitivity to the integration of audio and visual cues of speech articulation.

19.
This paper reports a finding of hemispheric brain asymmetry for speech in short-gestation infants (mean gestational age = 36 weeks). Using a new measure—degree of reduction in limb tremors following exposure to speech stimuli compared to two control groups, one hearing orchestral music, the other no patterned stimuli—we found that speech disproportionately affected right limb movements. It is not clear whether the effect is due to asymmetries in cortical or subcortical processing. This provides evidence against the notion that brain specialization for language functions necessarily appears over time; rather, specialization for some functions (e.g., speech reception) must be present at birth.

20.
赵荣  王小娟  杨剑峰 《心理学报》2016,48(8):915-923
Understanding how suprasegmental information (such as lexical tone) and segmental information jointly operate is of major theoretical importance for research on spoken word recognition. Previous studies have examined the role of tone at the stage of semantic access in spoken word recognition, but at the relatively early stage of syllable perception, the joint contribution of tone, onset (initial consonant) and rime (final) remains poorly understood. Using an oddball paradigm, the present study investigated the role of tone in Mandarin syllable perception in two behavioral experiments. Experiment 1 found no difference between the times needed to detect tone changes and onset changes, both of which were longer than the time needed to detect rime changes, indicating that in Mandarin syllable perception sensitivity to tone is lower than sensitivity to the rime. Experiment 2 found that a combined onset-plus-rime change was not detected notably better than a rime change alone, whereas a simultaneous change of tone together with either the onset or the rime facilitated detection of the deviant stimulus, indicating that tone influences Mandarin syllable perception jointly, in combination with the onset and the rime. At the behavioral level, these results provide direct experimental evidence on the role of tone in syllable perception and lay the groundwork for further investigation of the cognitive and neural mechanisms by which suprasegmental and segmental information interact.


Copyright © Beijing Qinyun Technology Development Co., Ltd.  京ICP备09084417号