Similar articles
20 similar articles retrieved.
1.
Infant-directed maternal speech is an important component of infants’ linguistic input. However, speech from other speakers and speech directed to others constitute a large amount of the linguistic environment. What are the properties of infant-directed speech that differentiate it from other components of infants’ speech environment? To what extent should these other aspects be considered as part of the linguistic input? This review examines the characteristics of the speech input to preverbal infants, including phonological, morphological, and syntactic characteristics, specifically how these properties might support language development. While maternal, infant-directed speech is privileged in the input, other aspects of the environment, such as adult-directed speech, may also play a role. Furthermore, the input is variable in nature, dependent on the age and linguistic development of the infant, the social context, and the interaction between the infant and speakers in the environment.

2.
Prosody or speech melody subserves linguistic (e.g., question intonation) and emotional functions in speech communication. Findings from lesion studies and imaging experiments suggest that, depending on function or acoustic stimulus structure, prosodic speech components are differentially processed in the right and left hemispheres. This direct current (DC) potential study investigated the linguistic processing of digitally manipulated pitch contours of sentences that carried an emotional or neutral intonation. Discrimination of linguistic prosody was better for neutral stimuli as compared to happily as well as fearfully spoken sentences. Brain activation was increased during the processing of happy sentences as compared to neutral utterances. Neither neutral nor emotional stimuli evoked lateralized processing in the left or right hemisphere, indicating bilateral mechanisms of linguistic processing for pitch direction. Acoustic stimulus analysis suggested that prosodic components related to emotional intonation, such as pitch variability, interfered with linguistic processing of pitch course direction.

3.
Räsänen O. Cognition, 2011, (2): 149-176
Word segmentation from continuous speech is a difficult task that is faced by human infants when they start to learn their native language. Several studies indicate that infants may draw on multiple cues to solve this problem, including intonation, linguistic stress, and transitional probabilities between subsequent speech sounds. In this work, a computational model for word segmentation and learning of primitive lexical items from continuous speech is presented. The model does not utilize any a priori linguistic or phonemic knowledge such as phones, phonemes or articulatory gestures, but computes transitional probabilities between atomic acoustic events in order to detect recurring patterns in speech. Experiments with the model show that word segmentation is possible without any knowledge of linguistically relevant structures, and that the learned ungrounded word models show a relatively high selectivity towards specific words or frequently co-occurring combinations of short words.
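As a rough illustration of the transitional-probability idea described in this abstract (a minimal sketch only, not Räsänen's model, which operates on learned atomic acoustic events rather than letter labels; the corpus, unit labels, and threshold below are invented for the example):

```python
# Minimal sketch: segment a stream of discrete event labels by placing boundaries
# where the transitional probability between adjacent units is low.
import random
from collections import Counter

def transitional_probs(sequence):
    """Estimate P(next | current) from bigram counts over discrete labels."""
    bigrams = Counter(zip(sequence, sequence[1:]))
    unigrams = Counter(sequence[:-1])
    return {(a, b): count / unigrams[a] for (a, b), count in bigrams.items()}

def segment(sequence, threshold=0.5):
    """Insert a boundary wherever the transitional probability drops below threshold."""
    tp = transitional_probs(sequence)
    chunks, current = [], [sequence[0]]
    for a, b in zip(sequence, sequence[1:]):
        if tp[(a, b)] < threshold:
            chunks.append(current)
            current = []
        current.append(b)
    chunks.append(current)
    return chunks

# Toy corpus: random concatenation of three "words"; within-word transitions are
# deterministic, between-word transitions are not, so boundaries fall between words.
random.seed(0)
corpus = list("".join(random.choice(["AB", "CDE", "FG"]) for _ in range(200)))
print(segment(corpus)[:6])  # e.g. [['A', 'B'], ['C', 'D', 'E'], ['F', 'G'], ...]
```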

4.
A detailed acoustic analysis of timing, intensity, and fundamental frequency (F0) at different levels of linguistic structure was conducted on the speech output of a Broca's aphasic who was a native speaker of Thai. Timing was measured with respect to syllables, phrases, and sentences in connected speech. Intensity variation at the sentence level was measured in connected speech. F0 variation associated with the five Thai tones was measured in both isolated words and connected speech. Results indicated that timing was differentially impaired depending upon complexity of articulatory gesture and size of the linguistic structure. Timing, as well as intensity, was aberrant at the sentence level. In contrast, F0 contours of the five tones were spared at all levels of linguistic structure. Findings are interpreted to support the view that dysprosody in Broca's aphasia is more applicable to speech timing than to F0.

5.
Recent studies have shown that the presentation of concurrent linguistic context can lead to highly efficient performance in a standard conjunction search task by the induction of an incremental search strategy (Spivey, Tyler, Eberhard, & Tanenhaus, 2001). However, these findings were obtained under anomalously slow speech rate conditions. Accordingly, in the present study, the effects of concurrent linguistic context on visual search performance were compared when speech was recorded at both a normal rate and a slow rate. The findings provided clear evidence that the visual search benefit afforded by concurrent linguistic context was contingent on speech rate, with normal speech producing a smaller benefit. Overall, these findings have important implications for understanding how linguistic and visual processes interact in real time and suggest a disparity in the temporal resolution of speech comprehension and visual search processes.

6.
The language of some patients diagnosed as schizophrenic appears to result from a disruption in the ability to order linguistic elements into meaningful structures. This disruption affects different levels of language at different times, even in the same patient, thus causing six definable characteristics of schizophrenic speech. These characteristics are discoverable only through a linguistic analysis of a corpus of such speech.

7.
Two experiments in which time was restored to artificially accelerated (time-compressed) speech are reported. Experiment 1 showed that although both young and older adults' recall of the speech benefited from the restoration of time, time restoration failed to boost the older adults to their baseline levels for unaltered speech. In Experiment 2, either 100% or 125% of lost time was restored by inserting pauses, either at linguistic boundaries or at random points within the passages. Experiment 2 showed that the beneficial effects of time restoration depended on where processing time was inserted, as well as how much time was restored. Results are interpreted in terms of age-related slowing in speech processing moderated by preserved linguistic knowledge and short-term conceptual memory.

8.
Theories of language production propose that utterances are constructed by a mechanism that separates linguistic content from linguistic structure. Linguistic content is retrieved from the mental lexicon, and is then inserted into slots in linguistic structures or frames. Support for this kind of model at the phonological level comes from patterns of phonological speech errors. We present an alternative account of these patterns using a connectionist or parallel distributed processing (PDP) model that learns to produce sequences of phonological features. The model's errors exhibit some of the properties of human speech errors, specifically, properties that have been attributed to the action of phonological rules, frames, or other structural generalizations.
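To make the frame-and-slot account summarized in this abstract concrete, here is a toy illustration (an invented, hedged example; it is not the authors' PDP model nor any published implementation): syllable frames expose typed slots, and an exchange error that respects slot type automatically preserves syllable position.

```python
# Toy illustration of frame-and-slot filling (simplified CV(C) syllables assumed).
from dataclasses import dataclass

@dataclass
class SyllableFrame:
    onset: str = ""
    nucleus: str = ""
    coda: str = ""

    def spell(self):
        return self.onset + self.nucleus + self.coda

def fill_frames(segments):
    """Insert (onset, nucleus, coda) triples into successive syllable frames."""
    return [SyllableFrame(*triple) for triple in segments]

# Intended utterance "left hem" as two simplified syllables.
intended = [("l", "e", "ft"), ("h", "e", "m")]
# A frame-based exchange error swaps content between slots of the SAME type
# (here the onsets), so the error keeps each sound in its syllable position.
error = [("h", "e", "ft"), ("l", "e", "m")]
print(" ".join(f.spell() for f in fill_frames(intended)))  # left hem
print(" ".join(f.spell() for f in fill_frames(error)))     # heft lem
```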

9.
Speech perception is an ecologically important example of the highly context-dependent nature of perception; adjacent speech, and even nonspeech, sounds influence how listeners categorize speech. Some theories emphasize linguistic or articulation-based processes in speech-elicited context effects and peripheral (cochlear) auditory perceptual interactions in non-speech-elicited context effects. The present studies challenge this division. Results of three experiments indicate that acoustic histories composed of sine-wave tones drawn from spectral distributions with different mean frequencies robustly affect speech categorization. These context effects were observed even when the acoustic context temporally adjacent to the speech stimulus was held constant and when more than a second of silence or multiple intervening sounds separated the nonlinguistic acoustic context and speech targets. These experiments indicate that speech categorization is sensitive to statistical distributions of spectral information, even if the distributions are composed of nonlinguistic elements. Acoustic context need be neither linguistic nor local to influence speech perception.

10.
Language acquisition involves more than learning the abstract structures of linguistic competence. The child also has to learn how to use linguistic structures appropriately. In this paper, the speech act is proposed as the unit of analysis for studying the pragmatics of early child language. The results of a study of children's uses of single-word utterances are reported, and the data are analyzed in terms of “primitive speech acts.”

11.
The author reports on a series of integrated studies on melodic contours in infant-directed (ID) speech. ID melodies in speech are taken as an instructive example of intuitive parenting in order to review current evidence on its forms, functions and determinants. The forms and functions of melodic prototypes are compared in terms of universal properties and individual and/or cultural variability across samples of German, Chinese and American mothers, and German mothers and fathers with their 2- and 3-month-old infants. Microanalyses of interactional contexts show that forms and functions of ID melodies are intimately related to typical dimensions of intuitive caregiving: arousing/soothing, turn-yielding/turn-closing, approving/disapproving. The communicative functions of ID melodies as both categorical and graded signals are discussed with respect to the current knowledge on infant responses to ID speech and on early speech perception. According to a comprehensive longitudinal study of ID speech in relation to stages of infant vocalization, ID speech results from fine-tuned adjustments in various prosodic and linguistic features to developmental changes in infants' perceptual and vocal competence. ID melodies evidently have the potential to draw infant attention to caregivers' speech, to regulate arousal and affect in infants, to provide models for imitation, to guide infants in practising communicative subroutines and to mediate linguistic information. Current evidence suggests that the melodies in caregivers' speech provide a species-specific guidance towards language acquisition.

12.
Process and strategy in memory for speech among younger and older adults
Younger and older adults listened to and immediately recalled short passages of speech that varied in the rate of presentation and in the degree of linguistic and prosodic cuing. Although older adults showed a differential decrease in recall performance as a function of increasing speech rate, age differences in recall were reduced by the presence of linguistic and prosodic cues. Under conditions of optimum linguistic redundancy, older adults were also found to add more words and to make more meaning-producing reconstructions in recall. Differences in overall performance are accounted for in terms of age-related changes in working memory processing and strategy utilization.

13.
The so-called syllable position effect in speech errors has been interpreted as reflecting constraints posed by the frame structure of a given language, which operates separately from linguistic content during speech production. The effect refers to the phenomenon that when a speech error occurs, replaced and replacing sounds tend to be in the same position within a syllable or word. Most of the evidence for the effect comes from analyses of naturally occurring speech errors in Indo-European languages, and there are few studies examining the effect in experimentally elicited speech errors and in other languages. This study examined whether experimentally elicited sound errors in Japanese exhibit the syllable position effect. In Japanese, the sub-syllabic unit known as “mora” is considered to be a basic sound unit in production. Results showed that the syllable position effect occurred in mora errors, suggesting that the frame constrains the ordering of sounds during speech production.

14.
15.
The effect of language-driven eye movements in a visual scene with concurrent speech was examined using complex linguistic stimuli and complex scenes. The processing demands were manipulated using speech rate and the temporal distance between mentioned objects. This experiment differs from previous research by using complex photographic scenes, three-sentence utterances and mentioning four target objects. The main finding was that objects that are more slowly mentioned, more evenly placed and isolated in the speech stream are more likely to be fixated after having been mentioned and are fixated faster. Surprisingly, even objects mentioned in the most demanding conditions still show an effect of language-driven eye-movements. This supports research using concurrent speech and visual scenes, and shows that the behavior of matching visual and linguistic information is likely to generalize to language situations of high information load.

16.
The ability to appropriately reciprocate or compensate a partner's communicative response represents an essential element of communicative competence. Previous research indicates that as children grow older, their speech levels reflect greater adaptation relative to their partner's speech. In this study, we argue that patterns of adaptation are related to specific linguistic and pragmatic abilities, such as verbal responsiveness, involvement in the interaction, and the production of relatively complex syntactic structures. Thirty-seven children (3–6 years of age) individually interacted with an adult for 20 to 30 minutes. Adaptation between child and adult was examined for conversational floor time, response latency, and speech rate. Three conclusions were drawn from the results of this investigation. First, by applying time-series analysis to the interactants' speech behaviors within each dyad, individual measures of the child's adaptations to the adult's speech can be generated. Second, consistent with findings in the adult domain, these children generally reciprocated changes in the adult's speech rate and response latency. Third, there were differences in degree and type of adaptation within specific dyads. Chronological age was not useful in accounting for this individual variation, but specific linguistic and social abilities were. Implications of these findings for the development of communicative competence and for the study of normal versus language-delayed speech were discussed.
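As a hedged sketch of the kind of per-dyad, time-series adaptation measure this abstract mentions (the study's exact analysis is not reproduced here; the function name, lag choice, and simulated rates are illustrative assumptions):

```python
# Sketch of a per-dyad adaptation index: correlate the adult's turn-by-turn
# speech rate with the child's rate on the following turn.
import numpy as np

def lagged_adaptation(adult_rates, child_rates, lag=1):
    """Pearson correlation between adult rate at turn t and child rate at turn t+lag.
    A positive value suggests the child reciprocates the adult's changes."""
    adult = np.asarray(adult_rates, dtype=float)[:-lag]
    child = np.asarray(child_rates, dtype=float)[lag:]
    return np.corrcoef(adult, child)[0, 1]

# Toy dyad: the child's rate loosely tracks the adult's rate from the previous turn.
rng = np.random.default_rng(1)
adult = 4.0 + rng.normal(0, 0.5, 30)                       # syllables/sec per turn
child = np.empty_like(adult)
child[0] = 3.0
child[1:] = 3.0 + 0.6 * (adult[:-1] - 4.0) + rng.normal(0, 0.3, 29)
print(round(lagged_adaptation(adult, child), 2))           # clearly positive
```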

17.
18.
Speaking fundamental frequency (SFF), the average fundamental frequency (the lowest frequency of a complex periodic sound) measured over the speaking time of a vocal or speech task, is a basic acoustic measure in the clinical evaluation and treatment of voice disorders. Currently, there are few data on the acoustic characteristics of different sociolinguistic groups, and no published data on the fundamental frequency characteristics of Arabic speech. The purpose of this study was to obtain preliminary data on the SFF characteristics of a group of normal-speaking young Arabic men. Fifteen native Arabic-speaking men (M age = 23.5 yr., SD = 2.5) participated and received identical experimental treatment. Four speech samples were collected from each participant: Arabic reading, Arabic spontaneous speech, English reading, and English spontaneous speech. The samples, analyzed using the Computerized Speech Lab, showed no significant difference in mean SFF between language and type of speech and none between languages. A significant difference in mean SFF was found between the types of speech: the SFF used during reading was significantly higher than that for spontaneous speech. The Arabic men also had higher SFF values than those previously reported for young men in other linguistic groups. SFF thus might differ among linguistic, dialectal, and social groups, and such data may provide clinicians with information useful in the evaluation and management of voice.
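A minimal sketch of how mean SFF might be computed from a recording, assuming the librosa package and its pyin pitch tracker rather than the Computerized Speech Lab used in the study; the file names and frequency bounds below are placeholders, not details from the study:

```python
# Compute speaking fundamental frequency (SFF) as the mean F0 over voiced frames.
import numpy as np
import librosa

def speaking_f0(path, fmin=65.0, fmax=300.0):
    """Mean fundamental frequency (Hz) across voiced frames of a speech recording."""
    y, sr = librosa.load(path, sr=None)
    f0, voiced_flag, _ = librosa.pyin(y, fmin=fmin, fmax=fmax, sr=sr)
    return float(np.nanmean(f0[voiced_flag]))

# Example (hypothetical files): compare SFF for a read passage vs. spontaneous speech.
# print(speaking_f0("arabic_reading.wav"), speaking_f0("arabic_spontaneous.wav"))
```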

19.
Bilingual and monolingual infants differ in how they process linguistic aspects of the speech signal. But do they also differ in how they process non‐linguistic aspects of speech, such as who is talking? Here, we addressed this question by testing Canadian monolingual and bilingual 9‐month‐olds on their ability to learn to identify native Spanish‐speaking females in a face‐voice matching task. Importantly, neither group was familiar with Spanish prior to participating in the study. In line with our predictions, bilinguals succeeded in learning the face‐voice pairings, whereas monolinguals did not. We consider multiple explanations for this finding, including the possibility that simultaneous bilingualism enhances perceptual attentiveness to talker‐specific speech cues in infancy (even in unfamiliar languages), and that early bilingualism delays perceptual narrowing to language‐specific talker recognition cues. This work represents the first evidence that multilingualism in infancy affects the processing of non‐linguistic aspects of the speech signal, such as talker identity.

20.
We examined how the type of masker presented in the background affected the extent to which visual information enhanced speech recognition, and whether the effect was dependent on or independent of age and linguistic competence. In the present study, young speakers of English as a first language (YEL1) and English as a second language (YEL2), as well as older speakers of English as a first language (OEL1), were asked to complete an audio (A) and an audiovisual (AV) speech recognition task in which they listened to anomalous target sentences presented against a background of one of three masker types (noise, babble, and competing speech). All three main effects were found to be statistically significant (group, masker type, A vs. AV presentation type). Interesting two-way interactions were found between masker type and group and between masker type and presentation type; however, no interactions were found between group (age and/or linguistic competence) and presentation type (A vs. AV). The results of this study, while they shed light on the effect of masker type on the AV advantage, suggest that age and linguistic competence have no significant effects on the extent to which a listener is able to use visual information to improve speech recognition in background noise.
