Similar Literature
20 similar documents found (search time: 31 ms)
1.
A series of papers appearing in Brain and Language ask whether schizophrenic language irregularities can be understood in linguistic terms. This literature is reviewed and the contrary positions of different authors are highlighted. The clinical presentation of a schizophrenic male is described. In a single interview he produced a set of paragrammatical errors which are noteworthy insofar as they indicate sustained epochs of diminished expressivity. In this sense, they differ from schizophasic deviance, which is described by Lecours and Vanier-Clément (Brain and Language, 3, 516-565, 1976) as an enhanced expressivity co-occurring with intact language competence. They are also partially decodable, which distinguishes them from the schizophrenic segments discussed by Chaika. Analyses of the paragrammatisms indicate disruptions at three discrete representational levels. One involves the formation of abstract speaker intentions, while the second organizes syntagms into some serial form, and the third level takes content words belonging to a particular syntagm and positions them in a syntactic frame. A microgenetic model of these representational planes is proposed that is based on the theoretical perspective of Brown, as well as Garrett's investigations of normal speech errors. The model is justified insofar as the paragrammatisms indicate "linguistic regressions" back to more "thought-like" linguistic representations. Moreover, a recapitulation of specific linguistic mappings is demonstrated to occur between processing levels. This microgenetic model represents an extension of previous work in aphasiology insofar as it targets combinatorial rather than selectional processes as primary planes of disruption.

2.
Agrammatic aphasia is characterized by severely reduced grammatical structure in spoken and written language, often accompanied by apparent insensitivity to grammatical structure in comprehension. Does agrammatism represent loss of linguistic competence or rather performance factors such as memory or resource limitations? A considerable body of evidence supports the latter hypothesis in the domain of comprehension. Here we present the first strong evidence for the performance hypothesis in the domain of production: an augmentative communication system that markedly increases the grammatical structure of agrammatic speech while providing no linguistic information, functioning merely to reduce on-line processing demands. Copyright 2000 Academic Press and Unisys Corporation.

3.
Sighted speakers of different languages vary systematically in how they package and order components of a motion event in speech. These differences influence how semantic elements are organized in gesture, but only when those gestures are produced with speech (co‐speech gesture), not without speech (silent gesture). We ask whether the cross‐linguistic similarity in silent gesture is driven by the visuospatial structure of the event. We compared 40 congenitally blind adult native speakers of English or Turkish (20/language) to 80 sighted adult speakers (40/language; half with, half without blindfolds) as they described three‐dimensional motion scenes. We found an effect of language on co‐speech gesture, not on silent gesture—blind speakers of both languages organized their silent gestures as sighted speakers do. Humans may have a natural semantic organization that they impose on events when conveying them in gesture without language—an organization that relies on neither visuospatial cues nor language structure.

4.
5.
Infant-directed maternal speech is an important component of infants’ linguistic input. However, speech from other speakers and speech directed to others constitute a large amount of the linguistic environment. What are the properties of infant-directed speech that differentiate it from other components of infants’ speech environment? To what extent should these other aspects be considered as part of the linguistic input? This review examines the characteristics of the speech input to preverbal infants, including phonological, morphological, and syntactic characteristics, specifically how these properties might support language development. While maternal, infant-directed speech is privileged in the input, other aspects of the environment, such as adult-directed speech, may also play a role. Furthermore, the input is variable in nature, dependent on the age and linguistic development of the infant, the social context, and the interaction between the infant and speakers in the environment.

6.
Chaika (1974, Brain and Language, 1, 257–276) raised the possibility that speech judged “schizophrenic” by researchers results from an intermittent, cyclical aphasia. Fromkin (1975, Brain and Language, 2, 498–503) judged such speech as no different from normal error and even as proof of intactness of linguistic competence. These claims are examined. Lecours and Vanier-Clément (1976, Brain and Language, 3, 516–565), on the other hand, found it not normal, but different from jargonaphasia, a claim not wholly substantiated. Schizophrenic speech is found to be different from that of normals on both formal and intuitive grounds, contributing to new understanding of the differences between normal, pathological, and artistic language.

7.
To what extent can language acquisition be explained in terms of different associative learning mechanisms? It has been hypothesized that distributional regularities in spoken languages are strong enough to elicit statistical learning about dependencies among speech units. Distributional regularities could be a useful cue for word learning even without rich language‐specific knowledge. However, it is not clear how strong and reliable the distributional cues are that humans might use to segment speech. We investigate cross‐linguistic viability of different statistical learning strategies by analyzing child‐directed speech corpora from nine languages and by modeling possible statistics‐based speech segmentations. We show that languages vary as to which statistical segmentation strategies are most successful. The variability of the results can be partially explained by systematic differences between languages, such as rhythmical differences. The results confirm previous findings that different statistical learning strategies are successful in different languages and suggest that infants may have to primarily rely on non‐statistical cues when they begin their process of speech segmentation.
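The transitional-probability strategy evaluated in these corpus analyses can be sketched as follows. This is a toy illustration, not the authors' models: the syllable corpus and the 0.75 boundary threshold are invented for the example.

```python
from collections import Counter

def transitional_probabilities(utterances):
    """Estimate forward transitional probabilities P(b|a) between
    adjacent syllables, pooled over a list of syllable sequences."""
    pair_counts, first_counts = Counter(), Counter()
    for utt in utterances:
        for a, b in zip(utt, utt[1:]):
            pair_counts[(a, b)] += 1
            first_counts[a] += 1
    return {(a, b): n / first_counts[a] for (a, b), n in pair_counts.items()}

def segment(utt, tps, threshold=0.75):
    """Posit a word boundary wherever the TP between adjacent
    syllables falls below the threshold (a local-minimum variant
    is also common in the literature)."""
    words, current = [], [utt[0]]
    for a, b in zip(utt, utt[1:]):
        if tps.get((a, b), 0.0) < threshold:
            words.append(current)
            current = []
        current.append(b)
    words.append(current)
    return words

# Toy corpus: three "words" concatenated without pauses, so that
# within-word TPs are high and across-word TPs are low.
corpus = [["ba", "by", "dog", "gy"], ["dog", "gy", "ba", "by"],
          ["ba", "by", "kit", "ty"], ["kit", "ty", "dog", "gy"],
          ["dog", "gy", "kit", "ty"]]
tps = transitional_probabilities(corpus)
print(segment(["ba", "by", "dog", "gy"], tps))  # → [['ba', 'by'], ['dog', 'gy']]
```

The cross-linguistic point of the abstract is precisely that a fixed strategy like this one succeeds to different degrees depending on the language's rhythmic and distributional properties.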

8.
Speaking fundamental frequency (SFF), the average fundamental frequency (lowest frequency of a complex periodic sound) measured over the speaking time of a vocal or speech task, is a basic acoustic measure in clinical evaluation and treatment of voice disorders. Currently, there are few data on acoustic characteristics of different sociolinguistic groups, and no published data on the fundamental frequency characteristics of Arabic speech. The purpose of this study was to obtain preliminary data on the SFF characteristics of a group of normal speaking, young Arabic men. Fifteen native Arabic-speaking men (M age = 23.5 yr., SD = 2.5) received identical experimental treatment. Four speech samples were collected from each participant: Arabic reading, Arabic spontaneous speech, English reading, and English spontaneous speech. The samples, analyzed using the Computerized Speech Lab, showed no significant language × speech-type interaction for mean SFF and no significant difference in mean SFF between languages. A significant difference in mean SFF was found between the types of speech: SFF during reading was significantly higher than during spontaneous speech. The Arabic men also had higher SFF values than those previously reported for young men in other linguistic groups. SFF thus might differ among linguistic, dialectal, and social groups, and such data may provide clinicians with information useful in the evaluation and management of voice.
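SFF as defined above is just the mean fundamental frequency over the voiced portions of a speech task. A minimal sketch of how such a value could be computed (not the study's Computerized Speech Lab pipeline; the 40 ms frame length, autocorrelation method, and 0.5 voicing threshold are illustrative assumptions):

```python
import numpy as np

def frame_f0(frame, sr, fmin=60.0, fmax=400.0):
    """Estimate F0 of one frame from the autocorrelation peak within
    the plausible pitch-period range; return None if the frame shows
    weak periodicity (treated as unvoiced)."""
    frame = frame - frame.mean()
    ac = np.correlate(frame, frame, mode="full")[len(frame) - 1:]
    if ac[0] <= 0:
        return None
    lo, hi = int(sr / fmax), int(sr / fmin)
    lag = lo + int(np.argmax(ac[lo:hi]))
    return sr / lag if ac[lag] / ac[0] > 0.5 else None

def speaking_f0(signal, sr, frame_ms=40):
    """Mean F0 over the voiced frames of a recording -- the quantity
    reported as SFF."""
    n = int(sr * frame_ms / 1000)
    f0s = [frame_f0(signal[i:i + n], sr)
           for i in range(0, len(signal) - n, n)]
    voiced = [f for f in f0s if f is not None]
    return sum(voiced) / len(voiced)

# Synthetic check: a 120 Hz tone with one harmonic should yield SFF near 120 Hz.
sr = 16000
t = np.arange(sr) / sr
tone = np.sin(2 * np.pi * 120 * t) + 0.5 * np.sin(2 * np.pi * 240 * t)
print(round(speaking_f0(tone, sr), 1))
```

Real clinical tools add voicing detection, octave-error correction, and perturbation measures on top of this basic mean.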

9.
Bilingual and monolingual infants differ in how they process linguistic aspects of the speech signal. But do they also differ in how they process non‐linguistic aspects of speech, such as who is talking? Here, we addressed this question by testing Canadian monolingual and bilingual 9‐month‐olds on their ability to learn to identify native Spanish‐speaking females in a face‐voice matching task. Importantly, neither group was familiar with Spanish prior to participating in the study. In line with our predictions, bilinguals succeeded in learning the face‐voice pairings, whereas monolinguals did not. We consider multiple explanations for this finding, including the possibility that simultaneous bilingualism enhances perceptual attentiveness to talker‐specific speech cues in infancy (even in unfamiliar languages), and that early bilingualism delays perceptual narrowing to language‐specific talker recognition cues. This work represents the first evidence that multilingualism in infancy affects the processing of non‐linguistic aspects of the speech signal, such as talker identity.

10.
This article attempts to sketch out a view of language as a relational-linguistic spiral by discussing some implications of the thought of Ludwig Wittgenstein for language in general. Language is cast as a spiral which revolves around a center of 'human relationality' that anchors all our speech and concepts but which revolves in an ever-widening way into an arena of meaning we call language. Language creates linguistic space for experience and invites one into these new experiences. The borders of our language are thus not the absolute limits of our world but the admitted limits of our experience. Because the enterprise of language is inherently open, there must be a space for theological language and for the possibility at least of the kind of experiences described therein. Tracing the relational 'vectors' involved in language can also provide a platform for theological and even inter-religious dialogue.

11.
The underlying structures that are common to the world's languages bear an intriguing connection with early emerging forms of “core knowledge” (Spelke & Kinzler, 2007), which are frequently studied by infant researchers. In particular, grammatical systems often incorporate distinctions (e.g., the mass/count distinction) that reflect those made in core knowledge (e.g., the non‐verbal distinction between an object and a substance). Here, I argue that this connection occurs because non‐verbal core knowledge systematically biases processes of language evolution. This account potentially explains a wide range of cross‐linguistic grammatical phenomena that currently lack an adequate explanation. Second, I suggest that developmental researchers and cognitive scientists interested in (non‐verbal) knowledge representation can exploit this connection to language by using observations about cross‐linguistic grammatical tendencies to inspire hypotheses about core knowledge.

12.
13.
14.
Three social-interaction behaviors of a withdrawn, chronic schizophrenic were increased using a discriminated avoidance (“nagging”) procedure. The three behaviors were: (a) voice volume loud enough so that two-thirds of his speech was intelligible at a distance of 3 m; (b) duration of speech of at least 15 sec; (c) placement of hands and elbows on the armrests of the chair in which he was sitting. “Nagging” consisted of verbal prompts to improve performance when the behaviors did not meet their criteria. A combined withdrawal and multiple-baseline design was used to evaluate the effectiveness of the procedure, and the contingency was sequentially applied to each of the three behaviors in each of four different interactions to determine the degree of stimulus and response generalization. Results indicated that the contingency was the effective element in increasing the patient's appropriate performance, and that there was a high degree of stimulus generalization and a moderate degree of response generalization. After the patient's discharge from the hospital, the durability of improvement across time and setting was determined in followup sessions conducted at a day treatment center and at a residential care home. Volume and duration generalized well to the new settings, while arm placement extinguished immediately.

15.
This article examines caregiver speech to young children. The authors obtained several measures of the speech used to children during early language development (14-30 months). For all measures, they found substantial variation across individuals and subgroups. Speech patterns vary with caregiver education, and the differences are maintained over time. While there are distinct levels of complexity for different caregivers, there is a common pattern of increase across age within the range that characterizes each educational group. Thus, caregiver speech exhibits both long-standing patterns of linguistic behavior and adjustment for the interlocutor. This information about the variability of speech by individual caregivers provides a framework for systematic study of the role of input in language acquisition.

16.
Event Related Potentials (ERPs) were recorded from Spanish-English bilinguals (N = 10) to test pre-attentive speech discrimination in two language contexts. ERPs were recorded while participants silently read magazines in English or Spanish. Two speech contrast conditions were recorded in each language context. In the phonemic in English condition, the speech sounds represented two different phonemic categories in English, but represented the same phonemic category in Spanish. In the phonemic in Spanish condition, the speech sounds represented two different phonemic categories in Spanish, but represented the same phonemic categories in English. Results showed pre-attentive discrimination when the acoustics/phonetics of the speech sounds match the language context (e.g., phonemic in English condition during the English language context). The results suggest that language contexts can affect pre-attentive auditory change detection. Specifically, bilinguals’ mental processing of stop consonants relies on contextual linguistic information.

17.
Is the observed link between musical ability and non‐native speech‐sound processing due to enhanced sensitivity to acoustic features underlying both musical and linguistic processing? To address this question, native English speakers (N = 118) discriminated Norwegian tonal contrasts and Norwegian vowels. Short tones differing in temporal, pitch, and spectral characteristics were used to measure sensitivity to the various acoustic features implicated in musical and speech processing. Musical ability was measured using Gordon's Advanced Measures of Musical Audiation. Results showed that sensitivity to specific acoustic features played a role in non‐native speech‐sound processing: Controlling for non‐verbal intelligence, prior foreign language‐learning experience, and sex, sensitivity to pitch and spectral information partially mediated the link between musical ability and discrimination of non‐native vowels and lexical tones. The findings suggest that while sensitivity to certain acoustic features partially mediates the relationship between musical ability and non‐native speech‐sound processing, complex tests of musical ability also tap into other shared mechanisms.

18.
Räsänen, O. (2011). Cognition, (2), 149–176.
Word segmentation from continuous speech is a difficult task that is faced by human infants when they start to learn their native language. Several studies indicate that infants might use several different cues to solve this problem, including intonation, linguistic stress, and transitional probabilities between subsequent speech sounds. In this work, a computational model for word segmentation and learning of primitive lexical items from continuous speech is presented. The model does not utilize any a priori linguistic or phonemic knowledge such as phones, phonemes or articulatory gestures, but computes transitional probabilities between atomic acoustic events in order to detect recurring patterns in speech. Experiments with the model show that word segmentation is possible without any knowledge of linguistically relevant structures, and that the learned ungrounded word models show a relatively high selectivity towards specific words or frequently co-occurring combinations of short words.

19.
Linguistic encoding in short-term memory as a function of stimulus type
In this study, we investigated bases for encoding linguistic stimuli in short-term memory. Past research has provided evidence for both phonological (sound-based) and cherological (sign-based) encoding, the former typically found with hearing subjects and the latter with deaf users of sign language. In the present experiment, encoding capabilities were delineated from encoding preferences, using 58 subjects comprising six groups differing in hearing ability and linguistic experience. Phonologically related, cherologically related, and control lists were presented orally, manually, or through both modalities simultaneously. Recall performance indicated that individuals encode flexibly, the code actually used being biased by incoming stimulus characteristics. Subjects with both sign and speech experience recalled simultaneous presentations better than ones presented orally or manually alone, which reveals the occurrence of enhanced encoding as a function of linguistic experience. Total linguistic experience appeared to determine recall accuracy following different types of encoding, rather than determining the encoding basis used.

20.
Unlike our primate cousins, many species of bird share with humans a capacity for vocal learning, a crucial factor in speech acquisition. There are striking behavioural, neural and genetic similarities between auditory-vocal learning in birds and human infants. Recently, the linguistic parallels between birdsong and spoken language have begun to be investigated. Although both birdsong and human language are hierarchically organized according to particular syntactic constraints, birdsong structure is best characterized as 'phonological syntax', resembling aspects of human sound structure. Crucially, birdsong lacks semantics and words. Formal language and linguistic analysis remains essential for the proper characterization of birdsong as a model system for human speech and language, and for the study of the brain and cognition evolution.


Copyright © 北京勤云科技发展有限公司 (Beijing Qinyun Technology Development Co., Ltd.)  京ICP备09084417号