Similar Articles
20 similar articles found.
1.
Research on signed languages offers the opportunity to address many important questions about language that may not be possible to address via studies of spoken languages alone. Many such studies, however, are inherently limited, because almost no norms exist for the lexical variables that have been shown to play important roles in spoken language processing. Here, we present a set of norms for age of acquisition, familiarity, and iconicity for 300 British Sign Language (BSL) signs, as rated by deaf signers, in the hope that they will prove useful to other researchers studying BSL and other signed languages. These norms may be downloaded from www.psychonomic.org/archive.
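Norm sets like this are typically distributed as a table with one row per sign. As a purely illustrative sketch (the file name and column names below are hypothetical, not the actual archive format), such ratings could be loaded and filtered in Python:

```python
import pandas as pd

# Hypothetical file and column names; the real BSL norms archive
# may use a different layout.
norms = pd.read_csv("bsl_norms.csv")  # columns: sign, aoa, familiarity, iconicity

# Select early-acquired, highly iconic signs for a stimulus set.
early_iconic = norms[(norms["aoa"] <= 4.0) & (norms["iconicity"] >= 5.0)]
print(early_iconic.sort_values("familiarity", ascending=False).head())
```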

2.
To what extent is the neural organization of language dependent on factors specific to the modalities in which language is perceived and through which it is produced? That is, is the left-hemisphere dominance for language a function of a linguistic specialization or a function of some domain-general specialization(s), such as temporal processing or motor planning? Investigations of the neurobiology of signed language can help answer these questions. As with spoken languages, signed languages of the deaf display complex grammatical structure but are perceived and produced via radically different modalities. Thus, by mapping out the neurological similarities and differences between signed and spoken language, it is possible to identify modality-specific contributions to brain organization for language. Research to date has shown a significant degree of similarity in the neurobiology of signed and spoken languages, suggesting that the neural organization of language is largely modality-independent.

3.
Fundamental to infants' acquisition of their native language is an inherent preference for the language spoken around them over non-linguistic environmental sounds. The following studies explored whether this bias for linguistic signals in hearing infants is specific to speech, or reflects a general bias for all human language, spoken and signed. Results indicate that 6-month-old infants prefer an unfamiliar, visual-gestural language (American Sign Language) over non-linguistic pantomime, but 10-month-olds do not. These data provide evidence against a speech-specific bias in early infancy and provide insights into those properties of human languages that may underlie this language-general attentional bias.

4.
Across languages, children map words to meaning with great efficiency, despite a seemingly unconstrained space of potential mappings. The literature on how children do this is largely limited to spoken language, leaving a gap in our understanding of sign language acquisition, because several of the hypothesized mechanisms children use are visual (e.g., visual attention to the referent) and sign languages are perceived in the visual modality. Here, we used the Human Simulation Paradigm in American Sign Language (ASL) to identify potential cues to word learning. Sign-naïve adult participants viewed video clips of parent–child interactions in ASL and, at a designated point, had to guess which ASL sign the parent had produced. Across two studies, we demonstrate that referential clarity in ASL interactions is characterized by access to information about word class and referent presence (for verbs), similarly to spoken language. Unlike in spoken language, iconicity is a cue to word meaning in ASL, although it is not always a fruitful cue. We also present evidence that verbs are well highlighted in the input relative to spoken English. The results shed light on both the similarities and the differences in the information that learners may have access to in acquiring signed versus spoken languages.

5.
Intelligence has long been seen as linked to the spoken and written word. Because most deaf people have poor spoken language skills and find reading a significant challenge, there is a history in both psychology and education of considering deaf individuals to be less intelligent or less cognitively flexible than hearing individuals. With progress in understanding natural signed languages and cognitive abilities of individuals who lack spoken language, this perspective has changed. We now recognise, for example, that deaf people have some advantages in visuospatial ability relative to hearing people, and there is a link between the use of natural signed languages and enhanced visuospatial abilities in several domains. Such findings contrast with results found in memory, where the modality of mental representation, experience, and organisation of knowledge lead to differences in performance between deaf and hearing individuals, usually in favour of the latter. Such findings demonstrate that hearing loss and use of a natural sign language can influence intellectual abilities, including many tapped by standardised IQ tests. These findings raise interesting questions about the place of spoken language in our understanding of intelligence and ways in which we can use basic research for applied purposes.

6.
7.
Bimodal bilinguals are hearing individuals who know both a signed and a spoken language. Effects of bimodal bilingualism on behavior and brain organization are reviewed, and an fMRI investigation of the recognition of facial expressions by ASL-English bilinguals is reported. The fMRI results reveal separate effects of sign language and spoken language experience on activation patterns within the superior temporal sulcus. In addition, the strong left-lateralized activation for facial expression recognition previously observed for deaf signers was not observed for hearing signers. We conclude that both sign language experience and deafness can affect the neural organization for recognizing facial expressions, and we argue that bimodal bilinguals provide a unique window into the neurocognitive changes that occur with the acquisition of two languages.

8.
Bimodal bilinguals, fluent in a signed and a spoken language, provide unique insight into the nature of syntactic integration and language control. We investigated whether bimodal bilinguals who are conversing with English monolinguals produce American Sign Language (ASL) grammatical facial expressions to accompany parallel syntactic structures in spoken English. In ASL, raised eyebrows mark conditionals, and furrowed eyebrows mark wh-questions; the grammatical brow movement is synchronized with the manual onset of the clause. Bimodal bilinguals produced more ASL-appropriate facial expressions than did nonsigners and synchronized their expressions with the onset of the corresponding English clauses. This result provides evidence for a dual-language architecture in which grammatical information can be integrated up to the level of phonological implementation. Overall, participants produced more raised brows than furrowed brows, possibly because furrowed brows can convey negative affect. Bimodal bilinguals suppressed but did not completely inhibit ASL facial grammar when it conflicted with conventional facial gestures. We conclude that morphosyntactic elements from two languages can be articulated simultaneously and that complete inhibition of the nonselected language is difficult.

9.
Does age constrain the outcome of all language acquisition equally, regardless of whether the language is a first or a second one? To test this hypothesis, the English grammatical abilities of deaf and hearing adults who either did or did not have linguistic experience (spoken or signed) during early childhood were investigated with two tasks: timed grammatical judgement and untimed sentence-to-picture matching. Findings showed that adults who acquired a language in early life performed at near-native levels on a second language, regardless of whether they were hearing or deaf or whether the early language was spoken or signed. By contrast, deaf adults who experienced little or no accessible language in early life performed poorly. These results indicate that the onset of language acquisition in early human development dramatically alters the capacity to learn language throughout life, independent of the sensory-motor form of the early experience.

10.
Sign language phonological parameters are somewhat analogous to phonemes in spoken language. Unlike phonemes, however, there is little linguistic literature arguing that these parameters interact at the sublexical level. This situation raises the question of whether such interaction in spoken language phonology is an artifact of the modality or whether sign language phonology has not been approached in a way that allows one to recognize sublexical parameter interaction. We present three studies in favor of the latter alternative: a shape-drawing study with deaf signers from six countries, an online dictionary study of American Sign Language, and a study of selected lexical items across 34 sign languages. These studies show that, once iconicity is considered, handshape and movement parameters interact at the sublexical level. Thus, consideration of iconicity makes transparent similarities in grammar across both modalities, allowing us to maintain certain key findings of phonological theory as evidence of cognitive architecture.

11.
The cognitive neuroscience of signed language
The present article is an assessment of the current state of knowledge in the field of the cognitive neuroscience of signed language. Reviewed lesion data from signers with aphasia show that the left hemisphere is dominant for the perception and production of signed language, in a fashion similar to spoken language aphasia. Several neuropsychological dissociations support this claim: non-linguistic visuospatial functions can be dissociated from linguistic spatial functions, and general motor deficits can be dissociated from the execution of signs. Reviewed imaging data corroborate the lesion data in that the importance of the left hemisphere is reconfirmed. The data also establish a role for the right hemisphere in signed language processing; alternative hypotheses regarding which aspects of signed language processing are handled by the right hemisphere are currently being tested. The second section of the paper starts by addressing the role that early acquisition of signed and spoken language plays in neurofunctional activation patterns in the brain. Compensatory cognitive and communicative enhancements have also been documented as a function of early sign language use, suggesting an interesting interaction between language and cognition. Recent behavioural data on sign processing in working memory (a cognitive system important for language perception and production) suggest, for example, phonological loop effects analogous to those obtained for speech processing. Neuroimaging studies will have to address this potential commonality.

12.
Geraci C, Gozzi M, Papagno C, Cecchetto C. Cognition, 2008, 106(2): 780-804.
It is known that memory span is shorter in American Sign Language (ASL) than in English, but this discrepancy has never been systematically investigated using other pairs of signed and spoken languages. The finding is at odds with results showing that short-term memory (STM) for signs has an internal organization similar to that of STM for words, and some methodological questions remain open. We therefore measured the spans of deaf and matched hearing participants for Italian Sign Language (LIS) and Italian, respectively, controlling for all the variables that might be responsible for the discrepancy; a difference in span between deaf signers and hearing speakers was nevertheless found. However, the advantage of the hearing participants disappeared in a visuo-spatial STM task. We attribute the lower span to the internal structure of signs: unlike English (or Italian) words, signs contain both simultaneous and sequential components. Sign languages are nonetheless fully fledged grammatical systems, probably because the overall architecture of the grammar of signed languages reduces the STM load. Our hypothesis is that the faculty of language depends on STM, while being flexible enough to develop even in a relatively hostile environment.
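Span in studies like this is scored from recall of lists of increasing length. As a purely illustrative sketch (one common scoring convention, not necessarily the procedure used by Geraci et al.), with hypothetical trial data:

```python
def span_score(results):
    """Longest list length at which at least one trial was recalled
    correctly. This is one common scoring rule, used here only for
    illustration; the study's actual rule may differ."""
    passed = [length for length, trials in sorted(results.items())
              if any(trials)]
    return max(passed) if passed else 0

# Hypothetical participant: two trials per list length.
lis_signer = {3: [True, True], 4: [True, False], 5: [False, False]}
print(span_score(lis_signer))  # -> 4
```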

13.
Language selection in bilingual speech: evidence for inhibitory processes
Kroll JF, Bobb SC, Misra M, Guo T. Acta Psychologica, 2008, 128(3): 416-430.
Although bilinguals rarely make random errors of language when they speak, research on spoken production provides compelling evidence that both languages are active even when only one language is spoken (e.g., Poulisse, N. (1999). Slips of the tongue: Speech errors in first and second language production. Amsterdam/Philadelphia: John Benjamins). Moreover, the parallel activation of the two languages appears to characterize the planning of speech for highly proficient bilinguals as well as second language learners. In this paper, we first review the evidence for cross-language activity during single-word production and then consider the two major alternative models of how the intended language is eventually selected. According to language-specific selection models, both languages may be active, but bilinguals develop the ability to selectively attend to candidates in the intended language. The alternative model, in which candidates from both languages compete for selection, requires that cross-language activity be modulated to allow selection to occur. On the latter view, the selection mechanism may require that candidates in the nontarget language be inhibited. We consider the evidence for such an inhibitory mechanism in a series of recent behavioral and neuroimaging studies.

14.
We report on a 27-year-old woman with chronic auditory agnosia following Landau-Kleffner Syndrome (LKS), diagnosed at age 4½. She grew up in the hearing/speaking community with some exposure to manually coded English and American Sign Language (ASL); manually coded (signed) English is her preferred mode of communication. Comprehension and production of spoken language remain severely compromised. Disruptions in auditory processing can be observed in tests of pitch and duration, suggesting that her disorder is not specific to language. Linguistic analysis of her signed, spoken, and written English indicates that her language system is intact but compromised by impoverished input during the critical period for the acquisition of spoken phonology. Specifically, although her sign language phonology is intact, her spoken language phonology is markedly impaired. We argue that deprivation of auditory input during a period critical for the development of a phonological grammar and of auditory–verbal short-term memory has limited her lexical and syntactic development in specific ways.

15.
The brain basis of bilinguals' ability to use two languages at the same time has been a hotly debated topic. On the one hand, behavioral research has suggested that bilingual dual-language use involves complex and highly principled linguistic processes. On the other hand, brain-imaging research has revealed that bilingual language switching involves neural activation in brain areas dedicated to general executive functions not specific to language processing, such as general task maintenance. Here we address the involvement of language-specific versus cognitive-general brain mechanisms in bilingual language processing. We study a unique population, bimodal bilinguals proficient in signed and spoken languages, and we use an innovative brain-imaging technology, functional near-infrared spectroscopy (fNIRS; Hitachi ETG-4000). Like fMRI, fNIRS measures hemodynamic change, but it has the added advantage of permitting movement, allowing unconstrained speech and sign production. Participant groups included (i) hearing ASL–English bilinguals, (ii) ASL monolinguals, and (iii) English monolinguals. Imaging tasks included picture naming in "Monolingual mode" (using one language at a time) and in "Bilingual mode" (using both languages either simultaneously or in rapid alternation). Behavioral results revealed that accuracy was similar across groups and conditions. By contrast, neuroimaging results revealed that bilinguals in Bilingual mode showed greater signal intensity within posterior temporal regions ("Wernicke's area") than in Monolingual mode. Significance: bilinguals' ability to use two languages effortlessly and without confusion involves the use of language-specific posterior temporal brain regions. This research with both fNIRS and bimodal bilinguals sheds new light on the extent and variability of brain tissue that underlies language processing, and addresses the tantalizing question of how language modality, sign versus speech, impacts language representation in the brain.

16.
Previous studies have shown that in so-called opaque languages (those in which spelling does not correspond to pronunciation), there are relatively independent routes for lexical and nonlexical processing, that is, for words and nonwords, in both spoken and written language. For so-called transparent languages (those in which pronunciation corresponds to written form), by contrast, empirical evidence is scarcer. In this study of a neurological patient with a parieto-temporal lesion, a speaker of a transparent language (Spanish) who showed a specific deficit in nonlexical reading, lexical processing of words was relatively preserved. This finding suggests that distinct routes are also used in the processing of transparent languages.

17.
Larger communities face more communication barriers. We propose that languages spoken by larger communities adapt to and overcome these greater barriers by increasing their reliance on sound symbolism, since sound symbolism can facilitate communication. To test whether widely spoken languages are more sound symbolic, participants listened to recordings of the words big and small in widely spoken and less common languages and guessed their meanings. Accuracy was higher for words from widely spoken languages, providing evidence that widely spoken languages harbor more sound symbolism. Preliminary results also suggest that widely spoken languages rely on different sound symbolic patterns than less common languages do. Community size can thus shape linguistic forms and influence the tools that languages use to facilitate communication.
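A natural way to score such a guessing task is against the 50% chance level for a two-alternative meaning (big vs. small). A minimal sketch, with made-up response counts rather than the study's actual data:

```python
from scipy.stats import binomtest

# Hypothetical counts: correct guesses out of total trials per condition.
widely_spoken = binomtest(k=78, n=100, p=0.5, alternative="greater")
less_common = binomtest(k=58, n=100, p=0.5, alternative="greater")

print(f"widely spoken: p = {widely_spoken.pvalue:.4f}")
print(f"less common:  p = {less_common.pvalue:.4f}")
```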

18.
19.
The bilingual advantage (enhanced cognitive control relative to monolinguals) possibly arises from experience engaging general cognitive mechanisms in order to manage two languages. Supporting this hypothesis is evidence that bimodal (signed language and spoken language) bilinguals do not demonstrate such an advantage, presumably because the distinct language modalities reduce conflict and control demands. We hypothesized that the mechanism responsible for the bilingual advantage is the interplay between (a) the magnitude of bilingual management demands and (b) the amount of experience managing those demands. We recruited adult bimodal bilinguals with high bilingual management demands and examined their cognitive control and working memory capacity longitudinally. After gaining experience managing high bilingual management demands, participants outperformed their own performance from two years earlier on the cognitive abilities associated with managing those demands. These results suggest that cognitive control outcomes for bilinguals vary as a function of the mechanisms recruited during bilingual management and the amount of experience managing the bilingual demands.

20.
In tonal languages, such as Mandarin Chinese and Thai, word meaning is partially determined by lexical tones. Previous studies suggest that native listeners process lexical tones as linguistic information rather than as purely tonal information. This study aims to verify whether, in speakers of non-tonal languages, the discrimination of Mandarin lexical tones varies as a function of melodic ability. Forty-six students with no previous experience of Mandarin or any other tonal language were presented with two short lists of spoken monosyllabic Mandarin words and asked to perform a same–different task, identifying whether the variations were phonological or tonal. The main results show that participants performed significantly better at identifying phonological variations than tonal ones; interestingly, the group with high melodic ability (assessed by Wing subtest 3) performed better exclusively in detecting tonal variations.
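Discrimination in a same–different task like this is often summarized with a signal detection measure rather than raw accuracy. A minimal sketch, using the simple yes/no d′ formula (a common approximation; dedicated same–different models also exist) and made-up counts rather than the study's data:

```python
from scipy.stats import norm

def dprime(hits, misses, false_alarms, correct_rejections):
    """Yes/no d-prime with a log-linear correction so that hit or
    false-alarm rates of 0 or 1 do not produce infinite z-scores."""
    hit_rate = (hits + 0.5) / (hits + misses + 1)
    fa_rate = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1)
    return norm.ppf(hit_rate) - norm.ppf(fa_rate)

# Hypothetical listener: "different" responses on tonal-variation trials.
print(round(dprime(hits=32, misses=8, false_alarms=12, correct_rejections=28), 2))
```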
