Related Articles
20 related articles found
1.
In the course of language development children must solve arbitrary form-to-meaning mappings, in which semantic components are encoded onto linguistic labels. Because sign languages describe the motion and location of entities through iconic movements and placement of the hands in space, child signers may find the mapping of spatial semantics onto language easier to learn than child speakers do. This hypothesis was tested in two studies: a longitudinal analysis of a native signing child's use of British Sign Language to describe motion and location events between the ages of 1;10 and 3;0, and the performance of 18 native signing children between the ages of 3;0 and 4;11 on a motion and location sentence comprehension task. The results from both studies argue against a developmental advantage for sign language learners in the acquisition of motion and location forms. Early forms point towards gesture and embodied actions, followed by protracted mastery of the use of signs in representational space. The understanding of relative spatial relations remains difficult beyond 5 years of age, despite the iconicity of these forms in the language.

2.
The recent emergence of a new sign language among deaf children and adolescents in Nicaragua provides an opportunity to study how grammatical features of a language arise and spread, and how new language environments are constructed. The grammatical regularities that underlie language use reside largely outside the domain of explicit awareness. Nevertheless, knowledge of these regularities must be transmitted from one generation to the next to survive as part of the language. During this transmission, language form and use are shaped both by the characteristics of ontogenetic development within individual users and by historical changes in patterns of interaction between users. To capture this process, the present study follows the emergence of spatial modulations in Nicaraguan Sign Language (NSL). A comprehension task examining interpretations of spatially modulated verbs reveals that new form-function mappings arise among children who functionally differentiate previously equivalent forms. The new mappings are then acquired by their age peers (who are also children) and by subsequent generations of children who learn the language, but not by adult contemporaries. As a result, language emergence is characterized by a convergence on form within each age cohort, and a mismatch in form from one age cohort to the cohort that follows. In this way, each age cohort, in sequence, transforms the language environment for the next, enabling each new cohort of learners to develop further than its predecessors.

3.
The present study investigates whether knowledge about the intentional relationship between gesture and speech influences controlled processes when integrating the two modalities at comprehension. Thirty-five adults watched short videos of gesture and speech that conveyed semantically congruous and incongruous information. In half of the videos, participants were told that the two modalities were intentionally coupled (i.e., produced by the same communicator), and in the other half, they were told that the two modalities were not intentionally coupled (i.e., produced by different communicators). When participants knew that the same communicator produced the speech and gesture, there was a larger bilateral frontal and central N400 effect to words that were semantically incongruous versus congruous with gesture. However, when participants knew that different communicators produced the speech and gesture (that is, when gesture and speech were not intentionally meant to go together), the N400 effect was present only in right-hemisphere frontal regions. The results demonstrate that pragmatic knowledge about the intentional relationship between gesture and speech modulates controlled neural processes during the integration of the two modalities.

4.
A fundamental advance in our understanding of human language would come from a detailed account of how non-linguistic and linguistic manual actions are differentiated in real time by language users. To explore this issue, we targeted the N400, an ERP component known to be sensitive to semantic context. Deaf signers saw 120 American Sign Language sentences, each consisting of a “frame” (a sentence without the last word; e.g. BOY SLEEP IN HIS) followed by a “last item” belonging to one of four categories: a high-cloze-probability sign (a “semantically reasonable” completion to the sentence; e.g. BED), a low-cloze-probability sign (a real sign that is nonetheless a “semantically odd” completion to the sentence; e.g. LEMON), a pseudo-sign (a phonologically legal but non-lexical form), or a non-linguistic grooming gesture (e.g. the performer scratching her face). We found significant N400-like responses in the incongruent and pseudo-sign contexts, while the gestures elicited a large positivity.

5.
Does knowledge of language transfer across language modalities? For example, can speakers who have had no sign language experience spontaneously project grammatical principles of English to American Sign Language (ASL) signs? To address this question, here we explore a grammatical illusion. Using spoken language, we first show that a single word with doubling (e.g., trafraf) can elicit conflicting linguistic responses, depending on the level of linguistic analysis (phonology vs. morphology). We next show that speakers with no command of a sign language extend these same principles to novel ASL signs. Remarkably, the morphological analysis of ASL signs depends on the morphology of participants' spoken language. Speakers of Malayalam (a language with rich reduplicative morphology) prefer XX signs when doubling signals morphological plurality, whereas no such preference is seen in speakers of Mandarin (a language with no productive plural morphology). Our conclusions open up the possibility that some linguistic principles are amodal and abstract.

6.
The mu rhythms (8–13 Hz) and the beta rhythms (15–30 Hz) of the EEG are observed at the central electrodes (C3, Cz and C4) in resting states, and become suppressed when participants perform a manual action or when they observe another’s action. This has led researchers to consider these rhythms electrophysiological markers of mirror neuron activity in humans. This study tested whether the comprehension of action language, unlike abstract language, modulates mu and low beta rhythms (15–20 Hz) in a similar way to the observation of real actions. Log-ratios were calculated for each oscillatory band between each condition and baseline resting periods. The results indicated that both action language and action videos caused mu and beta suppression (negative log-ratios), whereas abstract language did not, confirming the hypothesis that understanding action language activates motor networks in the brain. In other words, the resonance of motor areas associated with action language is compatible with the embodiment approach to linguistic meaning.
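For readers who want to see the suppression index concretely, here is a minimal sketch of the log-ratio computation described in the abstract. The sampling rate, the exact band limits, and the use of Welch's method are illustrative assumptions, not the study's actual pipeline; negative values indicate suppression relative to baseline.

```python
# Minimal sketch of the suppression index described above: the log-ratio of
# band power in a task condition relative to a resting baseline. Negative
# values indicate suppression. Sampling rate, band limits, and Welch's
# method are illustrative assumptions, not the study's exact pipeline.
import numpy as np
from scipy.signal import welch

FS = 250  # EEG sampling rate in Hz (assumed)

def band_power(eeg, fs, lo, hi):
    """Mean power spectral density within [lo, hi] Hz."""
    freqs, psd = welch(eeg, fs=fs, nperseg=fs * 2)
    mask = (freqs >= lo) & (freqs <= hi)
    return psd[mask].mean()

def log_ratio(condition, baseline, fs=FS, band=(8, 13)):
    """log10(condition power / baseline power); < 0 means suppression."""
    return np.log10(band_power(condition, fs, *band) /
                    band_power(baseline, fs, *band))

# Example with synthetic data standing in for a central electrode (e.g., C3):
rng = np.random.default_rng(0)
baseline = rng.standard_normal(FS * 10)
condition = rng.standard_normal(FS * 10)
print("mu (8-13 Hz):       ", log_ratio(condition, baseline, band=(8, 13)))
print("low beta (15-20 Hz):", log_ratio(condition, baseline, band=(15, 20)))
```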

7.
Flaherty, M., & Senghas, A. (2011). Cognition, 121(3), 427–436.
What abilities are entailed in being numerate? Certainly, one is the ability to hold the exact quantity of a set in mind, even as it changes, and even after its members can no longer be perceived. Is counting language necessary to track and reproduce exact quantities? Previous work with speakers of languages that lack number words involved participants only from non-numerate cultures. Deaf Nicaraguan adults all live in a richly numerate culture, but vary in counting ability, allowing us to experimentally differentiate the contribution of these two factors. Thirty deaf and 10 hearing participants performed 11 one-to-one matching and counting tasks. Results suggest that immersion in a numerate culture is not enough to make one fully numerate. A memorized sequence of number symbols is required, though even an unconventional, iconic system is sufficient. Additionally, we find that within a numerate culture, the ability to track precise quantities can be acquired in adulthood.

8.
Perception of American Sign Language (ASL) handshape and place of articulation parameters was investigated in three groups of signers: deaf native signers, deaf non-native signers who acquired ASL between the ages of 10 and 18, and hearing non-native signers who acquired ASL as a second language between the ages of 10 and 26. Participants were asked to identify and discriminate dynamic synthetic signs in forced-choice identification and similarity judgement tasks. No differences were found in identification performance, but there were effects of language experience on discrimination of the handshape stimuli. Participants were significantly less likely to discriminate handshape stimuli drawn from the region of the category prototype than stimuli that were peripheral to the category or that straddled a category boundary. This pattern was significant for both groups of deaf signers, but was more pronounced for the native signers. The hearing L2 signers exhibited a similar pattern of discrimination, but results did not reach significance. An effect of category structure on the discrimination of place of articulation stimuli was also found, but it did not interact with language background. We conclude that early experience with a signed language magnifies the influence of category prototypes on the perceptual processing of handshape primes, leading to differences in the distribution of attentional resources between native and non-native signers during language comprehension.

9.
Gesture facilitates language production, but there is debate surrounding its exact role. It has been argued that gestures lighten the load on verbal working memory (VWM; Goldin-Meadow, Nusbaum, Kelly, & Wagner, 2001), but gestures have also been argued to aid in lexical retrieval (Krauss, 1998). In the current study, 50 speakers completed an individual differences battery that included measures of VWM and lexical retrieval. To elicit gesture, each speaker described short cartoon clips immediately after viewing. Measures of lexical retrieval did not predict spontaneous gesture rates, but lower VWM was associated with higher gesture rates, suggesting that gestures can facilitate language production by supporting VWM when resources are taxed. These data also suggest that individual variability in the propensity to gesture is partly linked to cognitive capacities.

10.
Successful face-to-face communication involves multiple channels, notably hand gestures in addition to speech for spoken language, and mouth patterns in addition to manual signs for sign language. In four experiments, we assess the extent to which comprehenders of British Sign Language (BSL) and English rely, respectively, on cues from the hands and the mouth in accessing meaning. We created congruent and incongruent combinations of BSL manual signs and mouthings and English speech and gesture by video manipulation and asked participants to carry out a picture-matching task. When participants were instructed to pay attention only to the primary channel, incongruent “secondary” cues still affected performance, showing that these are reliably used for comprehension. When both cues were relevant, the languages diverged: Hand gestures continued to be used in English, but mouth movements did not in BSL. Moreover, non-fluent speakers and signers varied in the use of these cues: Gestures were found to be more important for non-native than native speakers; mouth movements were found to be less important for non-fluent signers. We discuss the results in terms of the information provided by different communicative channels, which combine to provide meaningful information.

11.
Co-speech gestures embody a form of manual action that is tightly coupled to the language system. As such, the co-occurrence of speech and co-speech gestures is an excellent example of the interplay between language and action. There are, however, other ways in which language and action can be thought of as closely related. In this paper we give an overview of studies in cognitive neuroscience that examine the neural underpinnings of links between language and action. Topics include neurocognitive studies of motor representations of speech sounds, action-related language, sign language and co-speech gestures. We conclude that there is strong evidence for the interaction between speech and gestures in the brain. This interaction, however, shares general properties with other domains in which there is interplay between language and action.

12.
Echo phonology was originally proposed to account for obligatory coordination of manual and mouth articulations observed in several sign languages. However, previous research into the phenomenon lacks clear criteria for which components of movement can or must be copied when the articulators are so different. Nor is there discussion of which nonmanual articulators can echo manual movement. Given the prosodic properties of echoes (coordination of onset/offset and of dynamics such as speed) as well as general motoric coordination of various articulators in the human body, we expect that the mouth is not the only nonmanual articulator involved in echo phonology. In this study, we look at a fixed set of lexical items across 36 sign languages and establish that the head can echo manual movement with respect to timing and to the axis/axes of manual movement. We propose that what matters in echo phonology is the visual percept of temporally coordinated movement that repeats a salient movement property in such a way as to give the visual impression of a copy. Our findings suggest that echoes are not obligatory motor couplings of two or more articulators but may enhance phonological distinctions that are otherwise difficult to see.

13.
The received opinion is that symbol is an evolutionary prerequisite for syntax. This paper shows two things: 1) symbol is not a monolithic phenomenon, and 2) symbol and syntax must have co-evolved. I argue that full-blown syntax requires only three building blocks: signs, concatenation, and grammar (constraints on concatenation). Functional dependencies between the blocks suggest a four-stage model of syntactic evolution, compatible with several earlier scenarios: (1) signs, (2) an increased number of signs, (3) commutative concatenation of signs, (4) grammatical (noncommutative) concatenation of signs. The main claim of the paper is that symbolic reference comprises up to five distinct interpretative correlates: mental imagery, denotation, paradigmatic connotation, syntagmatic connotation, and definition. I show that the correlates form an evolutionary sequence, some stages of which can be aligned with certain stages of syntactic evolution.

14.
I discuss language forms as the primary means that language communities provide to enable public language use. As such, they are adapted to public use most notably in being linguistically significant vocal tract actions, not the categories in the mind as proposed in phonological theories. Their primary function is to serve as vehicles for production of syntactically structured sequences of words. However, more than that, phonological actions themselves do work in public language use. In particular, they foster interpersonal coordination in social activities. An intriguing property of language forms that likely reflects their emergence in social communicative activities is that phonological forms that should be meaningless (in order to serve their role in the openness of language at the level of the lexicon) are not wholly meaningless. In fact, the form-meaning “rift” is bridged bidirectionally: The smallest language forms are meaningful, and the meanings of lexical language forms generally inhere, in part, in their embodiment by understanders.

15.
16.
In this article we discuss the notion of a linguistic universal and possible sources of such invariant properties of natural languages. In the first part, we explore the conceptual issues that arise. In the second part of the paper, we focus on the explanatory potential of horizontal evolution, concentrating on two case studies concerning Zipf’s Law and universal properties of color terms, respectively. We show how computer simulations can be employed to study the large-scale, emergent consequences of psychologically motivated assumptions about the workings of horizontal language transmission.
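The kind of horizontal-transmission simulation mentioned above can be illustrated with a minimal naming game, a standard toy model in which agents converge on a shared vocabulary purely through pairwise (horizontal) interactions; the model and its parameters below are illustrative assumptions, not the paper's actual simulations.

```python
# Minimal naming-game sketch of horizontal transmission: agents invent and
# exchange names for a single object until the population converges on one
# shared form. An illustrative toy model, not the paper's own simulation.
import random

N_AGENTS = 50
agents = [set() for _ in range(N_AGENTS)]  # each agent's inventory of names
next_name = 0

random.seed(1)
for step in range(100_000):
    speaker, hearer = random.sample(range(N_AGENTS), 2)
    if not agents[speaker]:                 # invent a new name if needed
        agents[speaker].add(next_name)
        next_name += 1
    name = random.choice(tuple(agents[speaker]))
    if name in agents[hearer]:              # success: both keep only this name
        agents[speaker] = {name}
        agents[hearer] = {name}
    else:                                   # failure: hearer learns the name
        agents[hearer].add(name)
    if all(a == agents[0] and len(a) == 1 for a in agents):
        print(f"converged on one name after {step + 1} interactions")
        break
```

After enough interactions the whole population settles on a single name, a population-level regularity that no individual agent was given in advance, which is the sense of "emergent consequences" at stake here.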

17.
The "ba, ba, ba" sound universal to babies' babbling around 7 months captures scientific attention because it provides insights into the mechanisms underlying language acquisition and vestiges of its evolutionary origins. Yet the prevailing mystery is the biological basis of babbling: one hypothesis holds that it is a non-linguistic motoric activity driven largely by the baby's emerging control over the mouth and jaw, another that it is a linguistic activity reflecting babies' early sensitivity to specific phonetic-syllabic patterns. Two groups of hearing babies were studied over time (ages 6, 10, and 12 months), equal in all developmental respects except for the modality of language input (mouth versus hand): three hearing babies acquiring spoken language (group 1: "speech-exposed") and a rare group of three hearing babies acquiring sign language only, not speech (group 2: "sign-exposed"). Despite this latter group's exposure to sign, the motoric hypothesis would predict hand activity similar to that seen in speech-exposed hearing babies, because language acquisition in sign-exposed babies does not involve the mouth. Using innovative quantitative Optotrak 3-D motion-tracking technology, applied here for the first time to study infant language acquisition, we obtained physical measurements similar to a speech spectrogram, but for the hands. We discovered that the specific rhythmic frequencies of the hands of the sign-exposed hearing babies differed depending on whether they were producing linguistic activity, which they produced at a low frequency of approximately 1 Hz, versus non-linguistic activity, which they produced at a higher frequency of approximately 2.5 Hz, the identical class of hand activity that the speech-exposed hearing babies produced nearly exclusively. Surprisingly, without benefit of the mouth, hearing sign-exposed babies alone babbled systematically on their hands. We conclude that babbling is fundamentally a linguistic activity, and explain why the differentiation between linguistic and non-linguistic hand activity in a single manual modality (one distinct from the human mouth) could only have resulted if all babies are born with a sensitivity to specific rhythmic patterns at the heart of human language and the capacity to use them.
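As an illustration of how a dominant rhythmic frequency can be recovered from a hand-position trace, here is a minimal sketch in the spirit of the "spectrogram for the hands" described above. The sampling rate and the periodogram method are assumptions for illustration; the study's actual Optotrak analysis pipeline is not reproduced here.

```python
# Minimal sketch: estimate the dominant rhythmic frequency of a 1-D
# hand-position time series. Sampling rate and the periodogram method are
# illustrative assumptions, not the study's actual pipeline.
import numpy as np
from scipy.signal import periodogram

FS = 60.0  # motion-tracker sampling rate in Hz (assumed)

def dominant_frequency(position, fs=FS):
    """Return the peak frequency (Hz) of a 1-D position trace."""
    detrended = position - position.mean()   # remove the DC offset
    freqs, power = periodogram(detrended, fs=fs)
    return freqs[power.argmax()]

# Synthetic example: a ~1 Hz oscillation, like the linguistic hand activity
# described above, plus measurement noise.
t = np.arange(0, 30, 1 / FS)
rng = np.random.default_rng(0)
trace = np.sin(2 * np.pi * 1.0 * t) + 0.3 * rng.standard_normal(t.size)
print(f"dominant frequency: {dominant_frequency(trace):.2f} Hz")  # ~1.0 Hz
```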

18.
We report the results of an experiment investigating the ramifications of using space to express coreference in American Sign Language (ASL). Nominals in ASL can be associated with locations in signing space, and pronouns are directed toward those locations to convey coreference. A probe recognition technique was used to investigate the case of "locus doubling," in which a single referent is associated with two distinct spatial locations. The experiment explored whether an ASL pronoun activates both its antecedent referent and the location associated with that referent. An introductory discourse associated a referent (e.g., MOTHER) with two distinct locations (e.g., STORE-left, KITCHEN-right), and a continuation sentence followed that either contained a pronoun referring to the referent in one location or contained no anaphora (the control sentence). Twenty-four deaf participants made lexical decisions to probe signs presented during the continuation sentences. The probe signs were either the referent of the pronoun, the referent-location determined by the pronoun, or the most recently mentioned location (not referenced by the pronoun). The results indicated that response times to referent nouns were faster in the pronoun than in the no-pronoun control condition and that response times to the location signs did not differ across conditions. Thus, the spatial nature of coreference in ASL does not alter the processing mechanism underlying the on-line interpretation of pronouns. Pronouns activate only referent nouns, not the spatial location nouns associated with the referent.

19.
Evidence over the last 15 years has suggested that dual (imagery and verbal) coding explanations of concreteness effects in memory for word lists do not generalise well to memory for sentences and paragraphs. In contrast, an alternative framework based on relative differences in relational and distinctive processing has been shown to account for the effects of imagery and concreteness in these contexts and others. This paper describes recent research on free and cued recall of word lists and evaluates it with respect to the two models. The evidence suggests that whereas dual processing systems may be involved in the encoding of verbal materials, dual memory codes are insufficient to explain concreteness effects in recall. Better memory for high- as compared to low-imagery words depends on the use of paradigms that facilitate inter-item relational processing, independent of whether or not imagery is involved.

20.
Motor theories of speech perception have been revitalized as a consequence of the discovery of mirror neurons. Some authors have even promoted a strong version of the motor theory, arguing that the motor speech system is critical for perception. Part of the evidence cited in favor of this claim is the observation from the early 1980s that individuals with Broca’s aphasia, and therefore inferred damage to Broca’s area, can have deficits in speech sound discrimination. Here we re-examine this issue in 24 patients with radiologically confirmed lesions to Broca’s area and various degrees of associated non-fluent speech production. Patients performed two same-different discrimination tasks involving pairs of CV syllables: one in which both CVs were presented auditorily, and one in which one syllable was presented auditorily and the other visually as an orthographic form; word comprehension was also assessed using word-to-picture matching tasks in both auditory and visual forms. Discrimination performance on the all-auditory task was four standard deviations above chance, as measured using d′, and was unrelated to the degree of non-fluency in the patients’ speech production. Performance on the auditory–visual task, however, was worse than, and uncorrelated with, performance on the all-auditory task, and was related to the degree of speech non-fluency. Word comprehension was at ceiling for the auditory version (97% accuracy) and near ceiling for the orthographic version (90% accuracy). We conclude that the motor speech system is not necessary for speech perception as measured by both discrimination and comprehension paradigms, but may play a role in orthographic decoding or in auditory–visual matching of phonological forms.
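For reference, here is a minimal sketch of how d′ is computed for a same-different discrimination task like the ones above; the trial counts in the example are invented for illustration only.

```python
# Minimal sketch of d-prime for a same-different discrimination task:
# d' = z(hit rate) - z(false-alarm rate). The counts below are invented
# for illustration only.
from scipy.stats import norm

def d_prime(hits, misses, false_alarms, correct_rejections):
    """Signal-detection sensitivity, with a standard log-linear correction
    to avoid infinite z-scores when a rate would be exactly 0 or 1."""
    hit_rate = (hits + 0.5) / (hits + misses + 1)
    fa_rate = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1)
    return norm.ppf(hit_rate) - norm.ppf(fa_rate)

# Example: of 60 "different" trials, 55 correctly judged different;
# of 60 "same" trials, 8 incorrectly judged different.
print(f"d' = {d_prime(55, 5, 8, 52):.2f}")
```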
