Similar Articles
20 similar articles found.
1.
2.
Sighted speakers of different languages vary systematically in how they package and order components of a motion event in speech. These differences influence how semantic elements are organized in gesture, but only when those gestures are produced with speech (co‐speech gesture), not without speech (silent gesture). We ask whether the cross‐linguistic similarity in silent gesture is driven by the visuospatial structure of the event. We compared 40 congenitally blind adult native speakers of English or Turkish (20/language) to 80 sighted adult speakers (40/language; half with, half without blindfolds) as they described three‐dimensional motion scenes. We found an effect of language on co‐speech gesture, not on silent gesture—blind speakers of both languages organized their silent gestures as sighted speakers do. Humans may have a natural semantic organization that they impose on events when conveying them in gesture without language—an organization that relies on neither visuospatial cues nor language structure.

3.
Co-speech gestures embody a form of manual action that is tightly coupled to the language system. As such, the co-occurrence of speech and co-speech gestures is an excellent example of the interplay between language and action. There are, however, other ways in which language and action can be thought of as closely related. In this paper we will give an overview of studies in cognitive neuroscience that examine the neural underpinnings of links between language and action. Topics include neurocognitive studies of motor representations of speech sounds, action-related language, sign language and co-speech gestures. It will be concluded that there is strong evidence for an interaction between speech and gestures in the brain. This interaction, however, shares general properties with other domains in which there is interplay between language and action.

4.
Intentional and attentional dynamics of speech-hand coordination
Interest is rapidly growing in the hypothesis that natural language emerged from a more primitive set of linguistic acts based primarily on manual activity and hand gestures. Increasingly, researchers are investigating how hemispheric asymmetries are related to attentional and manual asymmetries (i.e., handedness). Both speech perception and production have origins in the dynamical generative movements of the vocal tract known as articulatory gestures. Thus, the notion of a "gesture" can be extended to both hand movements and speech articulation. The generative actions of the hands and vocal tract can therefore provide a basis for the (direct) perception of linguistic acts. Such gestures are best described using the methods of dynamical systems analysis since both perception and production can be described using the same commensurate language. Experiments were conducted using a phase transition paradigm to examine the coordination of speech-hand gestures in both left- and right-handed individuals. Results address coordination (in-phase vs. anti-phase), hand (left vs. right), lateralization (left vs. right hemisphere), focus of attention (speech vs. tapping), and how dynamical constraints provide a foundation for human communicative acts. Predictions from the asymmetric HKB equation confirm the attentional basis of functional asymmetry. Of significance is a new understanding of the role of perceived synchrony (p-centres) during intentional cases of gestural coordination.
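For context on the model invoked above, the sketch below gives one common form of the asymmetric HKB (Haken-Kelso-Bunz) relative-phase equation. The abstract does not report the study's specific parameterization or any attention-related terms, so the coefficients a, b and the detuning term δω here are illustrative assumptions, not the authors' equation.

```latex
% Symmetric HKB potential for the relative phase \phi between two rhythmically
% moving effectors (e.g., a tapping hand and a speech articulator):
V(\phi) = -a\cos\phi - b\cos 2\phi
% The phase dynamics follow the negative gradient of V. The asymmetric
% (extended) form adds a detuning term \delta\omega that breaks the symmetry
% when the two components have unequal natural frequencies:
\dot{\phi} = \delta\omega - a\sin\phi - 2b\sin 2\phi
% In-phase (\phi \approx 0) and anti-phase (\phi \approx \pi) coordination are
% stable fixed points at low rates; as b/a shrinks with increasing movement
% rate, the anti-phase attractor disappears, predicting a phase transition
% to in-phase coordination.
```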

5.
Speech perception, especially in noise, may be maximized if the perceiver observes the naturally occurring visual-plus-auditory cues inherent in the production of spoken language. Evidence is conflicting, however, about which aspects of visual information mediate enhanced speech perception in noise. For this reason, we investigated the relative contributions of audibility and the type of visual cue in three experiments in young adults with normal hearing and vision. Relative to static visual cues, access to the talker's phonetic gestures in speech production, especially in noise, was associated with (a) faster response times and sensitivity for speech understanding in noise, and (b) shorter latencies and reduced amplitudes of auditory N1 event-related potentials. Dynamic chewing facial motion also decreased the N1 latency, but only meaningful linguistic motions reduced the N1 amplitude. The hypothesis that auditory-visual facilitation is distinct to properties of natural, dynamic speech gestures was partially supported.

6.
Visible embodiment: Gestures as simulated action
Spontaneous gestures that accompany speech are related to both verbal and spatial processes. We argue that gestures emerge from perceptual and motor simulations that underlie embodied language and mental imagery. We first review current thinking about embodied cognition, embodied language, and embodied mental imagery. We then provide evidence that gestures stem from spatial representations and mental images. We then propose the gestures-as-simulated-action framework to explain how gestures might arise from an embodied cognitive system. Finally, we compare this framework with other current models of gesture production, and we briefly outline predictions that derive from the framework.

7.
What features of brain processing and neural development support linguistic development in young children? To what extent is the profile and timing of linguistic development in young children determined by a pre-ordained genetic programme? Does the environment play a crucial role in determining the patterns of change observed in children growing up? Recent experimental, neuroimaging and computational studies of developmental change in children promise to contribute to a deeper understanding of how the brain becomes wired up for language. In this review, the multidisciplinary perspectives of cognitive neuroscience, experimental psycholinguistics and neural network modelling are brought to bear on four distinct areas in the study of language acquisition: early speech perception, word recognition, word learning and the acquisition of grammatical inflections. It is suggested that each area demonstrates how linguistic development can be driven by the interaction of general learning mechanisms, highly sensitive to particular statistical regularities in the input, with a richly structured environment which provides the necessary ingredients for the emergence of linguistic representations that support mature language processing. Similar epigenetic principles, guiding the emergence of linguistic structure, apply to all these domains, offering insights into phenomena ranging from the precocity of young infants' sensitivity to speech contrasts to the complexities of the problem facing the young child learning the Arabic plural.

8.
The functional neuroanatomy of speech perception has been difficult to characterize. Part of the difficulty, we suggest, stems from the fact that the neural systems supporting 'speech perception' vary as a function of the task. Specifically, the set of cognitive and neural systems involved in performing traditional laboratory speech perception tasks, such as syllable discrimination or identification, only partially overlap those involved in speech perception as it occurs during natural language comprehension. In this review, we argue that cortical fields in the posterior-superior temporal lobe, bilaterally, constitute the primary substrate for constructing sound-based representations of speech, and that these sound-based representations interface with different supramodal systems in a task-dependent manner. Tasks that require access to the mental lexicon (i.e. accessing meaning-based representations) rely on auditory-to-meaning interface systems in the cortex in the vicinity of the left temporal-parietal-occipital junction. Tasks that require explicit access to speech segments rely on auditory-motor interface systems in the left frontal and parietal lobes. This auditory-motor interface system also appears to be recruited in phonological working memory.

9.
Children produce their first gestures before their first words, and their first gesture+word sentences before their first word+word sentences. These gestural accomplishments have been found not only to predate linguistic milestones, but also to predict them. Findings of this sort suggest that gesture itself might be playing a role in the language‐learning process. But what role does it play? Children's gestures could elicit from their mothers the kinds of words and sentences that the children need to hear in order to take their next linguistic step. We examined maternal responses to the gestures and speech that 10 children produced during the one‐word period. We found that all 10 mothers ‘translated’ their children's gestures into words, providing timely models for how one‐ and two‐word ideas can be expressed in English. Gesture thus offers a mechanism by which children can point out their thoughts to mothers, who then calibrate their speech to those thoughts, and potentially facilitate language‐learning.

10.
Time in the mind: using space to think about time
Casasanto D, Boroditsky L. Cognition, 2008, 106(2): 579-593.
How do we construct abstract ideas like justice, mathematics, or time-travel? In this paper we investigate whether mental representations that result from physical experience underlie people's more abstract mental representations, using the domains of space and time as a testbed. People often talk about time using spatial language (e.g., a long vacation, a short concert). Do people also think about time using spatial representations, even when they are not using language? Results of six psychophysical experiments revealed that people are unable to ignore irrelevant spatial information when making judgments about duration, but not the converse. This pattern, which is predicted by the asymmetry between space and time in linguistic metaphors, was demonstrated here in tasks that do not involve any linguistic stimuli or responses. These findings provide evidence that the metaphorical relationship between space and time observed in language also exists in our more basic representations of distance and duration. Results suggest that our mental representations of things we can never see or touch may be built, in part, out of representations of physical experiences in perception and motor action.

11.
The paper reports on two experiments with the head turn preference method which provide evidence that already at 7 to 9 months, but not yet at 6 months, German‐learning infants do recognize unstressed closed‐class lexical elements in continuous speech. These findings support the view that even preverbal children are able to compute at least phonological representations for closed‐class functional elements. They also suggest that these elements must be available to the language learning mechanisms of the child from very early on, allowing the child to make use of the distributional properties of closed‐class lexical elements for further top‐down analysis of the linguistic input, e.g. segmentation and syntactic categorization.

12.
Embodied experience and linguistic meaning
What role do people's embodied experiences have in their use and understanding of meaning? Most theories in cognitive science view meaning in terms of propositional structures that may be combined to form higher-order complexes in representing the meanings of conversations and texts. A newer approach seeks to capture meaning in terms of high-dimensional semantic space. Both views reflect the idea that meaning is best understood as abstract and disembodied symbols. My aim in this article is to make the case for an embodied view of linguistic meaning. This view provides a challenge to traditional approaches to linguistic meaning (although it may not necessarily be entirely incompatible with them). I discuss several new lines of research from both linguistics and psychology that explore the importance of embodied perception and action in people's understanding of words, phrases, and texts. These data provide strong evidence in favor of the idea that significant aspects of thought and language arise from, and are grounded in, embodiment.

13.
李恒. 《心理科学》, 2016, 39(5): 1080-1085.
Early research on the psychological reality of space-time metaphor representations was widely criticized for being confined to spoken language; the recent rise of gesture and sign language research offers new perspectives and evidence on this question. On the one hand, gestures can form space-time metaphors along all three spatial dimensions, which directly answers critics' charge that conceptual metaphor theory argues circularly between the linguistic and conceptual levels. On the other hand, the distinctive spatial properties of sign languages and the complexity of their cultural schemas give rise to more varied forms of space-time metaphor, providing richer typological evidence for this field. Future research should draw together psychology, linguistics, ethnology, and related disciplines to build a more general and more systematic theoretical framework that encompasses spoken language, gesture, and sign language alike.

14.
Language can be understood as an embodied system, expressible as gestures. Perception of these gestures depends on the “mirror system,” first discovered in monkeys, in which the same neural elements respond both when the animal makes a movement and when it perceives the same movement made by others. This system allows gestures to be understood in terms of how they are produced, as in the so-called motor theory of speech perception. I argue that human speech evolved from manual gestures, with vocal gestures being gradually incorporated into the mirror system in the course of hominin evolution. Speech may have become the dominant mode only with the emergence of Homo sapiens some 170,000 years ago, although language as a relatively complex syntactic system probably emerged over the past 2 million years, initially as a predominantly manual system. Despite the present-day dominance of speech, manual gestures accompany speech, and visuomanual forms of language persist in signed languages of the deaf, in handwriting, and even in such forms as texting.

15.
When children learn language, they apply their language-learning skills to the linguistic input they receive. But what happens if children are not exposed to input from a conventional language? Do they engage their language-learning skills nonetheless, applying them to whatever unconventional input they have? We address this question by examining gesture systems created by four American and four Chinese deaf children. The children's profound hearing losses prevented them from learning spoken language, and their hearing parents had not exposed them to sign language. Nevertheless, the children in both cultures invented gesture systems that were structured at the morphological/word level. Interestingly, the differences between the children's systems were no bigger across cultures than within cultures. The children's morphemes could not be traced to their hearing mothers' gestures; however, they were built out of forms and meanings shared with their mothers. The findings suggest that children construct morphological structure out of the input that is handed to them, even if that input is not linguistic in form.

16.
Embodied agents use bodily actions and environmental interventions to make the world a better place to think in. Where does language fit into this emerging picture of the embodied, ecologically efficient agent? One useful way to approach this question is to consider language itself as a cognition-enhancing animal-built structure. To take this perspective is to view language as a kind of self-constructed cognitive niche: a persisting but never stationary material scaffolding whose crucial role in promoting thought and reason remains surprisingly poorly understood. It is the very materiality of this linguistic scaffolding, I suggest, that gives it some key benefits. By materializing thought in words, we create structures that are themselves proper objects of perception, manipulation, and (further) thought.

17.
Coarticulatory acoustic variation is presumed to be caused by temporally overlapping linguistically significant gestures of the vocal tract. The complex acoustic consequences of such gestures can be hypothesized to specify them without recourse to context-sensitive representations of phonetic segments. When the consequences of separate gestures converge on a common acoustic dimension (e.g., fundamental frequency), perceptual parsing of the acoustic consequences of overlapping spoken gestures, rather than associations of acoustic features, is required to resolve the distinct gestural events. Direct tests of this theory were conducted. These tests revealed mutual influences of (1) fundamental frequency during a vowel on prior consonant perception, and (2) consonant identity on following vowel stress and pitch perception. The results of these converging tests lead to the conclusion that speech perception involves a process in which acoustic information for coarticulated gestures is parsed from the stream of speech.

18.
This study investigated whether the quality and specification of phonological representations in early language development would predict later skilled reading. Two perceptual identification experiments were performed with skilled readers. In Experiment 1, spelling difficulties in Grade 1 were used as a proxy measure for poorly specified representations in early language development. In Experiment 2, difficulties in perceiving and representing liquid and nasalized phonemes in final consonant clusters were used for the same purpose. Both experiments showed that words that were more likely to develop underspecified lexical representations in early language development remained more difficult in skilled reading. This finding suggests that early linguistic difficulties in speech perception and structuring of lexical representations may constrain the long-term organization and dynamics of the skilled adult reading system. The present data thus challenge the assumption that skilled reading can be fully understood without taking into account linguistic constraints acting upon the beginning reader.

19.
We studied how gesture use changes with culture, age and increased spoken language competence. A picture-naming task was presented to British (N = 80) and Finnish (N = 41) typically developing children aged 2–5 years. British children were found to gesture more than Finnish children and, in both cultures, gesture production decreased after the age of two. Two-year-olds used more deictic than iconic gestures compared with older children, and gestured more before the onset of speech, rather than simultaneously or after speech. The British 3- and 5-year-olds gestured significantly more when naming praxic (manipulable) items than non-praxic items. Our results support the view that gesture serves a communicative and intrapersonal function, and the relative function may change with age. Speech and language therapists and psychologists observe the development of children’s gestures and make predictions on the basis of their frequency and type. To prevent drawing erroneous conclusions about children’s linguistic development, it is important to understand developmental and cultural variations in gesture use.

20.
People naturally move their heads when they speak, and our study shows that this rhythmic head motion conveys linguistic information. Three-dimensional head and face motion and the acoustics of a talker producing Japanese sentences were recorded and analyzed. The head movement correlated strongly with the pitch (fundamental frequency) and amplitude of the talker's voice. In a perception study, Japanese subjects viewed realistic talking-head animations based on these movement recordings in a speech-in-noise task. The animations allowed the head motion to be manipulated without changing other characteristics of the visual or acoustic speech. Subjects correctly identified more syllables when natural head motion was present in the animation than when it was eliminated or distorted. These results suggest that nonverbal gestures such as head movements play a more direct role in the perception of speech than previously known.
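As a rough illustration of the kind of analysis described above (correlating rigid head motion with the voice's fundamental frequency), the minimal sketch below uses synthetic, hypothetical signals; the actual motion-capture data, sampling rates, and preprocessing used in the study are not specified here, so every array and number in the example is an assumption.

```python
# Minimal sketch: correlating head motion with voice pitch (F0).
# The signals below are synthetic stand-ins for the recordings described
# in the abstract; they are not the study's data.
import numpy as np

def pearson_r(x: np.ndarray, y: np.ndarray) -> float:
    """Pearson correlation between two equally sampled 1-D signals."""
    x = x - x.mean()
    y = y - y.mean()
    return float(np.dot(x, y) / (np.linalg.norm(x) * np.linalg.norm(y)))

# Hypothetical data: head pitch rotation (degrees) and F0 (Hz), resampled to a
# common frame rate and restricted to voiced frames.
rng = np.random.default_rng(0)
f0 = 120 + 20 * np.sin(np.linspace(0, 6 * np.pi, 500)) + rng.normal(0, 2, 500)
head_pitch = 0.05 * (f0 - f0.mean()) + rng.normal(0, 0.5, 500)

print(f"head pitch vs. F0: r = {pearson_r(head_pitch, f0):.2f}")
```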
