Similar Literature
 20 similar documents found (search time: 31 ms)
1.
Spontaneous beat gestures are an integral part of the paralinguistic context during face-to-face conversations. Here we investigated the time course of beat-speech integration in speech perception by measuring ERPs evoked by words pronounced with or without an accompanying beat gesture, while participants watched a spoken discourse. Words accompanied by beats elicited a positive shift in ERPs at an early sensory stage (before 100 ms) and at a later time window coinciding with the auditory component P2. The same word tokens produced no ERP differences when participants listened to the discourse without view of the speaker. We conclude that beat gestures are integrated with speech early on in time and modulate sensory/phonological levels of processing. The present results support the possible role of beats as a highlighter, helping the listener to direct the focus of attention to important information and modulate the parsing of the speech stream.

2.
Three experiments tested the role of verbal versus visuo-spatial working memory in the comprehension of co-speech iconic gestures. In Experiment 1, participants viewed congruent discourse primes in which the speaker's gestures matched the information conveyed by his speech, and incongruent ones in which the semantic content of the speaker's gestures diverged from that in his speech. Discourse primes were followed by picture probes that participants judged as being either related or unrelated to the preceding clip. Performance on this picture probe classification task was faster and more accurate after congruent than incongruent discourse primes. The effect of discourse congruency on response times was linearly related to measures of visuo-spatial, but not verbal, working memory capacity, as participants with greater visuo-spatial WM capacity benefited more from congruent gestures. In Experiments 2 and 3, participants performed the same picture probe classification task under conditions of high and low loads on concurrent visuo-spatial (Experiment 2) and verbal (Experiment 3) memory tasks. Effects of discourse congruency and verbal WM load were additive, while effects of discourse congruency and visuo-spatial WM load were interactive. Results suggest that congruent co-speech gestures facilitate multi-modal language comprehension, and indicate an important role for visuo-spatial WM in these speech–gesture integration processes.

3.
In 5 experiments, male and female undergraduates viewed gestures and tried to select the words that originally accompanied them; read interpretations of gestures' meanings and tried to select the words that originally had accompanied them; tried to recognize gestures they previously had seen, presented either with or without the accompanying speech; and assigned gestures and the accompanying speech to semantic categories. On all 4 tasks, performance was better than chance but markedly inferior to performance when words were used as stimuli. Judgments of a gesture's semantic category were determined principally by the accompanying speech rather than gestural form. It is concluded that although gestures can convey some information, they are not richly informative, and the information they convey is largely redundant with speech.

4.
Wu YC, Coulson S. Brain and Language, 2007, 101(3): 234-245
EEG was recorded as adults watched short segments of spontaneous discourse in which the speaker's gestures and utterances contained complementary information. Videos were followed by one of four types of picture probes: cross-modal related probes were congruent with both speech and gestures; speech-only related probes were congruent with information in the speech, but not the gesture; and two sorts of unrelated probes were created by pairing each related probe with a different discourse prime. Event-related potentials (ERPs) elicited by picture probes were measured within the time windows of the N300 (250-350 ms post-stimulus) and N400 (350-550 ms post-stimulus). Cross-modal related probes elicited smaller N300 and N400 than speech-only related ones, indicating that pictures were easier to interpret when they corresponded with gestures. N300 and N400 effects were not due to differences in the visual complexity of each probe type, since the same cross-modal and speech-only picture probes elicited N300 and N400 with similar amplitudes when they appeared as unrelated items. These findings extend previous research on gesture comprehension by revealing how iconic co-speech gestures modulate conceptualization, enabling listeners to better represent visuo-spatial aspects of the speaker's meaning.

5.
Speech-associated gestures, Broca's area, and the human mirror system
Speech-associated gestures are hand and arm movements that not only convey semantic information to listeners but are themselves actions. Broca's area has been assumed to play an important role both in semantic retrieval or selection (as part of a language comprehension system) and in action recognition (as part of a "mirror" or "observation-execution matching" system). We asked whether the role that Broca's area plays in processing speech-associated gestures is consistent with the semantic retrieval/selection account (predicting relatively weak interactions between Broca's area and other cortical areas because the meaningful information that speech-associated gestures convey reduces semantic ambiguity and thus reduces the need for semantic retrieval/selection) or the action recognition account (predicting strong interactions between Broca's area and other cortical areas because speech-associated gestures are goal-directed actions that are "mirrored"). We compared the functional connectivity of Broca's area with other cortical areas when participants listened to stories while watching meaningful speech-associated gestures, speech-irrelevant self-grooming hand movements, or no hand movements. A network analysis of neuroimaging data showed that interactions involving Broca's area and other cortical areas were weakest when spoken language was accompanied by meaningful speech-associated gestures, and strongest when spoken language was accompanied by self-grooming hand movements or by no hand movements at all. Results are discussed with respect to the role that the human mirror system plays in processing speech-associated movements.

6.
Recent studies in the psychological literature reveal that cospeech gestures facilitate the construction of an articulated mental model of an oral discourse by hearing individuals. In particular, they facilitate correct recollections and discourse-based inferences at the expense of memory for discourse verbatim. Do gestures accompanying an oral discourse facilitate the construction of a discourse model also by oral deaf individuals trained to lip-read? The atypical cognitive functioning of oral deaf individuals leads to this prediction. Experiments 1 and 2, each conducted on 16 oral deaf individuals, used a recollection task and confirmed the prediction. Experiment 3, conducted on 36 oral deaf individuals, confirmed the prediction using a recognition task.

7.
The design of effective communications depends upon an adequate model of the communication process. The traditional model is that speech conveys semantic information and bodily movement conveys information about emotion and interpersonal attitudes. But McNeill (2000) argues that this model is fundamentally wrong and that some bodily movements, namely spontaneous hand movements generated during talk (iconic gestures), are integral to semantic communication. But can we increase the effectiveness of communication using this new theory? Focusing on advertising, we found that advertisements in which the message was split between speech and iconic gesture (possible on TV) were significantly more effective than advertisements in which meaning resided purely in speech or language (radio/newspaper). We also found that the significant differences in communicative effectiveness were maintained across five consecutive trials. We compared the communicative power of professionally made TV advertisements in which a spoken message was accompanied either by iconic gestures or by pictorial images, and found the iconic gestures to be more effective. We hypothesized that iconic gestures are so effective because they illustrate and isolate just the core semantic properties of a product. This research suggests that TV advertisements can be made more effective by incorporating iconic gestures with exactly the right temporal and semantic properties.

8.
Speech-associated gestures, Broca’s area, and the human mirror system
Speech-associated gestures are hand and arm movements that not only convey semantic information to listeners but are themselves actions. Broca’s area has been assumed to play an important role both in semantic retrieval or selection (as part of a language comprehension system) and in action recognition (as part of a “mirror” or “observation–execution matching” system). We asked whether the role that Broca’s area plays in processing speech-associated gestures is consistent with the semantic retrieval/selection account (predicting relatively weak interactions between Broca’s area and other cortical areas because the meaningful information that speech-associated gestures convey reduces semantic ambiguity and thus reduces the need for semantic retrieval/selection) or the action recognition account (predicting strong interactions between Broca’s area and other cortical areas because speech-associated gestures are goal-directed actions that are “mirrored”). We compared the functional connectivity of Broca’s area with other cortical areas when participants listened to stories while watching meaningful speech-associated gestures, speech-irrelevant self-grooming hand movements, or no hand movements. A network analysis of neuroimaging data showed that interactions involving Broca’s area and other cortical areas were weakest when spoken language was accompanied by meaningful speech-associated gestures, and strongest when spoken language was accompanied by self-grooming hand movements or by no hand movements at all. Results are discussed with respect to the role that the human mirror system plays in processing speech-associated movements.

9.
10.

A video-taped model presented subjects with sets of sentences to be free-recalled under three presentation conditions: (1) accompanied by pantomimic gestures; (2) accompanied by non-pantomimic gestures; and (3) no gestures present. When the sentences formed a narrative, the gestures did not affect recall. When the sentences were unconnected, recall was higher for the gestured than for the non-gestured sentences. The pantomimic and non-pantomimic gestures showed about the same mnemonic effect. The subjects were given a second test, either recall cued by the gestures, or else free recall of the gestured sentences only. The pantomimic conditions were superior to the non-pantomimic conditions in both these tests. Possible explanations for the mnemonic effects of the gestures are discussed.

11.
Previous evidence suggests that children's mastery of prosodic modulations to signal the informational status of discourse referents emerges quite late in development. In the present study, we investigate children's use of head gestures, as compared to prosodic cues, to signal a referent as contrastive relative to a set of possible alternatives. A group of French-speaking pre-schoolers were audio-visually recorded while playing a semi-spontaneous but controlled production task designed to elicit target words in the context of broad focus, contrastive focus, or corrective focus utterances. We analysed the acoustic features of the target words (syllable duration and word-level pitch range), as well as the head gesture features accompanying these target words (head gesture type, alignment patterns with speech). We found that children's production of head gestures, but not their use of either syllable duration or word-level pitch range, was affected by focus condition. Children mostly aligned head gestures with relevant speech units, especially when the target word was in phrase-final position. Moreover, the presence of a head gesture was linked to greater syllable duration in all focus conditions. Our results show that (a) 4- and 5-year-old French-speaking children use head gestures rather than prosodic cues to mark the informational status of discourse referents, (b) the use of head gestures may gradually entrain the production of adult-like prosodic features, and (c) head gestures with no referential relation to speech may serve a linguistic structuring function in communication, at least during language development.

12.
Self-touching gestures can be externally induced by the verbal presentation of anxiety-inducing stimuli and the active discussion of a passage. The frequency of these self-touching gestures appears to be affected by the individual's engagement with the topic, the type of discourse (listening or discussing), the type of stimulus (canaries or leeches), and the interaction between the types of discourse and stimulus. This study assessed these variables as well as the sex of the participant and the order of presentation of stimulus type, neither of which was statistically significant. Participants were read two passages, one about a topic (leeches) expected to produce anxiety and the other about a topic (canaries) not expected to do so, and were asked to answer questions about the passages. The number of self-touches was counted by an observer in another room. Each participant experienced both types of discourse (listening and discussing) and both types of stimulus (canaries and leeches). There was no significant difference in the number of self-touches between participants with the male reader and those with the female reader. Discussion as a method of discourse was associated with a significantly greater number of self-touches than listening. The interaction between discourse type and stimulus type was also significant. The combination of the anxiety-producing stimulus and the active discourse (discussion) produced the highest average number of self-touches.

13.
People with aphasia use gestures not only to communicate relevant content but also to compensate for their verbal limitations. The Sketch Model (De Ruiter, 2000) assumes a flexible relationship between gesture and speech with the possibility of a compensatory use of the two modalities. In the successor of the Sketch Model, the AR-Sketch Model (De Ruiter, 2017), the relationship between iconic gestures and speech is no longer assumed to be flexible and compensatory; instead, iconic gestures are assumed to express information that is redundant to speech. In this study, we evaluated the contradictory predictions of the Sketch Model and the AR-Sketch Model using data collected from people with aphasia as well as a group of people without language impairment. We found compensatory use of gesture only in the people with aphasia, whereas the people without language impairment made very little compensatory use of gestures. Hence, the people with aphasia gestured according to the prediction of the Sketch Model, whereas the people without language impairment did not. We conclude that aphasia fundamentally changes the relationship of gesture and speech.

14.
This research investigated how teachers express links between ideas in speech, gestures, and other modalities in middle school mathematics instruction. We videotaped 18 lessons (3 from each of 6 teachers), and within each, we identified linking episodes: segments of discourse in which the teacher connected mathematical ideas. For each link, we identified the modalities teachers used to express linked ideas and coded whether the content was new or review. Teachers communicated most links multimodally, typically using speech and gestures. Teachers' gestures included depictive gestures that simulated actions and perceptual states, and pointing gestures that grounded utterances in the physical environment. Compared to links about new material, teachers were less likely to express links about review material multimodally, especially when that material had been mentioned previously. Moreover, teachers gestured at a higher rate in links about new material. Gestures are an integral part of teachers' communication during mathematics instruction.

15.
Lexical production in children with Down syndrome (DS) was investigated by examining spoken naming accuracy and the use of spontaneous gestures in a picture naming task. Fifteen children with DS (range 3.8-8.3 years) were compared to typically developing children (TD), matched for chronological age and developmental age (range 2.6-4.3 years). Relative to TD children, children with DS were less accurate in speech (producing a greater number of unintelligible answers), yet they produced more gestures overall, and a significantly higher percentage of these were iconic gestures. Furthermore, the iconic gestures produced by children with DS accompanied by incorrect or no speech often expressed a concept similar to that of the target word, suggesting deeper conceptual knowledge relative to that expressed only in speech.

16.
This article describes the distribution and development of handedness for manual gestures in captive chimpanzees. Data on handedness for unimanual gestures were collected in a sample of 227 captive chimpanzees. Handedness for these gestures was compared with handedness for three other measures of hand use: tool use, reaching, and coordinated bimanual actions. Chimpanzees were significantly more right-handed for gestures than for all other measures of hand use. Hand use for simple reaching at 3 to 4 years of age predicted hand use for gestures 10 years later. Use of the right hand for gestures was significantly higher when gestures were accompanied by a vocalization than when they were not. The collective results suggest that left-hemisphere specialization for language may have evolved initially from asymmetries in manual gestures in the common ancestor of chimpanzees and humans, rather than from hand use associated with other, non-communicative motor actions, including tool use and coordinated bimanual actions, as has been previously suggested in the literature.

17.
Visible embodiment: Gestures as simulated action
Spontaneous gestures that accompany speech are related to both verbal and spatial processes. We argue that gestures emerge from perceptual and motor simulations that underlie embodied language and mental imagery. We first review current thinking about embodied cognition, embodied language, and embodied mental imagery. We then provide evidence that gestures stem from spatial representations and mental images. We then propose the gestures-as-simulated-action framework to explain how gestures might arise from an embodied cognitive system. Finally, we compare this framework with other current models of gesture production, and we briefly outline predictions that derive from the framework.

18.
Neural correlates of bimodal speech and gesture comprehension
The present study examined the neural correlates of speech and hand gesture comprehension in a naturalistic context. Fifteen participants watched audiovisual segments of speech and gesture while event-related potentials (ERPs) were recorded to the speech. Gesture influenced the ERPs to the speech. Specifically, there was a right-lateralized N400 effect, reflecting semantic integration, when gestures mismatched versus matched the speech. In addition, early sensory components in bilateral occipital and frontal sites differentiated speech accompanied by matching versus non-matching gestures. These results suggest that hand gestures may be integrated with speech at early and late stages of language processing.

19.
In order to assess the impact of verbal and nonverbal information on pragmatic response, 16 children aged 15 to 24 months were asked questions that could take either informational or action responses. Conventionalization of linguistic form, gestural accompaniments, and preceding discourse were systematically varied. Children responded in a pragmatically appropriate manner to conventionalized forms. The pragmatic function of the discourse preceding nonconventionalized questions had no effect on children's responses, but gestures affected all categories of response except simple action responses. Older children gave more simultaneous integrative responses than did younger children. Results indicate an increasing ability to coordinate linguistic and nonlinguistic sources of information, but little tendency to integrate across successively presented information. This research was supported by a doctoral fellowship from the Social Sciences and Humanities Research Council of Canada and is based on the author's doctoral dissertation.

20.
Recent research shows that co-speech gestures can influence gesturers' thought. This line of research suggests that the influence of gestures is so strong that it can wash out and even reverse an effect of learning. We argue that these findings need a more robust and ecologically valid test, which we provide in this article. Our results support the claim that gestures not only reflect information in our mental representations, but can also influence gesturers' thought by adding action information to one's mental representation during problem solving (Tower of Hanoi). We show, however, that the effect of gestures on subsequent performance is not as strong as previously suggested. Contrary to what previous research indicates, the facilitative effect of gestures on learning was not nullified by the potentially interfering effect of incompatible gestures on subsequent problem-solving performance. To conclude, using gestures during problem solving seems to provide more benefits than costs for task performance.
