Similar Literature
20 similar articles retrieved.
1.
Three experiments tested the role of verbal versus visuo-spatial working memory in the comprehension of co-speech iconic gestures. In Experiment 1, participants viewed congruent discourse primes in which the speaker's gestures matched the information conveyed by his speech, and incongruent ones in which the semantic content of the speaker's gestures diverged from that in his speech. Discourse primes were followed by picture probes that participants judged as being either related or unrelated to the preceding clip. Performance on this picture probe classification task was faster and more accurate after congruent than incongruent discourse primes. The effect of discourse congruency on response times was linearly related to measures of visuo-spatial, but not verbal, working memory capacity, as participants with greater visuo-spatial WM capacity benefited more from congruent gestures. In Experiments 2 and 3, participants performed the same picture probe classification task under conditions of high and low loads on concurrent visuo-spatial (Experiment 2) and verbal (Experiment 3) memory tasks. Effects of discourse congruency and verbal WM load were additive, while effects of discourse congruency and visuo-spatial WM load were interactive. Results suggest that congruent co-speech gestures facilitate multi-modal language comprehension, and indicate an important role for visuo-spatial WM in these speech–gesture integration processes.

2.
Bub DN, Masson ME, Cree GS. Cognition, 2008, 106(1): 27-58.
We distinguish between grasping gestures associated with using an object for its intended purpose (functional) and those used to pick up an object (volumetric), and we develop a novel experimental framework to show that both kinds of knowledge are automatically evoked by objects and by words denoting those objects. Cued gestures were carried out in the context of depicted objects or visual words. On incongruent trials, the cued gesture was not compatible with gestures typically associated with the contextual item. On congruent trials, the gesture was compatible with the item's functional or volumetric gesture. For both gesture types, response latency was longer for incongruent trials, indicating that objects and words elicited both functional and volumetric manipulation knowledge. Additional evidence, however, clearly supports a distinction between these two kinds of gestural knowledge. Under certain task conditions, functional gestures can be evoked without the associated activation of volumetric gestures. We discuss the implications of these results for theories of action evoked by objects and words, and for the interpretation of functional imaging results.

3.
Successful face-to-face communication involves multiple channels, notably hand gestures in addition to speech for spoken language, and mouth patterns in addition to manual signs for sign language. In four experiments, we assess the extent to which comprehenders of British Sign Language (BSL) and English rely, respectively, on cues from the hands and the mouth in accessing meaning. We created congruent and incongruent combinations of BSL manual signs and mouthings and English speech and gesture by video manipulation and asked participants to carry out a picture-matching task. When participants were instructed to pay attention only to the primary channel, incongruent “secondary” cues still affected performance, showing that these are reliably used for comprehension. When both cues were relevant, the languages diverged: Hand gestures continued to be used in English, but mouth movements did not in BSL. Moreover, non-fluent speakers and signers varied in the use of these cues: Gestures were found to be more important for non-native than native speakers; mouth movements were found to be less important for non-fluent signers. We discuss the results in terms of the information provided by different communicative channels, which combine to provide meaningful information.

4.
In recent years, studies have suggested that gestures influence comprehension of linguistic expressions, for example, eliciting an N400 component in response to a speech/gesture mismatch. In this paper, we investigate the role of gestural information in the understanding of metaphors. Event-related potentials (ERPs) were recorded while participants viewed video clips of an actor uttering metaphorical expressions and producing bodily gestures that were congruent or incongruent with the metaphorical meaning of such expressions. This mode of stimulus presentation allows a more ecological approach to meaning integration. When ERPs were calculated using the gesture stroke as the time-lock event, gesture incongruity with the metaphorical expression modulated the amplitude of the N400 and of the late positive complex (LPC). This suggests that gestural and speech information are combined online to make sense of the interlocutor’s linguistic production in an early stage of metaphor comprehension. Our data favor the idea that meaning construction is globally integrative and highly context-sensitive.

5.
Recognising a facial expression is more difficult when the expresser's body conveys incongruent affect. Existing research has documented such interference for universally recognisable bodily expressions. However, it remains unknown whether learned, conventional gestures can interfere with facial expression processing. Study 1 participants (N = 62) viewed videos of people simultaneously producing facial expressions and hand gestures and reported the valence of either the face or hand. Responses were slower and less accurate when the face-hand pairing was incongruent compared to congruent. We hypothesised that hand gestures might exert an even stronger influence on facial expression processing when other routes to understanding the meaning of a facial expression, such as with sensorimotor simulation, are disrupted. Participants in Study 2 (N = 127) completed the same task, but the facial mobility of some participants was restricted, which disrupted face processing in prior work. The hand-face congruency effect from Study 1 was replicated. The facial mobility manipulation affected males only, and it did not moderate the congruency effect. The present work suggests the affective meaning of conventional gestures is processed automatically and can interfere with face perception, but does not suggest that perceivers rely more on gestures when sensorimotor face processing is disrupted.

6.
Wu YC, Coulson S. Brain and Language, 2011, 119(3): 184-195.
Conversation is multi-modal, involving both talk and gesture. Does understanding depictive gestures engage processes similar to those recruited in the comprehension of drawings or photographs? Event-related brain potentials (ERPs) were recorded from neurotypical adults as they viewed spontaneously produced depictive gestures preceded by congruent and incongruent contexts. Gestures were presented either dynamically in short, soundless video-clips, or statically as freeze frames extracted from gesture videos. In a separate ERP experiment, the same participants viewed related or unrelated pairs of photographs depicting common real-world objects. Both object photos and gesture stimuli elicited less negative ERPs from 400 to 600 ms post-stimulus when preceded by matching versus mismatching contexts (dN450). Object photos and static gesture stills also elicited less negative ERPs between 300 and 400 ms post-stimulus (dN300). Findings demonstrate commonalities between the conceptual integration processes underlying the interpretation of iconic gestures and other types of image-based representations of the visual world.

7.
The role of color diagnosticity in object recognition and representation was assessed in three experiments. In Experiment 1a, participants named pictured objects that were strongly associated with a particular color (e.g., pumpkin and orange). Stimuli were presented in a congruent color, incongruent color, or grayscale. Results indicated that congruent color facilitated naming time, incongruent color impeded naming time, and naming times for grayscale items were situated between the congruent and incongruent conditions. Experiment 1b replicated Experiment 1a using a verification task. Experiment 2 employed a picture rebus paradigm in which participants read sentences one word at a time that included pictures of color diagnostic objects (i.e., pictures were substituted for critical nouns). Results indicated that the “reading” times of these pictures mirrored the pattern found in Experiment 1. In Experiment 3, an attempt was made to override color diagnosticity using linguistic context (e.g., a pumpkin was described as painted green). Linguistic context did not override color diagnosticity. Collectively, the results demonstrate that color information is regularly utilized in object recognition and representation for highly color diagnostic items.

8.
Including gesture in instruction facilitates learning. Why? One possibility is that gesture points out objects in the immediate context and thus helps ground the words learners hear in the world they see. Previous work on gesture's role in instruction has used gestures that either point to or trace paths on objects, thus providing support for this hypothesis. The experiments described here investigated the possibility that gesture helps children learn even when it is not produced in relation to an object but is instead produced "in the air." Children were given instruction in Piagetian conservation problems with or without gesture and with or without concrete objects. The results indicate that children given instruction with speech and gesture learned more about conservation than children given instruction with speech alone, whether or not objects were present during instruction. Gesture in instruction can thus help learners learn even when those gestures do not direct attention to visible objects, suggesting that gesture can do more for learners than simply ground arbitrary, symbolic language in the physical, observable world.

9.
When asked to explain their solutions to a problem, both adults and children gesture as they talk. These gestures at times convey information that is not conveyed in speech and thus reveal thoughts that are distinct from those revealed in speech. In this study, we use the classic Tower of Hanoi puzzle to validate the claim that gesture and speech taken together can reflect the activation of two cognitive strategies within a single response. The Tower of Hanoi is a well‐studied puzzle, known to be most efficiently solved by activating subroutines at theoretically defined choice points. When asked to explain how they solved the Tower of Hanoi puzzle, both adults and children produced significantly more gesture‐speech mismatches—explanations in which speech conveyed one path and gesture another—at these theoretically defined choice points than they produced at non‐choice points. Even when the participants did not solve the problem efficiently, gesture could be used to indicate where the participants were deciding between alternative paths. Gesture can, thus, serve as a useful adjunct to speech when attempting to discover cognitive processes in problem‐solving.

10.
Co-speech gestures traditionally have been considered communicative, but they may also serve other functions. For example, hand-arm movements seem to facilitate both spatial working memory and speech production. It has been proposed that gestures facilitate speech indirectly by sustaining spatial representations in working memory. Alternatively, gestures may affect speech production directly by activating embodied semantic representations involved in lexical search. Consistent with the first hypothesis, we found participants gestured more when describing visual objects from memory and when describing objects that were difficult to remember and encode verbally. However, they also gestured when describing a visually accessible object, and gesture restriction produced dysfluent speech even when spatial memory was untaxed, suggesting that gestures can directly affect both spatial memory and lexical retrieval.

11.
Three experiments are reported examining the effects of surface colour and brightness/texture gradients (photographic detail) on object classification and naming. Objects were drawn from classes with either structurally similar or structurally dissimilar exemplars. In Experiment 1a, object naming was facilitated by both congruent surface colour and photographic detail, with the effects of these two variables combining under-additively. In addition, incongruent colour disrupted naming accuracy. These effects tended to be larger on objects from structurally similar classes than on objects from structurally dissimilar classes. Experiment 1b examined superordinate classification. There were again advantages due to congruent colour and photographic detail on responses to objects from both structurally similar and structurally dissimilar classes. Incongruent colour disrupted classification accuracy on structurally distinct but not structurally similar items. For structurally similar items, the advantages of congruent surface attributes on classification were smaller than on naming, but this was not the case for structurally dissimilar items. Experiment 2 examined subordinate classification of structurally similar objects. Now effects of congruent and incongruent colour, but not of photographic detail, were found. Experiment 3 showed that congruent and incongruent colour effects occur only when the colours occupy the internal surfaces of objects. The results suggest that surface details can affect object recognition and naming, depending upon: (1) the degree to which objects must be differentiated for a correct response to be made, and (2) the nature of the rate-limiting process determining performance.

12.
The effects of prohibiting gestures on children's lexical retrieval ability
Two alternative accounts have been proposed to explain the role of gestures in thinking and speaking. The Information Packaging Hypothesis (Kita, 2000) claims that gestures are important for the conceptual packaging of information before it is coded into a linguistic form for speech. The Lexical Retrieval Hypothesis (Rauscher, Krauss & Chen, 1996) sees gestures as functioning more at the level of speech production in helping the speaker to find the right words. The latter hypothesis has not been fully explored with children. In this study children were given a naming task under conditions that allowed and restricted gestures. Children named more words correctly and resolved more 'tip-of-the-tongue' states when allowed to gesture than when not, suggesting that gestures facilitate access to the lexicon in children and are important for speech production as well as conceptualization.

13.
Neural correlates of bimodal speech and gesture comprehension
The present study examined the neural correlates of speech and hand gesture comprehension in a naturalistic context. Fifteen participants watched audiovisual segments of speech and gesture while event-related potentials (ERPs) were recorded to the speech. Gesture influenced the ERPs to the speech. Specifically, there was a right-lateralized N400 effect, reflecting semantic integration, when gestures mismatched versus matched the speech. In addition, early sensory components in bilateral occipital and frontal sites differentiated speech accompanied by matching versus non-matching gestures. These results suggest that hand gestures may be integrated with speech at early and late stages of language processing.

14.
An ongoing issue in visual cognition concerns the roles played by low- and high-level information in guiding visual attention, with current research remaining inconclusive about the interaction between the two. In this study, we bring fresh evidence into this long-standing debate by investigating visual saliency and contextual congruency during object naming (Experiment 1), a task in which visual processing interacts with language processing. We then compare the results of this experiment to data of a memorization task using the same stimuli (Experiment 2). In Experiment 1, we find that both saliency and congruency influence visual and naming responses and interact with linguistic factors. In particular, incongruent objects are fixated later and less often than congruent ones. However, saliency is a significant predictor of object naming, with salient objects being named earlier in a trial. Furthermore, the saliency and congruency of a named object interact with the lexical frequency of the associated word and mediate the time-course of fixations at naming. In Experiment 2, we find a similar overall pattern in the eye-movement responses, but only the congruency of the target is a significant predictor, with incongruent targets fixated less often than congruent targets. Crucially, this finding contrasts with claims in the literature that incongruent objects are more informative than congruent objects by deviating from scene context and hence need longer processing. Overall, this study suggests that different sources of information are interactively used to guide visual attention on the targets to be named and raises new questions for existing theories of visual attention.

15.
Do the gestures that speakers produce while talking significantly benefit listeners' comprehension of the message? This question has been the topic of many research studies over the previous 35 years, and there has been little consensus. The present meta-analysis examined the effect sizes from 63 samples in which listeners' understanding of a message was compared when speech was presented alone versus when speech was presented with gestures. It was found that across samples, gestures do provide a significant, moderate benefit to communication. Furthermore, the magnitude of this effect is moderated by 3 factors. First, effects of gesture differ as a function of gesture topic, such that gestures that depict motor actions are more communicative than those that depict abstract topics. Second, effects of gesture on communication are larger when the gestures are not completely redundant with the accompanying speech; effects are smaller when there is more overlap between the information conveyed in the 2 modalities. Third, the size of the effect of gesture is dependent on the age of the listeners, such that children benefit more from gestures than do adults. Remaining questions for future research are highlighted.

16.
Wu YC, Coulson S. Brain and Language, 2007, 101(3): 234-245.
EEG was recorded as adults watched short segments of spontaneous discourse in which the speaker's gestures and utterances contained complementary information. Videos were followed by one of four types of picture probes: cross-modal related probes were congruent with both speech and gestures; speech-only related probes were congruent with information in the speech, but not the gesture; and two sorts of unrelated probes were created by pairing each related probe with a different discourse prime. Event-related potentials (ERPs) elicited by picture probes were measured within the time windows of the N300 (250-350 ms post-stimulus) and N400 (350-550 ms post-stimulus). Cross-modal related probes elicited smaller N300 and N400 than speech-only related ones, indicating that pictures were easier to interpret when they corresponded with gestures. N300 and N400 effects were not due to differences in the visual complexity of each probe type, since the same cross-modal and speech-only picture probes elicited N300 and N400 with similar amplitudes when they appeared as unrelated items. These findings extend previous research on gesture comprehension by revealing how iconic co-speech gestures modulate conceptualization, enabling listeners to better represent visuo-spatial aspects of the speaker's meaning.

17.
Memory for series of action phrases improves in listeners when speakers accompany each phrase with congruent gestures compared to when speakers stay still. Studies reveal that the listeners’ motor system, at encoding, plays a crucial role in this enactment effect. We present two experiments on gesture observation, which explored the role of the listeners’ motor system at recall. The participants listened to the phrases uttered by a speaker in two conditions in each experiment. In the gesture condition, the speaker uttered the phrases with accompanying congruent gestures, and in the no-gesture condition, the speaker stayed still while uttering the phrases. The participants were then invited, in both conditions of the experiments, to perform a motor task while recalling the phrases proffered by the speaker. The results revealed that the advantage of observing gestures on memory disappears if the listeners move their arms and hands at recall (the same motor effectors moved by the speaker, Experiment 1a), but not when the listeners move their legs and feet (different motor effectors from those moved by the speaker, Experiment 1b). The results suggest that the listeners’ motor system is involved not only during the encoding of action phrases uttered by a speaker but also when recalling these phrases during retrieval.

18.
We studied how gesture use changes with culture, age and increased spoken language competence. A picture-naming task was presented to British (N = 80) and Finnish (N = 41) typically developing children aged 2–5 years. British children were found to gesture more than Finnish children and, in both cultures, gesture production decreased after the age of two. Two-year-olds used more deictic than iconic gestures compared with older children, and gestured more before the onset of speech, rather than simultaneously or after speech. The British 3- and 5-year-olds gestured significantly more when naming praxic (manipulable) items than non-praxic items. Our results support the view that gesture serves a communicative and intrapersonal function, and the relative function may change with age. Speech and language therapists and psychologists observe the development of children’s gestures and make predictions on the basis of their frequency and type. To prevent drawing erroneous conclusions about children’s linguistic development, it is important to understand developmental and cultural variations in gesture use.

19.
When asked to explain their solutions to a problem, children often gesture and, at times, these gestures convey information that is different from the information conveyed in speech. Children who produce these gesture‐speech “mismatches” on a particular task have been found to profit from instruction on that task. We have recently found that some children produce gesture‐speech mismatches when identifying numbers at the cusp of their knowledge, for example, a child incorrectly labels a set of two objects with the word “three” and simultaneously holds up two fingers. These mismatches differ from previously studied mismatches (where the information conveyed in gesture has the potential to be integrated with the information conveyed in speech) in that the gestured response contradicts the spoken response. Here, we ask whether these contradictory number mismatches predict which learners will profit from number‐word instruction. We used the Give‐a‐Number task to measure number knowledge in 47 children (Mage = 4.1 years, SD = 0.58), and used the What's on this Card task to assess whether children produced gesture‐speech mismatches above their knower level. Children who were early in their number learning trajectories (“one‐knowers” and “two‐knowers”) were then randomly assigned, within knower level, to one of two training conditions: a Counting condition in which children practiced counting objects; or an Enriched Number Talk condition containing counting, labeling set sizes, spatial alignment of neighboring sets, and comparison of these sets. Controlling for counting ability, we found that children were more likely to learn the meaning of new number words in the Enriched Number Talk condition than in the Counting condition, but only if they had produced gesture‐speech mismatches at pretest. The findings suggest that numerical gesture‐speech mismatches are a reliable signal that a child is ready to profit from rich number instruction and provide evidence, for the first time, that cardinal number gestures have a role to play in number‐learning.

20.
Gestures and speech are clearly synchronized in many ways. However, previous studies have shown that the semantic similarity between gestures and speech breaks down as people approach transitions in understanding. Explanations for these gesture–speech mismatches, which focus on gestures and speech expressing different cognitive strategies, have been criticized for disregarding gestures’ and speech’s integration and synchronization. In the current study, we applied three different perspectives to investigate gesture–speech synchronization in an easy and a difficult task: temporal alignment, semantic similarity, and complexity matching. Participants engaged in a simple cognitive task and were assigned to either an easy or a difficult condition. We automatically measured pointing gestures, and we coded participants’ speech, to determine the temporal alignment and semantic similarity between gestures and speech. Multifractal detrended fluctuation analysis was used to determine the extent of complexity matching between gestures and speech. We found that task difficulty indeed influenced gesture–speech synchronization in all three domains. We thereby extended the phenomenon of gesture–speech mismatches to difficult tasks in general. Furthermore, we investigated how temporal alignment, semantic similarity, and complexity matching were related in each condition, and how they predicted participants’ task performance. Our study illustrates how combining multiple perspectives, originating from different research areas (i.e., coordination dynamics, complexity science, cognitive psychology), provides novel understanding about cognitive concepts in general and about gesture–speech synchronization and task difficulty in particular.
