Co-speech iconic gestures and visuo-spatial working memory

Authors: Ying Choon Wu, Seana Coulson

Affiliations:
1. Center for Research in Language, UC San Diego 0526, 9500 Gilman Dr., La Jolla, CA 92093, USA
2. Swartz Center for Computational Neuroscience, UC San Diego 0559, 9500 Gilman Dr., La Jolla, CA 92093, USA
3. Dept. of Cognitive Science, UC San Diego 0515, 9500 Gilman Dr., La Jolla, CA 92093, USA
Abstract: Three experiments tested the role of verbal versus visuo-spatial working memory in the comprehension of co-speech iconic gestures. In Experiment 1, participants viewed congruent discourse primes, in which the speaker's gestures matched the information conveyed by his speech, and incongruent ones, in which the semantic content of the speaker's gestures diverged from that in his speech. Discourse primes were followed by picture probes that participants judged as either related or unrelated to the preceding clip. Performance on this picture probe classification task was faster and more accurate after congruent than incongruent discourse primes. The effect of discourse congruency on response times was linearly related to measures of visuo-spatial, but not verbal, working memory capacity: participants with greater visuo-spatial WM capacity benefited more from congruent gestures. In Experiments 2 and 3, participants performed the same picture probe classification task under high and low loads on concurrent visuo-spatial (Experiment 2) and verbal (Experiment 3) memory tasks. Effects of discourse congruency and verbal WM load were additive, whereas effects of discourse congruency and visuo-spatial WM load were interactive. Results suggest that congruent co-speech gestures facilitate multi-modal language comprehension and indicate an important role for visuo-spatial WM in speech–gesture integration.
Keywords: 2343 Learning & Memory; 2720 Linguistics & Language & Speech; 2320 Sensory Perception
|