Integrating conceptual knowledge within and across representational modalities

Authors: Chris McNorgan, Jackie Reid, Ken McRae

Affiliation: University of Western Ontario, London, Canada
Abstract: Research suggests that concepts are distributed across brain regions specialized for processing information from different sensorimotor modalities. Multimodal semantic models fall into one of two broad classes differentiated by the assumed hierarchy of convergence zones over which information is integrated. In shallow models, within- and between-modality communication is accomplished either through direct connectivity or through a central semantic hub. In deep models, modalities are connected via cascading integration sites with successively wider receptive fields. Four experiments provide the first direct behavioral tests of these models using speeded tasks involving feature inference and concept activation. Shallow models predict no difference between within-modal and cross-modal conditions in either task, whereas deep models predict a within-modal advantage for feature inference but a cross-modal advantage for concept activation. Experiments 1 and 2 used relatedness judgments to tap participants' knowledge of relations for within- and cross-modal feature pairs. Experiments 3 and 4 used a dual-feature verification task. The pattern of decision latencies across Experiments 1–4 is consistent with a deep integration hierarchy.
Keywords: Semantic memory; Multimodal representations; Binding problem; Embodied cognition