When hearing the bark helps to identify the dog: Semantically-congruent sounds modulate the identification of masked pictures
Authors: Yi-Chuan Chen, Charles Spence
Institution: 1. ID/ASD Research Group, Nisonger Center, University Center for Excellence in Developmental Disabilities, USA; 2. Department of Psychology, The Ohio State University, USA; 3. Department of Psychology, The Ohio State University Newark, USA; 1. Laboratoire d'Étude des Mécanismes Cognitifs, University Lyon 2, 5 avenue Pierre-Mendès France, 69676 Bron Cedex, France; 2. School of Psychology, Laval University, 2325 rue des Bibliothèques, Quebec City, Quebec G1V 0A6, Canada
Abstract: We report a series of experiments designed to assess the effect of audiovisual semantic congruency on the identification of visually-presented pictures. Participants made unspeeded identification responses concerning a series of briefly-presented, and then rapidly-masked, pictures. A naturalistic sound was sometimes presented together with the picture at a stimulus onset asynchrony (SOA) that varied between 0 and 533 ms (auditory lagging). The sound could be semantically congruent, semantically incongruent, or else neutral (white noise) with respect to the target picture. The results showed that when the onset of the picture and sound occurred simultaneously, a semantically-congruent sound improved, whereas a semantically-incongruent sound impaired, participants’ picture identification performance, as compared to performance in the white-noise control condition. A significant facilitatory effect was also observed at SOAs of around 300 ms, whereas no such semantic congruency effects were observed at the longest interval (533 ms). These results therefore suggest that the neural representations associated with visual and auditory stimuli can interact in a shared semantic system. Furthermore, this crossmodal semantic interaction is not constrained by the need for the strict temporal coincidence of the constituent auditory and visual stimuli. We therefore suggest that audiovisual semantic interactions likely occur in a short-term buffer which rapidly accesses, and temporarily retains, the semantic representations of multisensory stimuli in order to form a coherent multisensory object representation. These results are explained in terms of Potter’s (1993) notion of conceptual short-term memory.
This article is indexed in ScienceDirect and other databases.