1.
Hu Z, Zhang R, Zhang Q, Liu Q, Li H. Brain and Language, 2012, 121(1): 70-75
Previous studies have found a late frontal-central audiovisual interaction in the period of about 150-220 ms post-stimulus. However, it is unclear which process this audiovisual interaction reflects: the processing of acoustic features or the classification of stimuli. To investigate this question, event-related potentials were recorded during a word-categorization task with stimuli presented in the auditory-visual modality. In the experiment, the congruency of the visual and auditory stimuli was manipulated. Results showed that, within the window of about 180-210 ms post-stimulus, category-congruent audiovisual stimuli elicited more positive values than category-incongruent audiovisual stimuli. This indicates that the late frontal-central audiovisual interaction is related to the audiovisual integration of semantic category information.
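The contrast reported above rests on a standard mean-amplitude measure: the ERP is averaged over a post-stimulus window (here roughly 180-210 ms) at fronto-central electrodes and the two conditions are compared. The sketch below is not the authors' analysis code; it is a minimal illustration assuming epoched data in a NumPy array of shape (epochs, channels, samples) and hypothetical electrode indices.

```python
import numpy as np

def mean_window_amplitude(epochs, times, t_start=0.180, t_end=0.210, channel_idx=None):
    """Mean ERP amplitude in a post-stimulus window.

    epochs      : array, shape (n_epochs, n_channels, n_samples), in microvolts
    times       : array, shape (n_samples,), epoch time axis in seconds
    channel_idx : indices of (e.g. fronto-central) channels to average over
    """
    win = (times >= t_start) & (times <= t_end)
    data = epochs if channel_idx is None else epochs[:, channel_idx, :]
    # Average over epochs, selected channels, and samples within the window.
    return data[:, :, win].mean()

# Illustrative use with simulated data (500 Hz sampling, -0.1 to 0.5 s epochs).
rng = np.random.default_rng(0)
times = np.arange(-0.1, 0.5, 1 / 500)
congruent = rng.normal(0.5, 1.0, size=(60, 32, times.size))
incongruent = rng.normal(0.0, 1.0, size=(60, 32, times.size))
fc_channels = [3, 4, 5]  # hypothetical fronto-central electrode indices

diff = (mean_window_amplitude(congruent, times, channel_idx=fc_channels)
        - mean_window_amplitude(incongruent, times, channel_idx=fc_channels))
print(f"Congruent minus incongruent mean amplitude (180-210 ms): {diff:.2f} µV")
```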
2.
Young infants can integrate auditory and visual information, and their speech perception is influenced by visual cues; by around 5 months they detect mismatches between mouth articulations and speech sounds. From 6 months of age, infants gradually shift their attention away from the eyes and towards the mouth in articulating faces, potentially to benefit from the intersensory redundancy of audiovisual (AV) cues. Using eye tracking, we investigated whether 6- to 9-month-olds showed a similar age-related increase in looking to the mouth while observing congruent and redundant versus mismatched and non-redundant speech cues. Participants distinguished between congruent and incongruent AV cues, as reflected by the amount of looking to the mouth. They showed an age-related increase in attention to the mouth, but only for non-redundant, mismatched AV speech cues. Our results highlight the role of intersensory redundancy and audiovisual mismatch mechanisms in facilitating the development of speech processing in infants under 12 months of age.
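The central eye-tracking measure in studies like this is the proportion of looking time directed at the mouth versus the eyes. As a hedged illustration rather than the authors' pipeline, the sketch below computes the proportion of gaze samples falling in each rectangular area of interest (AOI); the AOI coordinates and screen dimensions are made up for the example.

```python
import numpy as np

def aoi_proportions(gaze_xy, aois):
    """Proportion of valid gaze samples falling in each rectangular AOI.

    gaze_xy : array, shape (n_samples, 2), screen coordinates in pixels
              (NaN rows are treated as track loss and excluded)
    aois    : dict mapping AOI name to (x_min, y_min, x_max, y_max)
    """
    valid = ~np.isnan(gaze_xy).any(axis=1)
    xy = gaze_xy[valid]
    props = {}
    for name, (x0, y0, x1, y1) in aois.items():
        inside = ((xy[:, 0] >= x0) & (xy[:, 0] <= x1) &
                  (xy[:, 1] >= y0) & (xy[:, 1] <= y1))
        props[name] = inside.mean() if xy.size else np.nan
    return props

# Hypothetical AOIs for an articulating face centred on a 1280 x 1024 display.
aois = {"eyes": (490, 300, 790, 420), "mouth": (540, 560, 740, 680)}
rng = np.random.default_rng(1)
gaze = rng.uniform([400, 200], [880, 800], size=(3000, 2))  # fake 60 s of gaze at 50 Hz
print(aoi_proportions(gaze, aois))
```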
3.
Emotional expression, and how it is lateralized across the two sides of the face, may influence how we detect audiovisual speech. To investigate how these components interact, we conducted experiments comparing the perception of sentences expressed with happy, sad, and neutral emotions. In addition, we isolated the facial asymmetries for affective and speech processing by independently testing the two sides of a talker's face. These asymmetrical differences were exaggerated using dynamic facial chimeras in which left- or right-face halves were paired with their mirror image during speech production. Results suggest that there are facial asymmetries in audiovisual speech such that the right side of the face and right-facial chimeras supported better speech perception than their left-face counterparts. Affective information was also critical: happy expressions tended to improve speech performance on both sides of the face relative to all other emotions, whereas sad expressions generally inhibited visual speech information, particularly from the left side of the face. The results suggest that approach-related information may facilitate visual and auditory speech detection.
4.
Visual information has been observed to be crucial for audience members during musical performances. The present study used an eye tracker to investigate audience members' gazes while they watched an audiovisual musical ensemble performance, building on evidence that the melody part dominates auditory attention when listening to multipart music with different melodic lines, and on the joint-attention theory of gaze. We presented singing performances by a female duo. The main findings were as follows: (1) the melody part (soprano) attracted more visual attention than the accompaniment part (alto) throughout the piece; (2) joint attention emerged when the singers shifted their gazes toward their co-performer, suggesting that inter-performer gazing interactions, which play a spotlight role, mediated performer-audience visual interaction; and (3) the musical part (melody or accompaniment) strongly influenced the total duration of audience members' gazes, while the spotlight effect of gaze was limited to just after the singers' gaze shifts.
5.
We present an investigation of the perception of authenticity in audiovisual laughter, in which we contrast spontaneous and volitional samples and examine the contributions of unimodal affective information to multimodal percepts. In a pilot study, we demonstrate that listeners perceive spontaneous laughs as more authentic than volitional ones, both in unimodal (audio-only, visual-only) and multimodal contexts (audiovisual). In the main experiment, we show that the discriminability of volitional and spontaneous laughter is enhanced for multimodal laughter. Analyses of relationships between affective ratings and the perception of authenticity show that, while both unimodal percepts significantly predict evaluations of audiovisual laughter, it is auditory affective cues that have the greater influence on multimodal percepts. We discuss differences and potential mismatches in emotion signalling through voices and faces, in the context of spontaneous and volitional behaviour, and highlight issues that should be addressed in future studies of dynamic multimodal emotion processing.
6.
Escoffier N, Tillmann B. Cognition, 2008, 107(3): 1070-1083
Harmonic priming studies have provided evidence that musical expectations influence sung phoneme monitoring, with facilitated processing for phonemes sung on tonally related (expected) chords in comparison to less-related (less-expected) chords [Bigand, Tillmann, Poulin, D’Adamo, and Madurell (2001). The effect of harmonic context on phoneme monitoring in vocal music. Cognition, 81, B11–B20]. This tonal relatedness effect has suggested two interpretations: (a) music and language processing interact at some level; and (b) the tonal functions of chords influence task performance via listeners’ attention. Our study investigated these hypotheses by exploring whether the effect of tonal relatedness extends to the processing of visually presented syllables (Experiments 1 and 2) and geometric forms (Experiments 3 and 4). In Experiments 1–4, visual target identification was faster when the musical background fulfilled listeners’ expectations (i.e., when a related chord was played simultaneously). In Experiment 4, the addition of a baseline condition (i.e., without an established tonal center) further showed that the observed difference was due to facilitation linked to the related chord and not to inhibition or disruption caused by the less-related chord. This outcome suggests that musical structures influence attentional mechanisms and that these mechanisms are shared between the auditory and visual modalities. The implications for research investigating neural correlates shared by music and language processing are discussed.
7.
We investigated the effects of linguistic experience and language familiarity on the perception of audio-visual (A-V) synchrony in fluent speech. In Experiment 1, we tested a group of monolingual Spanish- and Catalan-learning 8-month-old infants with a video clip of a person speaking Spanish. Following habituation to the audiovisually synchronous video, infants saw and heard desynchronized clips of the same video in which the audio stream now preceded the video stream by 366, 500, or 666 ms. In Experiment 2, monolingual Catalan and Spanish infants were tested with a video clip of a person speaking English. In both experiments, infants detected the 666 ms and 500 ms asynchronies; that is, their responsiveness to A-V synchrony was the same regardless of their specific linguistic experience or familiarity with the tested language. Compared with previous results from infant studies using isolated audiovisual syllables, these results show that infants are more sensitive to the A-V temporal relations inherent in fluent speech. Furthermore, the absence of a language-familiarity effect on the detection of A-V speech asynchrony at eight months of age is consistent with the broad perceptual tuning usually observed in infants’ responses to linguistic input at this age.
8.
This study examined whether training on a nonverbal auditory-visual matching task had a remedial effect on reading skills in developmental dyslexia. A pretest/post-test design was used with Swedish children (N = 41) between the ages of 7 and 12. Training comprised twice-weekly 15-minute sessions over eight weeks. Auditory-visual matching improved during the training period. There were also improvements in some reading test scores, especially in reading nonsense words and in reading speed. These improvements in tasks thought to rely on phonological processing suggest that reading difficulties in dyslexia may stem in part from more basic perceptual difficulties, including those required to manage the visual and auditory components of the decoding task. The utility of the concept of auditory structuring is discussed in relation to auditory and phonological processing skills when a child learns to read.
9.
Thirty second-grade primary school students, 24 fifth-grade primary school students, and 29 first-year university students served as participants. Using the McGurk-effect paradigm, we examined the characteristics and developmental course of audiovisual speech perception in native Mandarin speakers. Participants in all three age groups were tested under both audio-only and audiovisual conditions, and their task was to report aloud the stimulus they heard. The results showed that (1) second graders, fifth graders, and university students whose native language is Chinese were all influenced by visual cues when processing monosyllables in a natural listening environment, exhibiting the McGurk effect; and (2) the degree to which the three groups were influenced by visual speech, that is, the strength of the McGurk effect, did not differ significantly, showing none of the developmental trend reported for native English speakers. These results support the hypothesis that the McGurk effect is universal.
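A common way to quantify the strength of the McGurk effect across groups, as compared in this study, is the proportion of incongruent audiovisual trials on which the reported syllable is not the auditory one (i.e., fused or visually driven responses). The sketch below is a generic illustration with made-up response records, not this study's analysis.

```python
from collections import defaultdict

# Hypothetical trial records: (group, auditory_syllable, reported_syllable),
# for audiovisually incongruent trials only.
trials = [
    ("grade2", "ba", "da"), ("grade2", "ba", "ba"), ("grade2", "pa", "ta"),
    ("grade5", "ba", "da"), ("grade5", "pa", "pa"), ("grade5", "ba", "ga"),
    ("adult",  "ba", "da"), ("adult",  "ba", "da"), ("adult",  "pa", "pa"),
]

def mcgurk_strength(trials):
    """Per group: proportion of incongruent trials with a non-auditory report."""
    counts = defaultdict(lambda: [0, 0])  # group -> [non-auditory reports, total trials]
    for group, auditory, reported in trials:
        counts[group][1] += 1
        if reported != auditory:
            counts[group][0] += 1
    return {g: non_aud / total for g, (non_aud, total) in counts.items()}

print(mcgurk_strength(trials))  # each toy group here: 2/3 non-auditory reports
```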
10.
Infants and adults are well able to match auditory and visual speech, but the cues on which they rely (viz. temporal, phonetic, and energetic correspondence between the auditory and visual speech streams) may differ. Here we assessed the relative contribution of these cues using sine-wave speech (SWS). Adults (N = 52) and infants (N = 34, aged between 5 and 15 months) matched two trisyllabic speech sounds (‘kalisu’ and ‘mufapi’), either natural or SWS, with visual speech information. On each trial, adults saw two articulating faces and matched a sound to one of them, while infants were presented with the same stimuli in a preferential-looking paradigm. Adults’ performance was almost flawless with natural speech but significantly less accurate with SWS. In contrast, infants matched the sound to the articulating face equally well for natural speech and SWS. These results suggest that infants rely less on phonetic cues than adults do when matching audio to visual speech. This is in line with the notion that the ability to extract phonetic information from the visual signal increases during development, and suggests that phonetic knowledge might not be the basis for early audiovisual correspondence detection in speech.
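Sine-wave speech replaces the natural signal with a few time-varying sinusoids that follow the formant tracks, stripping most phonetic fine detail while preserving the gross spectro-temporal pattern. The sketch below is only a schematic illustration, not the stimulus-generation method of the study: it assumes the formant frequency and amplitude tracks have already been estimated (in practice via LPC or a tool such as Praat) and synthesizes the sinusoids by phase accumulation.

```python
import numpy as np

def synthesize_sws(formant_freqs, formant_amps, frame_rate, sample_rate=16000):
    """Sum of sinusoids following per-frame formant frequency/amplitude tracks.

    formant_freqs : array, shape (n_frames, n_formants), in Hz
    formant_amps  : array, shape (n_frames, n_formants), linear amplitude
    frame_rate    : analysis frames per second (e.g. 100)
    """
    n_frames, n_formants = formant_freqs.shape
    n_samples = int(n_frames * sample_rate / frame_rate)
    t_frames = np.arange(n_frames) / frame_rate
    t_samples = np.arange(n_samples) / sample_rate
    signal = np.zeros(n_samples)
    for k in range(n_formants):
        # Interpolate each track to the sample rate, then integrate frequency to phase.
        freq = np.interp(t_samples, t_frames, formant_freqs[:, k])
        amp = np.interp(t_samples, t_frames, formant_amps[:, k])
        phase = 2 * np.pi * np.cumsum(freq) / sample_rate
        signal += amp * np.sin(phase)
    return signal / max(np.abs(signal).max(), 1e-9)  # peak-normalize

# Toy example: three formant-like tracks gliding over one second (100 frames).
frames = 100
freqs = np.stack([np.linspace(500, 700, frames),
                  np.linspace(1500, 1200, frames),
                  np.linspace(2500, 2600, frames)], axis=1)
amps = np.ones_like(freqs) * np.array([1.0, 0.6, 0.3])
sws = synthesize_sws(freqs, amps, frame_rate=100)
```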