11.
This study investigated audiovisual synchrony perception in a rhythmic context, where the sound was not consequent upon the observed movement. Participants judged synchrony between a bouncing point-light figure and an auditory rhythm in two experiments. Two questions were of interest: (1) whether the reference in the visual movement, with which the auditory beat should coincide, relies on a position or a velocity cue; (2) whether the figure form and motion profile affect synchrony perception. Experiment 1 required synchrony judgments with respect to the same (lowest) position of the movement in four visual conditions: two figure forms (human or non-human) combined with two motion profiles (human or ball trajectory). Whereas figure form did not affect synchrony perception, the point of subjective simultaneity differed between the two motions, suggesting that participants adopted the peak velocity in each downward trajectory as their visual reference. Experiment 2 further demonstrated that, when judgments were made with respect to the highest position, the maximal synchrony response was considerably reduced for ball motion, which lacked a peak velocity in the upward trajectory. The finding of peak velocity as a cue parallels results of visuomotor synchronization tasks employing biological stimuli, suggesting that synchrony judgment with rhythmic motions relies on the perceived visual beat.
12.
Japanese 8-month-olds were tested to investigate the matching of particular lip movements to corresponding non-canonical sounds, namely a bilabial trill (BT) and a whistle (WL). The results showed that the infants succeeded in lip-voice matching for the bilabial trill, whereas they failed to do so for the whistle.
13.
In some people, visual stimulation evokes auditory sensations. How prevalent and how perceptually real is this? 22% of our neurotypical adult participants responded 'Yes' when asked whether they heard faint sounds accompanying flash stimuli, and showed significantly better ability to discriminate visual 'Morse-code' sequences. This benefit might arise from an ability to recode visual signals as sounds, thus taking advantage of superior temporal acuity of audition. In support of this, those who showed better visual relative to auditory sequence discrimination also had poorer auditory detection in the presence of uninformative visual flashes, though this was independent of awareness of visually-evoked sounds. Thus a visually-evoked auditory representation may occur subliminally and disrupt detection of real auditory signals. The frequent natural correlation between visual and auditory stimuli might explain the surprising prevalence of this phenomenon. Overall, our results suggest that learned correspondences between strongly correlated modalities may provide a precursor for some synaesthetic abilities.
14.
Perception of intersensory temporal order is particularly difficult for (continuous) audiovisual speech, as perceivers may find it difficult to notice substantial timing differences between speech sounds and lip movements. Here we tested whether this occurs because audiovisual speech is strongly paired (the “unity assumption”). Participants made temporal order judgments (TOJ) and simultaneity judgments (SJ) about sine-wave speech (SWS) replicas of pseudowords and the corresponding video of the face. Listeners in speech and non-speech mode were equally sensitive when judging audiovisual temporal order. Yet, using the McGurk effect, we could demonstrate that the sound was more likely to be integrated with lipread speech when it was heard as speech than as non-speech. Judging temporal order in audiovisual speech is thus unaffected by whether the auditory and visual streams are paired. Conceivably, previously found differences between speech and non-speech stimuli are not due to the putative “special” nature of speech, but rather reflect low-level stimulus differences.
15.
We investigated whether attention to a talker’s eyes in 12-month-old infants is related to their communication and social abilities. We measured infant attention to a talker’s eyes and mouth with a Tobii eye-tracker and examined the correlation between attention to the talker’s eyes and scores on the Adaptive Behavior Questionnaire from the Bayley Scales of Infant and Toddler Development (BSID-III). Results indicated a positive relationship between eye gaze and scores on the Social and Communication subscales of the BSID-III.
16.
The present functional magnetic resonance imaging (fMRI) study was designed to investigate the neural substrates involved in the audiovisual processing of disyllabic German words and pseudowords. Twelve dyslexic and 13 nondyslexic adults performed a lexical decision task while stimuli were presented unimodally (either aurally or visually) or bimodally (aurally and visually at the same time). The behavioral data collected during the experiment showed more accurate processing for bimodally than for unimodally presented stimuli, irrespective of group. Words were processed faster than pseudowords. Notably, no group differences were found for either accuracy or reaction times. With respect to brain responses, nondyslexic adults showed stronger hemodynamic responses than dyslexic adults in the left supramarginal gyrus (SMG) as well as in the right-hemisphere superior temporal sulcus (STS). Furthermore, dyslexic adults showed reduced responses to purely auditory stimulation and enhanced hemodynamic responses to audiovisual as well as visual stimulation in the right anterior insula. Our behavioral results show that both groups easily identified the disyllabic proper nouns they were presented with. Our fMRI results indicate that dyslexics show less neuronal involvement of heteromodal and extrasylvian regions, namely the STS, SMG, and insula, when decoding phonological information. We posit that dyslexic adults show deficient word processing, which could possibly be attributed to deficits in phoneme-to-grapheme mapping. This problem may be caused by impaired audiovisual processing in multimodal areas.
17.
The McGurk effect is usually presented as an example of fast, automatic, multisensory integration. We report a series of experiments designed to directly assess these claims. We used a syllabic version of the speeded classification paradigm, whereby response latencies to the first (target) syllable of spoken word-like stimuli are slowed down when the second (irrelevant) syllable varies from trial to trial. This interference effect is interpreted as a failure of selective attention to filter out the irrelevant syllable. In Experiment 1 we reproduced the syllabic interference effect with bimodal stimuli containing auditory as well as visual lip movement information, thus confirming the generalizability of the phenomenon. In subsequent experiments we were able to produce (Experiment 2) and to eliminate (Experiment 3) syllabic interference by introducing 'illusory' (McGurk) audiovisual stimuli in the irrelevant syllable, suggesting that audiovisual integration occurs prior to attentional selection in this paradigm.
18.
When people walk side by side, they often synchronize their steps. To achieve this, individuals might cross-modally match audiovisual signals from the movements of the partner with kinesthetic, cutaneous, visual, and auditory signals from their own movements. Because signals from different sensory systems are processed with noise and at different latencies, the challenge for the CNS is to derive the best estimate from this conflicting information. This is currently thought to be done by a mechanism operating as a Maximum Likelihood Estimator (MLE). The present work investigated whether audiovisual signals from the partner are integrated according to MLE in order to synchronize steps during walking. Three experiments were conducted in which the sensory cues from a walking partner were virtually simulated. In Experiment 1, seven participants were instructed to synchronize with human-sized point-light walkers and/or footstep sounds. Results revealed the highest synchronization performance with auditory and audiovisual cues, quantified by the time to achieve synchronization and by synchronization variability. However, this auditory dominance effect might have been due to artifacts of the setup. Therefore, in Experiment 2, human-sized virtual mannequins were implemented, and audiovisual stimuli were rendered in real time so that they were synchronous and co-localized. All four participants synchronized best with audiovisual cues, and for three of the four the results point toward optimal integration consistent with the MLE model. Experiment 3 yielded performance decrements for all three participants when the cues were incongruent. Overall, these findings suggest that individuals might optimally integrate audiovisual cues to synchronize steps during side-by-side walking.
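For reference, the MLE account invoked in the abstract above corresponds to the standard reliability-weighted cue-combination formulation. The sketch below gives that textbook formulation only; the symbols (unimodal estimates and their noise) are generic placeholders and are not quantities reported in the study.

```latex
% Standard MLE cue-combination predictions (textbook formulation).
% \hat{t}_A, \hat{t}_V are the auditory and visual estimates of the
% partner's step timing; \sigma_A, \sigma_V their noise standard
% deviations. These symbols are illustrative, not taken from the paper.
\[
\begin{aligned}
  \hat{t}_{AV} &= w_A\,\hat{t}_A + w_V\,\hat{t}_V,
  \qquad w_A = \frac{\sigma_V^{2}}{\sigma_A^{2}+\sigma_V^{2}},
  \qquad w_V = \frac{\sigma_A^{2}}{\sigma_A^{2}+\sigma_V^{2}},\\
  \sigma_{AV}^{2} &= \frac{\sigma_A^{2}\,\sigma_V^{2}}{\sigma_A^{2}+\sigma_V^{2}}
  \;\le\; \min\bigl(\sigma_A^{2},\,\sigma_V^{2}\bigr).
\end{aligned}
\]
```

Under this model the bimodal (audiovisual) variance should never exceed the better unimodal variance, which is the signature the study appears to compare against when contrasting audiovisual with unimodal synchronization variability.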
19.
The ability to predict the effects of actions is necessary to behave properly in our physical and social world. Here, we describe how the ability to predict the consequences of complex gestures can change the way we integrate sight and sound when relevant visual information is missing. Six drummers and six novices were asked to judge audiovisual synchrony for drumming point-light displays in which the visual information was manipulated to eliminate or include the drumstick–drumhead impact point. In the condition with only the arm information, novices were unable to detect asynchrony, whereas drummers were. Additionally, in the conditions that included the impact point, drummers perceived the best alignment when the sight preceded the sound, while in the arm-only condition they perceived the best alignment when the sound occurred together with or preceded the sight, as would be expected if they were predicting the occurrence of the beat. Taken together, these findings suggest that humans can acquire, through practice, internal models of action which can be used to replace missing information when integrating multisensory signals from the environment.
20.
This research developed a multimodal picture-word task for assessing the influence of visual speech on phonological processing in 100 children between 4 and 14 years of age. We assessed how manipulation of ostensibly to-be-ignored auditory (A) and audiovisual (AV) phonological distractors affected picture naming, without participants consciously trying to respond to the manipulation. Results varied in complex ways as a function of age and of distractor type and modality. Results for congruent AV distractors yielded an inverted U-shaped function, with a significant influence of visual speech in 4-year-olds and 10- to 14-year-olds but not in 5- to 9-year-olds. In concert with dynamic systems theory, we proposed that the temporary loss of sensitivity to visual speech reflected a reorganization of relevant knowledge and processing subsystems, particularly phonology. We speculated that this reorganization may be associated with (a) formal literacy instruction and (b) developmental changes in multimodal processing and in auditory perceptual, linguistic, and cognitive skills.