21.
To interact functionally with our environment, our perception must locate events in time, including discerning whether sensory events are simultaneous. The Temporal Binding Window (TBW; the time window within which two stimuli tend to be integrated into one event) has been shown to relate to individual differences in perception, including schizotypy, but its relationship with subjective estimates of duration is unclear. We compare individual TBWs with individual differences in the filled duration illusion, exploiting differences in perception between empty and filled durations (the latter typically being perceived as longer). Schizotypy has been related to both of these measures and is included to explore a potential link between these tasks and enduring perceptual differences. Results suggest that individuals with a narrower TBW make longer estimates for empty durations and show less variability in both the empty and filled conditions. Exploratory analysis of the schizotypy data suggests a relationship with the TBW but is inconclusive regarding time perception.
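The abstract does not say how individual TBWs were quantified. As background, a common approach is to fit a Gaussian to the proportion of "simultaneous" responses across stimulus onset asynchronies (SOAs) and read off the window width at a fixed criterion. The sketch below illustrates that approach with made-up data; the function names, criterion, and numbers are illustrative assumptions, not taken from the study.

```python
# Minimal sketch (not the authors' code): estimate a Temporal Binding Window
# by fitting a scaled Gaussian to synchrony-judgment data and taking the
# window width where the fitted curve exceeds a criterion fraction of peak.
import numpy as np
from scipy.optimize import curve_fit

def gaussian(soa, amp, mu, sigma):
    """Scaled Gaussian psychometric curve for "simultaneous" responses."""
    return amp * np.exp(-((soa - mu) ** 2) / (2 * sigma ** 2))

# Hypothetical data: SOAs in ms (negative = auditory lead) and the observed
# proportion of "simultaneous" responses at each SOA.
soas = np.array([-300, -200, -100, 0, 100, 200, 300], dtype=float)
p_simultaneous = np.array([0.10, 0.35, 0.80, 0.95, 0.85, 0.50, 0.15])

(amp, mu, sigma), _ = curve_fit(gaussian, soas, p_simultaneous,
                                p0=[1.0, 0.0, 100.0])

# Width at criterion: solve gaussian(soa) = criterion * amp for soa.
criterion = 0.75
half_width = sigma * np.sqrt(2 * np.log(1 / criterion))
print(f"Point of subjective simultaneity: {mu:.0f} ms")
print(f"TBW (width at {criterion:.0%} of peak): {2 * half_width:.0f} ms")
```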
22.
Multisensory integration, the binding of sensory information from different sensory modalities, may contribute to perceptual symptomatology in schizophrenia, including hallucinations and aberrant speech perception. Differences in multisensory integration and in temporal processing, an important component of multisensory integration, are consistently found in schizophrenia. Evidence is emerging that these differences extend across the schizophrenia spectrum, including to individuals in the general population with higher schizotypal traits. In the current study, we investigated the relationship between schizotypal traits and perceptual functioning using audiovisual speech-in-noise, McGurk, and ternary synchrony judgment tasks. We measured schizotypal traits with the Schizotypal Personality Questionnaire (SPQ), hypothesizing that higher scores on the Unusual Perceptual Experiences and Odd Speech subscales would be associated with decreased multisensory integration, increased susceptibility to distracting auditory speech, and less precise temporal processing. Surprisingly, these measures were not associated with the predicted subscales, suggesting that these perceptual differences may not be present across the schizophrenia spectrum.
23.
Audiovisual integration (AVI) has been demonstrated to play a major role in speech comprehension. Previous research suggests that AVI in speech comprehension tolerates a temporal window of audiovisual asynchrony. However, few studies have employed audiovisual presentation to investigate AVI in person recognition. Here, participants completed an audiovisual voice familiarity task in which the synchrony of the auditory and visual stimuli was manipulated, and in which the visual speaker's identity could either correspond or not correspond to the voice. Recognition of personally familiar voices systematically improved when corresponding visual speakers were presented in near synchrony or with a slight auditory lag. Moreover, when faces of differing familiarity were presented with a voice, recognition accuracy suffered only from near synchrony to slight auditory lag. These results provide the first evidence for a temporal window for AVI in person recognition, extending from approximately 100 ms auditory lead to 300 ms auditory lag.
24.
The present study examined the effect of visual feedback on the ability to recognise and consolidate pitch information. We trained two groups of nonmusicians to play a piano piece by ear: one group received uninterrupted audiovisual feedback, while the other could hear, but not see, their hands on the keyboard. Results indicate that subjects deprived of visual information showed significantly poorer ability to recognise pitches from the musical piece they had learned. These results are notable because pitch recognition would not intuitively seem to rely on visual feedback. In addition, we show that subjects with previous experience in computer touch-typing made fewer errors during training without visual feedback, but did not show improved pitch recognition ability post-training. Our results demonstrate how sensory redundancy increases the robustness of learning, and further encourage the use of audiovisual training procedures to facilitate the learning of new skills.
25.
Observers change their audiovisual timing judgements after exposure to asynchronous audiovisual signals. The mechanism underlying this temporal recalibration is currently debated, and three broad explanations have been suggested. According to the first, the time it takes for sensory signals to propagate through the brain has changed. The second suggests that the decisional criteria used to interpret signal timing have changed, but not time perception itself. A final possibility is that a population of neurones collectively encodes relative times, and that exposure to a repeated timing relationship alters the balance of responses within this population. Here, we reduced each of these explanations to its core features in order to produce three corresponding six-parameter models, which generate contrasting predictions about how simultaneity judgements should vary across four adaptation conditions: no adaptation, synchronous adaptation, and auditory-leading or auditory-lagging adaptation. We tested the model predictions by fitting data from all four conditions simultaneously, in order to assess which model best described the complete pattern of results. The latency-shift and criterion-change models were better able to explain the results for our sample as a whole. The population-code model did, however, account for improved performance following adaptation to a synchronous adapter, and best described the results of a subset of observers who reported the fewest instances of synchrony.
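The paper's six-parameter models are not spelled out in this abstract, but the contrast between the latency-shift and criterion-change accounts can be illustrated with a toy observer model: both start from the same noisy estimate of audiovisual asynchrony, and "simultaneous" is reported when that estimate falls between two decision criteria. The parameter names and values below are illustrative assumptions, not the authors' parameterization.

```python
# Toy illustration (not the paper's models): latency-shift vs criterion-change
# accounts of temporal recalibration. Perceived asynchrony is modelled as
# Gaussian noise around the physical SOA; "simultaneous" is reported when the
# perceived asynchrony falls between two criteria (lo, hi).
import numpy as np
from scipy.stats import norm

def p_simultaneous(soa, latency_shift=0.0, lo=-80.0, hi=80.0, noise=50.0):
    """P("simultaneous") at a given SOA in ms (positive = auditory lag)."""
    mu = soa - latency_shift          # latency shift moves perception itself
    return norm.cdf(hi, mu, noise) - norm.cdf(lo, mu, noise)

soas = np.linspace(-300, 300, 7)
baseline = p_simultaneous(soas)
# Latency-shift account: adaptation changes relative signal timing, so the
# whole simultaneity curve moves toward the adapted asynchrony.
latency = p_simultaneous(soas, latency_shift=40.0)
# Criterion-change account: perception is unchanged, but the decision
# boundaries move toward the adapted asynchrony.
criterion = p_simultaneous(soas, lo=-40.0, hi=120.0)
for row in zip(soas, baseline, latency, criterion):
    print("SOA %+6.0f ms  base %.2f  latency-shift %.2f  criterion %.2f" % row)
```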
26.
To avoid collisions, pedestrians intending to cross a road need to accurately estimate the time-to-collision (TTC) of an approaching vehicle. For TTC estimation, auditory information can be considered particularly relevant when the approaching vehicle accelerates. The sound of vehicles with internal combustion engines (ICEVs) provides characteristic auditory information about the acceleration state (increasing rotational speed and engine load). For electric vehicles (EVs), however, the acoustic signature during acceleration is less salient. Although the auditory detection of EVs has been studied extensively, there is no research on the potential effects of the altered acoustic signature of EVs on TTC estimation. To close this gap, we compared TTC estimates for ICEVs and for EVs with and without an activated acoustic vehicle alerting system (AVAS). We implemented a novel interactive audiovisual virtual-reality system for studying human perception of approaching vehicles. Using acoustic recordings of real vehicles as source signals, the dynamic spatial sound field of a vehicle approaching in an urban setting was generated from a physical model of the sound propagation between vehicle and pedestrian (listener) and presented via sound field synthesis (higher-order Ambisonics). In addition to the auditory simulations, the scene was presented visually on a head-mounted display with head tracking. Participants estimated the TTC of vehicles that either approached at a constant speed or accelerated. At constant speed, TTC estimates for EVs with and without AVAS were similar to those for ICEVs. For accelerating vehicles, in contrast, there was a substantial effect of vehicle type on the TTC estimates. For the EVs, the mean TTC estimates showed significant overestimation: subjects on average perceived the time of arrival of the EV at their position as longer than it actually was. The extent of overestimation increased with acceleration and with the presented TTC. This pattern resembles first-order TTC estimation, i.e., a failure to take the acceleration into account, which is consistently reported in the literature for visual-only presentations of accelerating objects. In comparison, the overestimation of TTC was largely reduced for the accelerating ICEVs. The AVAS somewhat improved the TTC estimates for the accelerating EVs, but without reaching the level of accuracy obtained for the ICEVs. In real traffic scenarios, overestimating the TTC of an approaching vehicle may lead to risky road-crossing decisions. Our finding that pedestrians are significantly less able to use the acoustic information emitted by accelerating EVs for their TTC judgments, compared with accelerating ICEVs, therefore has important implications for road safety and for the design of AVAS technologies.
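The "first-order" pattern the abstract refers to can be made concrete with a small worked example: a first-order estimate extrapolates from current speed alone (TTC = distance / speed), whereas the true TTC of a positively accelerating vehicle follows from d = vt + ½at². The numbers below are invented for illustration and are not taken from the study.

```python
# Worked example (illustrative values only): ignoring acceleration makes the
# first-order TTC estimate overestimate the true TTC of an accelerating vehicle.
import math

def true_ttc(distance_m, speed_mps, accel_mps2):
    """Time to cover distance_m, solving d = v*t + 0.5*a*t^2 for t."""
    if accel_mps2 == 0:
        return distance_m / speed_mps
    return (-speed_mps
            + math.sqrt(speed_mps ** 2 + 2 * accel_mps2 * distance_m)) / accel_mps2

def first_order_ttc(distance_m, speed_mps):
    """Estimate based on current speed only (acceleration ignored)."""
    return distance_m / speed_mps

d, v, a = 40.0, 8.0, 2.0  # 40 m away, 8 m/s (~29 km/h), accelerating at 2 m/s^2
print(f"true TTC:        {true_ttc(d, v, a):.2f} s")     # ~3.5 s
print(f"first-order TTC: {first_order_ttc(d, v):.2f} s") # 5.0 s (overestimate)
```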