1.
We investigated whether the "unity assumption," according to which an observer assumes that two different sensory signals refer to the same underlying multisensory event, influences the multisensory integration of audiovisual speech stimuli. Syllables (Experiments 1, 3, and 4) or words (Experiment 2) were presented to participants at a range of different stimulus onset asynchronies using the method of constant stimuli. Participants made unspeeded temporal order judgments regarding which stream (either auditory or visual) had been presented first. The auditory and visual speech stimuli in Experiments 1-3 were either gender matched (i.e., a female face presented together with a female voice) or else gender mismatched (i.e., a female face presented together with a male voice). In Experiment 4, different utterances from the same female speaker were used to generate the matched and mismatched speech video clips. Measured in terms of the just noticeable difference, the participants in all four experiments found it easier to judge which sensory modality had been presented first when evaluating mismatched stimuli than when evaluating the matched-speech stimuli. These results therefore provide the first empirical support for the "unity assumption" in the domain of the multisensory temporal integration of audiovisual speech stimuli.
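Several of the abstracts in this list report temporal sensitivity as a just noticeable difference (JND) and a point of subjective simultaneity (PSS), both read off a psychometric function fitted to temporal order judgments across stimulus onset asynchronies (SOAs). As a minimal illustrative sketch (not any study's actual analysis code; the data, grid ranges, and function names below are assumptions for the demo), one can fit a cumulative Gaussian by brute-force least squares: the fitted mean is the PSS, and the JND is conventionally the 75% point minus the 50% point, i.e. 0.6745 times the fitted standard deviation.

```python
import math

def cum_gauss(x, mu, sigma):
    # Cumulative Gaussian: probability of a "visual first" response at SOA x (ms)
    return 0.5 * (1.0 + math.erf((x - mu) / (sigma * math.sqrt(2.0))))

def fit_toj(soas, p_vis_first):
    # Brute-force least-squares fit over an integer (mu, sigma) grid.
    # mu is the PSS; JND = 75% point - 50% point = 0.6745 * sigma.
    best_err, best_mu, best_sigma = float("inf"), 0, 1
    for mu in range(-100, 101):       # candidate PSS values, in ms (assumed range)
        for sigma in range(5, 201):   # candidate slopes, in ms (assumed range)
            err = sum((cum_gauss(s, mu, sigma) - p) ** 2
                      for s, p in zip(soas, p_vis_first))
            if err < best_err:
                best_err, best_mu, best_sigma = err, mu, sigma
    return best_mu, 0.6745 * best_sigma

# Noiseless demo data generated from PSS = 20 ms, sigma = 50 ms
soas = [-200, -133, -66, 0, 66, 133, 200]
probs = [cum_gauss(s, 20.0, 50.0) for s in soas]
pss, jnd = fit_toj(soas, probs)
print(pss, round(jnd, 1))  # recovers PSS ~ 20 ms, JND ~ 33.7 ms
```

A smaller fitted JND means finer temporal discrimination, which is why the matched-stimulus trials in the study above (harder to order temporally) yield larger JNDs than the mismatched trials.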
2.
Participants made unspeeded temporal order judgments (TOJs) regarding which occurred first, an auditory or a visual target stimulus, when they were presented at a variety of different stimulus onset asynchronies. The target stimuli were presented either in isolation or positioned randomly among a stream of three synchronous audiovisual distractors. The largest just noticeable differences were reported when the targets were presented in the middle of the distractor stream. When the targets were presented at the beginning of the stream, performance was no worse than when the audiovisual targets were presented in isolation. Subsequent experiments revealed that performance improved somewhat when the position of the target was fixed or when the target was made physically distinctive from the distractors. These results show that audiovisual TOJs are impaired by the presence of audiovisual distractors and that this cost can be ameliorated by directing attention to the appropriate temporal position within the stimulus stream.
4.
Speed has been proposed as a modulating factor on duration estimation. However, the different measurement methodologies and experimental designs used have led to inconsistent results across studies, and, thus, the issue of how speed modulates time estimation remains unresolved. Additionally, no studies have looked into the role of expertise on spatiotemporal tasks (tasks requiring high temporal and spatial acuity; e.g., dancing) and susceptibility to modulations of speed in timing judgments. In the present study, therefore, using naturalistic, dynamic dance stimuli, we aimed at defining the role of speed and the interaction of speed and experience on time estimation. We presented videos of a dancer performing identical ballet steps in fast and slow versions, while controlling for the number of changes present. Professional dancers and non-dancers performed duration judgments through a production and a reproduction task. Analysis revealed a significantly larger underestimation of fast videos as compared to slow ones during reproduction. The exact opposite result was true for the production task. Dancers were significantly less variable in their time estimations as compared to non-dancers. Speed and experience, therefore, affect the participants' estimates of time. Results are discussed in relation to the theoretical framework of current models by focusing on the role of attention.
5.
This special issue on temporal processing within and across senses was the outcome of a two-day workshop that took place in Tübingen, Germany. The aim of the workshop and this special issue was to advance our knowledge on timing and the senses and to bring together two lines of research that have not yet interacted, those of synchrony and duration perception.
6.
Vatakis, A. and Spence, C. (in press) [Crossmodal binding: Evaluating the 'unity assumption' using audiovisual speech stimuli. Perception & Psychophysics] recently demonstrated that when two briefly presented speech signals (one auditory and the other visual) refer to the same audiovisual speech event, people find it harder to judge their temporal order than when they refer to different speech events. Vatakis and Spence argued that the 'unity assumption' facilitated crossmodal binding on the former (matching) trials by means of a process of temporal ventriloquism. In the present study, we investigated whether the 'unity assumption' would also affect the binding of non-speech stimuli (video clips of object action or musical notes). The auditory and visual stimuli were presented at a range of stimulus onset asynchronies (SOAs) using the method of constant stimuli. Participants made unspeeded temporal order judgments (TOJs) regarding which modality stream had been presented first. The auditory and visual musical and object action stimuli were either matched (e.g., the sight of a note being played on a piano together with the corresponding sound) or else mismatched (e.g., the sight of a note being played on a piano together with the sound of a guitar string being plucked). However, in contrast to the results of Vatakis and Spence's recent speech study, no significant difference in the accuracy of temporal discrimination performance for the matched versus mismatched video clips was observed. Reasons for this discrepancy are discussed.
7.
Strybel, T. Z., & Vatakis, A. (2004). Perception, 33(9), 1033-1048.
Unimodal auditory and visual apparent motion (AM) and bimodal audiovisual AM were investigated to determine the effects of crossmodal integration on motion perception and direction-of-motion discrimination in each modality. To determine the optimal stimulus onset asynchrony (SOA) ranges for motion perception and direction discrimination, we initially measured unimodal visual and auditory AMs using one of four durations (50, 100, 200, or 400 ms) and ten SOAs (40-450 ms). In the bimodal conditions, auditory and visual AM were measured in the presence of temporally synchronous, spatially displaced distractors that were either congruent (moving in the same direction) or conflicting (moving in the opposite direction) with respect to target motion. Participants reported whether continuous motion was perceived and its direction. With unimodal auditory and visual AM, motion perception was affected differently by stimulus duration and SOA in the two modalities, while the opposite was observed for direction of motion. In the bimodal audiovisual AM condition, discriminating the direction of motion was affected only in the case of an auditory target. The perceived direction of auditory but not visual AM was reduced to chance levels when the crossmodal distractor direction was conflicting. Conversely, motion perception was unaffected by the distractor direction and, in some cases, the mere presence of a distractor facilitated movement perception.
8.
Vatakis, A., & Spence, C. (2008). Perception, 37(1), 143-160.
Research has shown that inversion is more detrimental to the perception of faces than to the perception of other types of visual stimuli. Inverting a face results in an impairment of configural information processing that leads to slowed early face processing and reduced accuracy when performance is tested in face recognition tasks. We investigated the effects of inverting speech and non-speech stimuli on audiovisual temporal perception. Upright and inverted audiovisual video clips of a person uttering syllables (experiments 1 and 2), playing musical notes on a piano (experiment 3), or a rhesus monkey producing vocalisations (experiment 4) were presented. Participants made unspeeded temporal-order judgments regarding which modality stream (auditory or visual) appeared to have been presented first. Inverting the visual stream did not have any effect on the sensitivity of temporal discrimination responses in any of the four experiments, thus implying that audiovisual temporal integration is resilient to the effects of orientation in the picture plane. By contrast, the point of subjective simultaneity differed significantly as a function of orientation only for the audiovisual speech stimuli, but not for the non-speech stimuli or monkey calls. That is, smaller auditory leads were required for the inverted than for the upright visual speech stimuli. These results are consistent with the longer processing latencies reported previously when human faces are inverted and demonstrate that the temporal perception of dynamic audiovisual speech can be modulated by changes in the physical properties of the visual speech (i.e., by changes in orientation).
9.

Seventy-five urban middle class fifth grade children of average and above average reading ability served as subjects in an experiment designed to determine the effects of passage content (schemata) and syntactic structure (clause order) on inferential comprehension of causal structures. Subjects were presented with twelve passages varying in semantic level (easy vs. hard, based on congruence with childhood schemata background) and clause order for the target causal structure (cause-effect vs. effect-cause). Subjects were asked to specify the implied meaning of the target structure. An analysis of variance for the randomized block factorial design yielded significant effects for subject variability and semantic content. Although the syntactic variable was not significant, proportions correct suggested a trend favoring the clause order of cause-effect. An interaction between the variables was not confirmed. The findings are discussed in terms of the roles of schemata and syntax in an interactive model of reading.