Similar articles
20 similar articles found (search time: 46 ms)
1.
The recalibration of perceived visuomotor simultaneity to vision-lead and movement-lead temporal discrepancies is marked by an underlying causal asymmetry when the movement (button press) is voluntary and self-initiated: a visual stimulus lagging the button press may be interpreted as causally linked sensory feedback (intentional or causal binding), whereas a leading visual stimulus cannot be. Here, we test whether this underlying causal asymmetry leads to directional asymmetries in the temporal recalibration of visuomotor time perception, using an interval estimation paradigm. Participants were adapted to one of three temporal discrepancies between a motor action (button press) and a visual stimulus (flashed disk): 100 ms vision-lead, simultaneity, and 100 ms movement-lead. By adjusting a point on a visual scale, participants then estimated the interval between the visual stimulus and the button press over a range of discrepancies. Comparing the results across conditions, we found that temporal recalibration appears to be implemented almost exclusively on the movement-lead side of the range of discrepancies, by a unilateral lengthening or shortening of the window of temporal integration. Interestingly, this marked asymmetry does not lead to a significantly asymmetrical recalibration of the point of subjective simultaneity or to significant differences in discriminability. This seeming contradiction (symmetrical recalibration of subjective simultaneity but asymmetrical recalibration of interval estimation) poses a challenge to common models of temporal order perception that assume an underlying time measurement process with Gaussian noise. Using a two-criterion model of the window of temporal integration, we illustrate that a compressive bias around perceived simultaneity (temporal integration), even prior to perceptual decisions about temporal order, would be very hard to detect given the sensitivity of the psychophysical procedures commonly used.
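The two-criterion model invoked in this abstract can be made concrete with a short simulation. Below is a minimal sketch, assuming a Gaussian-noise internal estimate of the visuomotor asynchrony and two decision criteria bounding the window of temporal integration (all numerical values are illustrative, not taken from the study):

```python
from math import erf, sqrt

def norm_cdf(z):
    """Standard normal CDF."""
    return 0.5 * (1.0 + erf(z / sqrt(2.0)))

def p_simultaneous(soa, sigma, c_lo, c_hi, bias=0.0):
    """P('simultaneous') for a two-criterion observer: the internal
    asynchrony estimate x ~ N(soa + bias, sigma) is judged simultaneous
    whenever c_lo < x < c_hi (the window of temporal integration)."""
    return (norm_cdf((c_hi - soa - bias) / sigma)
            - norm_cdf((c_lo - soa - bias) / sigma))

# Lengthen the window only on the movement-lead (positive SOA) side
soas = (-150.0, 0.0, 150.0)
base = [p_simultaneous(s, 50.0, -80.0, 80.0) for s in soas]
recal = [p_simultaneous(s, 50.0, -80.0, 160.0) for s in soas]
```

Lengthening only the upper criterion raises the proportion of "simultaneous" responses on the movement-lead side while leaving the vision-lead side essentially untouched, which is the kind of one-sided recalibration the abstract describes.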

2.
In two experiments we investigated the effects of voluntary movements on temporal haptic perception. Measures of sensitivity (JND) and temporal alignment (PSS) were obtained from temporal order judgments made on intermodal auditory-haptic (Experiment 1) or intramodal haptic (Experiment 2) stimulus pairs under three movement conditions. In the baseline, static condition, the arm of the participants remained stationary. In the passive condition, the arm was displaced by a servo-controlled motorized device. In the active condition, the participants moved voluntarily. The auditory stimulus was a short, 500 Hz tone presented over headphones and the haptic stimulus was a brief suprathreshold force pulse applied to the tip of the index finger orthogonally to the finger movement. Active movement did not significantly affect discrimination sensitivity on the auditory-haptic stimulus pairs, whereas it significantly improved sensitivity in the case of the haptic stimulus pair, demonstrating a key role for motor command information in temporal sensitivity in the haptic system. Points of subjective simultaneity were by-and-large coincident with physical simultaneity, with one striking exception in the passive condition with the auditory-haptic stimulus pair. In the latter case, the haptic stimulus had to be presented 45 ms before the auditory stimulus in order to obtain subjective simultaneity. A model is proposed to explain the discrimination performance.

3.
Perceived onset simultaneity of stimuli with unequal durations.
P Jaśkowski 《Perception》1991,20(6):715-726
Temporal-order judgment was investigated for a pair of visual stimuli with different durations in order to check whether offset asynchrony can disturb the perception of the order/simultaneity of onset. In experiment 1 the point of subjective simultaneity was estimated by the method of adjustment. The difference in duration of the two stimuli in the pair was either 0 or 50 ms. It was found that the subject shifts the onset of the shorter stimulus towards the offset of the longer one to obtain a satisfying impression of simultaneity even though the subject was asked to ignore the events concerning the stimulus offset. In experiments 2 and 3 the method of constant stimuli was applied. Both experiments indicate that subjects, in spite of instruction, take into account the offset asynchrony in their judgment.

4.
Audiovisual phenomenal causality
We report three experiments in which visual or audiovisual displays depicted a surface (target) set into motion shortly after one or more events occurred. A visual motion was used as an initial event, followed directly either by the target motion or by one of three marker events: a collision sound, a blink of the target stimulus, or the blink together with the sound. The delay between the initial event and the onset of the target motion was varied systematically. The subjects had to rate the degree of perceived causality between these events. The results of the first experiment showed a systematic decline of causality judgments with an increasing time delay. Causality judgments increased when additional auditory or visual information marked the onset of the target motion. Visual blinks of the target and auditory clacks produced similar causality judgments. The second experiment tested several models of audiovisual causal processing by varying the position of the sound within the visual delay period. No systematic effect of the sound position occurred. The third experiment showed a subjective shortening of delays filled by a clack sound, as compared with unfilled delays. However, this shortening cannot fully explain the increased tolerance for delays containing the clack sound. Taken together, the results are consistent with the interpretation that the main source of the causality judgments in our experiments is the impression of a plausible unitary event and that perfect synchrony is not necessary in this case.

5.
Synchrony perception for audio-visual stimulus pairs is typically studied by using temporal order judgment (TOJ) or synchrony judgment (SJ) tasks. Research has shown that estimates of the point of subjective simultaneity (PSS) obtained using these two methods do not necessarily correspond. Here, we investigate the hypothesis that the PSS estimate obtained in a TOJ task is shifted in the direction of the most sensitive part of the synchrony judgment curve, as obtained in an SJ task. The results confirm that criterion shifts in the TOJ task move the PSS toward regions of the audio-visual temporal interval continuum where discriminations are most sensitive.
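The PSS and JND estimates compared across these studies are commonly obtained by fitting a cumulative Gaussian to the proportion of "visual first" responses in a TOJ task. A minimal sketch with synthetic, noise-free data (the SOA values and parameters are illustrative, and the 0.6745·σ convention for the JND is only one of several in use):

```python
import numpy as np
from scipy.optimize import curve_fit
from scipy.stats import norm

def toj_curve(soa, pss, sigma):
    """Probability of responding 'visual first' as a cumulative Gaussian
    of the stimulus onset asynchrony (ms)."""
    return norm.cdf(soa, loc=pss, scale=sigma)

# Synthetic, noise-free proportions generated from known parameters
soas = np.array([-240, -120, -60, 0, 60, 120, 240], dtype=float)
p_visual_first = toj_curve(soas, pss=30.0, sigma=70.0)

# Fit recovers the generating parameters; with real data the proportions
# would be binomial estimates and a psychometric-fitting package is typical
(pss_hat, sigma_hat), _ = curve_fit(toj_curve, soas, p_visual_first,
                                    p0=(0.0, 50.0))
jnd = norm.ppf(0.75) * sigma_hat  # half the 25%-75% interval, ~0.6745 * sigma
```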

6.
Audio-visual simultaneity judgments
The relative spatiotemporal correspondence between sensory events affects multisensory integration across a variety of species; integration is maximal when stimuli in different sensory modalities are presented from approximately the same position at about the same time. In the present study, we investigated the influence of spatial and temporal factors on audio-visual simultaneity perception in humans. Participants made unspeeded simultaneous versus successive discrimination responses to pairs of auditory and visual stimuli presented at varying stimulus onset asynchronies from either the same or different spatial positions using either the method of constant stimuli (Experiments 1 and 2) or psychophysical staircases (Experiment 3). The participants in all three experiments were more likely to report the stimuli as being simultaneous when they originated from the same spatial position than when they came from different positions, demonstrating that the apparent perception of multisensory simultaneity is dependent on the relative spatial position from which stimuli are presented.
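The psychophysical staircases of Experiment 3 adjust the asynchrony trial by trial based on the last response. A minimal 1-up/1-down sketch, with a deterministic stand-in observer whose 100 ms simultaneity boundary is hypothetical (a real staircase would face a noisy observer and average the reversal points):

```python
def run_staircase(judge, start=300.0, step=20.0, n_trials=60):
    """1-up/1-down staircase on the audio-visual SOA magnitude (ms):
    shrink the SOA after a 'successive' response, grow it after a
    'simultaneous' one, homing in on the simultaneity boundary."""
    soa = start
    for _ in range(n_trials):
        if judge(soa):      # judged simultaneous -> make discrimination harder
            soa += step
        else:               # judged successive -> make it easier
            soa -= step
    return soa

# Deterministic stand-in observer: 'simultaneous' whenever |SOA| < 100 ms
boundary = run_staircase(lambda soa: abs(soa) < 100.0)
```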

7.
The efficient prediction of the behavior of others requires the recognition of their actions and an understanding of their action goals. In humans, this process is fast and extremely robust, as demonstrated by classical experiments showing that human observers reliably judge causal relationships and attribute interactive social behavior to strongly simplified stimuli consisting of simple moving geometrical shapes. While psychophysical experiments have identified critical visual features that determine the perception of causality and agency from such stimuli, the underlying detailed neural mechanisms remain largely unclear, and it is an open question why humans developed this advanced visual capability at all. We created pairs of naturalistic and abstract stimuli of hand actions that were exactly matched in terms of their motion parameters. We show that varying critical stimulus parameters for both stimulus types leads to very similar modulations of the perception of causality. However, the additional form information about the hand shape and its relationship with the object supports more fine-grained distinctions for the naturalistic stimuli. Moreover, we show that a physiologically plausible model for the recognition of goal-directed hand actions reproduces the observed dependencies of causality perception on critical stimulus parameters. These results support the hypothesis that selectivity for abstract action stimuli might emerge from the same neural mechanisms that underlie the visual processing of natural goal-directed action stimuli. Furthermore, the model proposes specific detailed neural circuits underlying this visual function, which can be evaluated in future experiments.

8.
The thesis that we can visually perceive causal relations is distinct from the thesis that visual experiences can represent causal relations. I defend the latter thesis about visual experience, and argue that although they are suggestive, the data provided by Albert Michotte's experiments on perceptual causality do not establish this thesis. Turning to the perception of causality, I defend the claim that we can perceive causation against the objection that its arcane features are unlikely to be represented in experience.

9.
Directed attention and perception of temporal order.
The present research examined the effects of directed attention on speed of information transmission in the visual system. Ss judged the temporal order of 2 stimuli while directing attention toward 1 of the stimuli or away from both stimuli. Perception of temporal order was influenced by directed attention: Given equal onset times, the attended stimulus appeared to occur before the unattended stimulus. Direction of attention also influenced the perception of simultaneity. The findings support the notion that attention affects the speed of transmission of information in the visual system. To account for the pattern of temporal order and simultaneity judgments, a model is proposed in which the temporal profile of visual responses is affected by directed attention.

10.
When making decisions as to whether or not to bind auditory and visual information, temporal and stimulus factors both contribute to the presumption of multimodal unity. In order to study the interaction between these factors, we conducted an experiment in which auditory and visual stimuli were placed in competitive binding scenarios, whereby an auditory stimulus was assigned to either a primary or a secondary anchor in a visual context (VAV) or a visual stimulus was assigned to either a primary or secondary anchor in an auditory context (AVA). Temporal factors were manipulated by varying the onset of the to-be-bound stimulus in relation to the two anchors. Stimulus factors were manipulated by varying the magnitudes of the visual (size) and auditory (intensity) signals. The results supported the dominance of temporal factors in auditory contexts, in that effects of time were stronger in AVA than in VAV contexts, and stimulus factors in visual contexts, in that effects of magnitude were stronger in VAV than in AVA contexts. These findings indicate the precedence for temporal factors, with particular reliance on stimulus factors when the to-be-assigned stimulus was temporally ambiguous. Stimulus factors seem to be driven by high-magnitude presentation rather than cross-modal congruency. The interactions between temporal and stimulus factors, modality weighting, discriminability, and object representation highlight some of the factors that contribute to audio–visual binding.

11.
Sætrevik, B. (2010). The influence of visual information on auditory lateralization. Scandinavian Journal of Psychology. The classic McGurk study showed that presentation of one syllable in the visual modality simultaneous with a different syllable in the auditory modality creates the perception of a third, not presented syllable. The current study presented dichotic syllable pairs (one in each ear) simultaneously with video clips of a mouth pronouncing the syllables from one of the ears, or pronouncing a syllable that was not part of the dichotic pair. When asked to report the auditory stimuli, responses were shifted towards selecting the auditory stimulus from the side that matched the visual stimulus.

12.
When an audio-visual event is perceived in the natural environment, a physical delay will always occur between the arrival of the leading visual component and that of the trailing auditory component. This natural timing relationship suggests that the point of subjective simultaneity (PSS) should occur at an auditory delay greater than or equal to 0 msec. A review of the literature suggests that PSS estimates derived from a temporal order judgment (TOJ) task differ from those derived from a synchrony judgment (SJ) task, with (unnatural) auditory-leading PSS values reported mainly for the TOJ task. We report data from two stimulus types that differed in terms of complexity, namely (1) a flash and a click and (2) a bouncing ball and an impact sound. The same participants judged the temporal order and synchrony of both stimulus types, using three experimental methods: (1) a TOJ task with two response categories ("audio first" or "video first"), (2) an SJ task with two response categories ("synchronous" or "asynchronous"; SJ2), and (3) an SJ task with three response categories ("audio first," "synchronous," or "video first"; SJ3). Both stimulus types produced correlated PSS estimates with the SJ tasks, but the estimates from the TOJ procedure were uncorrelated with those obtained from the SJ tasks. These results suggest that the SJ task should be preferred over the TOJ task when the primary interest is in perceived audio-visual synchrony.
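The criterion-dependence of TOJ-derived PSS estimates suggested here can be illustrated with a single-criterion, signal-detection-style observer; the sketch below is an assumption for illustration, not the authors' model, and all values are hypothetical:

```python
from math import erf, sqrt

def norm_cdf(z):
    """Standard normal CDF."""
    return 0.5 * (1.0 + erf(z / sqrt(2.0)))

def p_audio_first(soa, sigma, criterion, bias=0.0):
    """P('audio first') for a single-criterion TOJ observer: respond
    'audio first' whenever the internal asynchrony estimate
    x ~ N(soa + bias, sigma) exceeds the response criterion
    (soa > 0 means the auditory component leads)."""
    return 1.0 - norm_cdf((criterion - soa - bias) / sigma)

# The measured PSS (the SOA where P crosses 0.5) sits at criterion - bias,
# so a shift of the response criterion moves the TOJ PSS one-for-one,
# while a two-criterion SJ window's midpoint need not move at all.
half_point = p_audio_first(30.0, 60.0, criterion=30.0)
```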

13.
The features of perceived objects are processed in distinct neural pathways, which call for mechanisms that integrate the distributed information into coherent representations (the binding problem). Recent studies of sequential effects have demonstrated feature binding not only in perception, but also across (visual) perception and action planning. We investigated whether comparable effects can be obtained in and across auditory perception and action. The results from two experiments revealed effects indicative of spontaneous integration of auditory features (pitch and loudness, pitch and location), as well as evidence for audio-manual stimulus-response integration. Even though integration takes place spontaneously, features related to task-relevant stimulus or response dimensions are more likely to be integrated. Moreover, integration seems to follow a temporal overlap principle, with features coded close in time being more likely to be bound together. Taken together, the findings are consistent with the idea of episodic event files integrating perception and action plans.

14.
This study used an experimental approach to examine the potential influence of a product's spatial position and the syllable length of its spoken price on consumers' price perception under unimodal (visual or auditory) and bimodal audiovisual presentation. The results showed that products presented on the right side of space, and prices played with longer syllables, raised participants' price perception. Under the short-syllable condition, price perception for products on the right was significantly higher than for those on the left, whereas under the long-syllable condition the two sides did not differ significantly. Relative to unimodal presentation, a redundancy gain emerged when congruent information was presented across the visual and auditory channels, with faster judgments, supporting the coactivation-model hypothesis; the slowing observed under incongruent audiovisual conditions supported the predictive coding model.
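The coactivation account invoked in this abstract is standardly tested against Miller's race-model inequality on reaction-time distributions: under a pure race between modalities, the redundant-signal CDF can never exceed the sum of the unimodal CDFs. A minimal sketch with fabricated reaction-time samples (all distribution parameters are illustrative):

```python
import numpy as np

def ecdf(samples, t):
    """Empirical CDF of reaction times evaluated at time t (ms)."""
    return float(np.mean(np.asarray(samples, dtype=float) <= t))

def race_model_bound(t, rts_audio, rts_visual):
    """Miller's race-model inequality: without coactivation,
    F_AV(t) <= F_A(t) + F_V(t) at every t (capped at 1)."""
    return min(1.0, ecdf(rts_audio, t) + ecdf(rts_visual, t))

# Fabricated samples: redundant audiovisual trials much faster than either
# unimodal condition, as a coactivation account would predict
rng = np.random.default_rng(0)
rts_a = rng.normal(320.0, 40.0, 1000)
rts_v = rng.normal(340.0, 40.0, 1000)
rts_av = rng.normal(250.0, 30.0, 1000)

t = 260.0
violation = ecdf(rts_av, t) > race_model_bound(t, rts_a, rts_v)
```

A violation of the bound at some t rules out the race explanation of the redundancy gain and is the usual evidence offered for coactivation.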

16.
In this study the ability of newborn infants to learn arbitrary auditory–visual associations in the absence versus presence of amodal (redundant) and contingent information was investigated. In the auditory-noncontingent condition 2-day-old infants were familiarized to two alternating visual stimuli (differing in colour and orientation), each accompanied by its ‘own’ sound: when the visual stimulus was presented the sound was continuously presented, independently of whether the infant looked at the visual stimulus. In the auditory-contingent condition the auditory stimulus was presented only when the infant looked at the visual stimulus: thus, presentation of the sound was contingent upon infant looking. On the post-familiarization test trials attention recovered strongly to a novel auditory–visual combination in the auditory-contingent condition, but remained low, and indistinguishable from attention to the familiar combination, in the auditory-noncontingent condition. These findings are a clear demonstration that newborn infants’ learning of arbitrary auditory–visual associations is constrained and guided by the presence of redundant (amodal) contingent information. The findings give strong support to Bahrick’s theory of early intermodal perception.

17.
Choi H  Scholl BJ 《Perception》2006,35(3):385-399
In simple dynamic events we can easily perceive not only motion, but also higher-level properties such as causality, as when we see one object collide with another. Several researchers have suggested that such causal perception is an automatic and stimulus-driven process, sensitive only to particular sorts of visual information, and a major research project has been to uncover the nature of these visual cues. Here, rather than investigating what information affects causal perception, we instead explore the temporal dynamics of when certain types of information are used. Surprisingly, we find that certain visual events can determine whether we perceive a collision in an ambiguous situation even when those events occur after the moment of potential 'impact' in the putative collision has already passed. This illustrates a type of postdictive perception: our conscious perception of the world is not an instantaneous moment-by-moment construction, but rather is formed by integrating information presented within short temporal windows, so that new information which is obtained can influence the immediate past in our conscious awareness. Such effects have been previously demonstrated for low-level motion phenomena, but the present results demonstrate that postdictive processes can influence higher-level event perception. These findings help to characterize not only the 'rules' of causal perception, but also the temporal dynamics of how and when those rules operate.

18.
The present study examined whether infant-directed (ID) speech facilitates intersensory matching of audio–visual fluent speech in 12-month-old infants. German-learning infants’ audio–visual matching ability of German and French fluent speech was assessed by using a variant of the intermodal matching procedure, with auditory and visual speech information presented sequentially. In Experiment 1, the sentences were spoken in an adult-directed (AD) manner. Results showed that 12-month-old infants did not exhibit a matching performance for the native, nor for the non-native language. However, Experiment 2 revealed that when ID speech stimuli were used, infants did perceive the relation between auditory and visual speech attributes, but only in response to their native language. Thus, the findings suggest that ID speech might have an influence on the intersensory perception of fluent speech and shed further light on multisensory perceptual narrowing.

19.
Vatakis A  Spence C 《Perception》2008,37(1):143-160
Research has shown that inversion is more detrimental to the perception of faces than to the perception of other types of visual stimuli. Inverting a face results in an impairment of configural information processing that leads to slowed early face processing and reduced accuracy when performance is tested in face recognition tasks. We investigated the effects of inverting speech and non-speech stimuli on audiovisual temporal perception. Upright and inverted audiovisual video clips of a person uttering syllables (experiments 1 and 2), playing musical notes on a piano (experiment 3), or a rhesus monkey producing vocalisations (experiment 4) were presented. Participants made unspeeded temporal-order judgments regarding which modality stream (auditory or visual) appeared to have been presented first. Inverting the visual stream did not have any effect on the sensitivity of temporal discrimination responses in any of the four experiments, thus implying that audiovisual temporal integration is resilient to the effects of orientation in the picture plane. By contrast, the point of subjective simultaneity differed significantly as a function of orientation only for the audiovisual speech stimuli but not for the non-speech stimuli or monkey calls. That is, smaller auditory leads were required for the inverted than for the upright-visual speech stimuli. These results are consistent with the longer processing latencies reported previously when human faces are inverted and demonstrate that the temporal perception of dynamic audiovisual speech can be modulated by changes in the physical properties of the visual speech (ie by changes in orientation).

20.
Contrary to the predictions of established theory, Schutz and Lipscomb (2007) have shown that visual information can influence the perceived duration of concurrent sounds. In the present study, we deconstruct the visual component of their illusion, showing that (1) cross-modal influence depends on visible cues signaling an impact event (namely, a sudden change of direction concurrent with tone onset) and (2) the illusion is controlled primarily by the duration of post-impact motion. Other aspects of the post-impact motion (distance traveled, velocity, acceleration, and the rate of its change, i.e., its derivative, jerk) play a minor role, if any. Together, these results demonstrate that visual event duration can influence the perception of auditory event duration, but only when stimulus cues are sufficient to give rise to the perception of a causal cross-modal relationship. This refined understanding of the illusion’s visual aspects is helpful in comprehending why it contrasts so markedly with previous research on cross-modal integration, demonstrating that vision does not appreciably influence auditory judgments of event duration (Walker & Scott, 1981).


Copyright © Beijing Qinyun Technology Development Co., Ltd.  京ICP备09084417号