Similar Articles
20 similar articles found.
1.
Ecker AJ, Heller LM. Perception, 2005, 34(1): 59-75
We carried out two experiments to measure the combined perceptual effect of visual and auditory information on the perception of a moving object's trajectory. All visual stimuli consisted of a perspective rendering of a ball moving in a three-dimensional box. Each video was paired with one of three sound conditions: silence, the sound of a ball rolling, or the sound of a ball hitting the ground. We found that the sound condition influenced whether observers were more likely to perceive the ball as rolling back in depth on the floor of the box or jumping in the frontal plane. In a second experiment we found further evidence that the reported shift in path perception reflects perceptual experience rather than a deliberate decision process. Instead of directly judging the ball's path, observers judged the ball's speed. Speed is an indirect measure of the perceived path because, as a result of the geometry of the box and the viewing angle, a rolling ball would travel a greater distance than a jumping ball in the same time interval. Observers did judge a ball paired with a rolling sound as faster than a ball paired with a jumping sound. This auditory-visual interaction provides an example of a unitary percept arising from multisensory input.

2.
Two experiments were performed under visual-only and visual-auditory discrepancy conditions (dubs) to assess observers' abilities to read speech information on a face. In the first experiment, identification and multiple choice testing were used. In addition, the relation between visual and auditory phonetic information was manipulated and related to perceptual bias. In the second experiment, the "compellingness" of the visual-auditory discrepancy as a single speech event was manipulated. Subjects also rated the confidence they had that their perception of the lipped word was accurate. Results indicated that competing visual information exerted little effect on auditory speech recognition, but visual speech recognition was substantially interfered with when discrepant auditory information was present. The extent of auditory bias was found to be related to the abilities of observers to read speech under nondiscrepancy conditions, the magnitude of the visual-auditory discrepancy, and the compellingness of the visual-auditory discrepancy as a single event. Auditory bias during speech was found to be a moderately compelling conscious experience, and not simply a case of confused responding or guessing. Results were discussed in terms of current models of perceptual dominance and related to results from modality discordance during space perception.

3.
Synchronization of finger taps with periodically flashing visual stimuli is known to be much more variable than synchronization with an auditory metronome. When one of these rhythms is the synchronization target and the other serves as a distracter at various temporal offsets, strong auditory dominance is observed. However, it has recently been shown that visuomotor synchronization improves substantially with moving stimuli such as a continuously bouncing ball. The present study pitted a bouncing ball against an auditory metronome in a target–distracter synchronization paradigm, with the participants being auditory experts (musicians) and visual experts (video gamers and ball players). Synchronization was still less variable with auditory than with visual target stimuli in both groups. For musicians, auditory stimuli tended to be more distracting than visual stimuli, whereas the opposite was the case for the visual experts. Overall, there was no main effect of distracter modality. Thus, a distracting spatiotemporal visual rhythm can be as effective as a distracting auditory rhythm in its capacity to perturb synchronous movement, but its effectiveness also depends on modality-specific expertise.

4.
5.
In musical performance, bodily gestures play an important role in communicating expressive intentions to audiences. Although previous studies have demonstrated that visual information can have an effect on the perceived expressivity of musical performances, the investigation of audiovisual interactions has been held back by the technical difficulties associated with the generation of controlled, mismatching stimuli. With the present study, we aimed to address this issue by utilizing a novel method in order to generate controlled, balanced stimuli that comprised both matching and mismatching bimodal combinations of different expressive intentions. The aim of Experiment 1 was to investigate the relative contributions of auditory and visual kinematic cues in the perceived expressivity of piano performances, and in Experiment 2 we explored possible crossmodal interactions in the perception of auditory and visual expressivity. The results revealed that although both auditory and visual kinematic cues contribute significantly to the perception of overall expressivity, the effect of visual kinematic cues appears to be somewhat stronger. These results also provide preliminary evidence of crossmodal interactions in the perception of auditory and visual expressivity. In certain performance conditions, visual cues had an effect on the ratings of auditory expressivity, and auditory cues had a small effect on the ratings of visual expressivity.

6.
This study investigated audiovisual synchrony perception in a rhythmic context, where the sound was not consequent upon the observed movement. Participants judged synchrony between a bouncing point-light figure and an auditory rhythm in two experiments. Two questions were of interest: (1) whether the reference in the visual movement, with which the auditory beat should coincide, relies on a position or a velocity cue; (2) whether the figure form and motion profile affect synchrony perception. Experiment 1 required synchrony judgment with regard to the same (lowest) position of the movement in four visual conditions: two figure forms (human or non-human) combined with two motion profiles (human or ball trajectory). Whereas figure form did not affect synchrony perception, the point of subjective simultaneity differed between the two motions, suggesting that participants adopted the peak velocity in each downward trajectory as their visual reference. Experiment 2 further demonstrated that, when judgment was required with regard to the highest position, the maximal synchrony response was considerably low for ball motion, which lacked a peak velocity in the upward trajectory. The finding of peak velocity as a cue parallels results of visuomotor synchronization tasks employing biological stimuli, suggesting that synchrony judgment with rhythmic motions relies on the perceived visual beat.

7.
The tendency for observers to overestimate slant is not simply a visual illusion but can also occur with another sense, such as proprioception, as in the case of overestimation of self-body tilt. In the present study, distortion in the perception of body tilt was examined as a function of gender and multisensory spatial information. We used a full-body-tilt apparatus to test when participants experienced being tilted by 45 degrees, with visual and auditory cues present or absent. Body tilt was overestimated in all conditions, with the largest bias occurring when there were no visual or auditory cues. Both visual and auditory information independently improved performance. We also found a gender difference, with women exhibiting more bias in the absence of auditory information and more improvement when auditory information was added. The findings support the view that perception of body tilt is multisensory and that women more strongly utilize auditory information in such multisensory spatial judgments.

8.
It has been widely shown that human observers are able to perceive lifted weight from the observation of a point-light display of the lifter's action. In the experiments reported here, the kinematic information used by observers to perceive a lifted weight was determined. In Experiment 1, observers (N = 30) were able to identify weights (5, 10, 15, 20, and 25 kg) successfully by observing only the lift phase of the action. Other procedures, such as walking while holding the weight and placing the weight on a table, did not result in significantly improved estimations. In Experiment 2, the kinematic patterns used by 4 lifters with weights varying from 5 to 25 kg were examined. Changes in weight lifted resulted in changes in lift velocity, hip angle, and dwell time. In Experiment 3, in which 15 observers participated, these 3 kinematic variables were experimentally manipulated. The results indicated that observation was most significantly influenced by variations in lift velocity. The results are discussed in relation to kinematic specification of dynamics and heuristic approaches.

9.
This study investigated whether explicit beat induction in the auditory, visual, and audiovisual (bimodal) modalities aided the perception of weakly metrical auditory rhythms, and whether it reinforced attentional entrainment to the beat of these rhythms. The visual beat-inducer was a periodically bouncing point-light figure, which aimed to examine whether an observed rhythmic human movement could induce a beat that would influence auditory rhythm perception. In two tasks, participants listened to three repetitions of an auditory rhythm that were preceded and accompanied by (1) an auditory beat, (2) a bouncing point-light figure, (3) a combination of (1) and (2) synchronously, or (4) a combination of (1) and (2), with the figure moving in anti-phase to the auditory beat. Participants reproduced the auditory rhythm subsequently (Experiment 1), or detected a possible temporal change in the third repetition (Experiment 2). While an explicit beat did not improve rhythm reproduction, possibly due to the syncopated rhythms when a beat was imposed, bimodal beat induction yielded greater sensitivity to a temporal deviant in on-beat than in off-beat positions. Moreover, the beat phase of the figure movement determined where on-beat accents were perceived during bimodal induction. Results are discussed with regard to constrained beat induction in complex auditory rhythms, visual modulation of auditory beat perception, and possible mechanisms underlying the preferred visual beat consisting of rhythmic human motions.

10.
Rhythmically bouncing a ball with a racket was investigated and modeled with a nonlinear map. Model analyses provided a variable defining a dynamically stable solution that obviates computationally expensive corrections. Three experiments evaluated whether dynamic stability is optimized and what perceptual support is necessary for stable behavior. Two hypotheses were tested: (a) Performance is stable if racket acceleration is negative at impact, and (b) variability is lowest at an impact acceleration between -4 and -1 m/s². In Experiment 1 participants performed the task, eyes open or closed, bouncing a ball confined to a 1-dimensional trajectory. Experiment 2 eliminated constraints on racket and ball trajectory. Experiment 3 excluded visual or haptic information. Movements were performed with negative racket accelerations in the range of highest stability. Performance with eyes closed was more variable, leaving acceleration unaffected. With haptic information, performance was more stable than with visual information alone.

11.
Research has shown that auditory speech recognition is influenced by the appearance of a talker's face, but the actual nature of this visual information has yet to be established. Here, we report three experiments that investigated visual and audiovisual speech recognition using color, gray-scale, and point-light talking faces (which allowed comparison with the influence of isolated kinematic information). Auditory and visual forms of the syllables /ba/, /bi/, /ga/, /gi/, /va/, and /vi/ were used to produce auditory, visual, congruent, and incongruent audiovisual speech stimuli. Visual speech identification and visual influences on identifying the auditory components of congruent and incongruent audiovisual speech were identical for color and gray-scale faces and were much greater than for point-light faces. These results indicate that luminance, rather than color, underlies visual and audiovisual speech perception and that this information is more than the kinematic information provided by point-light faces. Implications for processing visual and audiovisual speech are discussed.

12.
Humans are able to perceive unique types of biological motion presented as point-light displays (PLDs). Thirty years ago, Runeson and Frykholm (Human Perception and Performance, 7(4), 733, 1981; Journal of Experimental Psychology: General, 112(4), 585, 1983) studied observers' perceptions of weights lifted by actors and identified that the kinematic information in a PLD is sufficient for an observer to form an accurate perception of the object weight. However, research has also shown that extrinsic object size characteristics also influence the perception of object weight (Gordon, Forssberg, Johansson, & Westling in Experimental Brain Research, 83(3), 477–482, 1991). This study addresses the relative contributions of these two types of visual information to observers' perceptions of lifted weight, through an experiment in which participants viewed an actor lifting boxes of various sizes (small, medium, or large) and weights (25, 50, or 75 lb) under four PLD conditions—box-at-rest, moving-box, actor-only, and actor-and-box—and one full-vision video condition, and then provided a weight estimate for each box lifted. The results indicated that lift kinematics and box size contributed independently to weight perception. Interestingly, the most robust weight differentiations were elicited in the conditions in which both types of information were presented concurrently, despite their converse natures. Furthermore, full-vision video presentation, which contained visual information beyond kinematics and object information, elicited the best estimates.

13.
The present research addresses the question of how visual predictive information and implied causality affect audio–visual synchrony perception. Previous research has shown a systematic shift in the likelihood of observers to accept audio-leading stimulus pairs as being apparently simultaneous in variants of audio–visual stimulus pairs that differ in (1) the amount of visual predictive information available and (2) the apparent causal relation between the auditory and visual components. An experiment was designed to separate the predictability and causality explanations, and the results indicated that shifts in subjective simultaneity were explained completely by changes in the implied causal relations in the stimuli and that predictability had no added value. Together with earlier findings, these results further indicate that the observed shifts in subjective simultaneity due to causal relations among auditory and visual events do not reflect a mere change in response strategy, but rather result from early multimodal integration processes in event perception.

14.
Mitroff SR, Scholl BJ. Perception, 2004, 33(10): 1267-1273
Because of the massive amount of incoming visual information, perception is fundamentally selective. We are aware of only a small subset of our visual input at any given moment, and a great deal of activity can occur right in front of our eyes without reaching awareness. While previous work has shown that even salient visual objects can go unseen, here we demonstrate the opposite pattern, wherein observers perceive stimuli which are not physically present. In particular, we show in two motion-induced blindness experiments that unseen objects can momentarily reenter awareness when they physically disappear: in some situations, you can see the disappearance of something you can't see. Moreover, when a stimulus changes outside of awareness in this situation and then physically disappears, observers momentarily see the altered version, thus perceiving properties of an object that they had never seen before, after that object is already gone. This phenomenon of 'perceptual reentry' yields new insights into the relationship between visual memory and conscious awareness.

15.
Identity perception often takes place in multimodal settings, where perceivers have access to both visual (face) and auditory (voice) information. Despite this, identity perception is usually studied in unimodal contexts, where face and voice identity perception are modelled independently from one another. In this study, we asked whether and how much auditory and visual information contribute to audiovisual identity perception from naturally-varying stimuli. In a between-subjects design, participants completed an identity sorting task with either dynamic video-only, audio-only or dynamic audiovisual stimuli. In this task, participants were asked to sort multiple, naturally-varying stimuli from three different people by perceived identity. We found that identity perception was more accurate for video-only and audiovisual stimuli compared with audio-only stimuli. Interestingly, there was no difference in accuracy between video-only and audiovisual stimuli. Auditory information nonetheless played a role alongside visual information as audiovisual identity judgements per stimulus could be predicted from both auditory and visual identity judgements, respectively. While the relationship was stronger for visual information and audiovisual information, auditory information still uniquely explained a significant portion of the variance in audiovisual identity judgements. Our findings thus align with previous theoretical and empirical work that proposes that, compared with faces, voices are an important but relatively less salient and a weaker cue to identity perception. We expand on this work to show that, at least in the context of this study, having access to voices in addition to faces does not result in better identity perception accuracy.

16.
The mechanical events of bouncing and breaking are acoustically specified by single versus multiple damped quasi-periodic pulse patterns, with an initial noise burst in the case of breaking. Subjects show high accuracy in categorizing natural tokens of bouncing and breaking glass as well as tokens constructed by adjusting only the temporal patterns of components, leaving their spectral properties constant. Differences in average spectral frequency are, therefore, not necessary for perceiving this contrast, though differences in spectral consistency over successive pulses may be important. Initial noise corresponding to glass rupture appears unnecessary to categorize breaking and bouncing. The data indicate that higher order temporal properties of the acoustic signal provide information for the auditory perception of these events.

17.
Choi H, Scholl BJ. Perception, 2006, 35(3): 385-399
In simple dynamic events we can easily perceive not only motion, but also higher-level properties such as causality, as when we see one object collide with another. Several researchers have suggested that such causal perception is an automatic and stimulus-driven process, sensitive only to particular sorts of visual information, and a major research project has been to uncover the nature of these visual cues. Here, rather than investigating what information affects causal perception, we instead explore the temporal dynamics of when certain types of information are used. Surprisingly, we find that certain visual events can determine whether we perceive a collision in an ambiguous situation even when those events occur after the moment of potential 'impact' in the putative collision has already passed. This illustrates a type of postdictive perception: our conscious perception of the world is not an instantaneous moment-by-moment construction, but rather is formed by integrating information presented within short temporal windows, so that new information which is obtained can influence the immediate past in our conscious awareness. Such effects have been previously demonstrated for low-level motion phenomena, but the present results demonstrate that postdictive processes can influence higher-level event perception. These findings help to characterize not only the 'rules' of causal perception, but also the temporal dynamics of how and when those rules operate.

18.
The information that people use to perceive whether a tool is suitable for a certain task depends on what is available at a given time. Visually scanning a tool and wielding it each provide information about the functional attributes of the tool. In Experiment 1, we investigated the relative contributions of vision and dynamic touch to perceiving the suitability of various tools for various tasks. The results show that, when both vision and dynamic touch are available, the visual information dominates. When limited to dynamic touch, ratings of suitability are constrained by the inertial properties of the tool, and the inertial properties that are exploited depend on the task. In Experiment 2, we asked whether the manner in which a tool is manipulated in exploration depends on the task for which it is being evaluated. The results suggest that tools are manipulated in ways that reflect intentions to perceive particular affordances. Exploratory movements sometimes mimic performatory movements.

19.
Although visual perception traditionally has been considered to be impenetrable by non-visual information, there are a rising number of reports discussing cross-modal influences on visual perception. In two experiments, we investigated how coinciding vibrotactile stimulation affects the perception of two discs that move toward each other, superimpose in the center of the screen, and then move apart. Whereas two discs streaming past each other was the dominant impression when the visual event was presented in isolation, a brief coinciding vibrotactile stimulation at the moment of overlap biased the visual impression toward two discs bouncing off each other (Experiment 1). Further, the vibrotactile stimulation actually changed perceptual processing by reducing the amount of perceived overlap between the discs (Experiment 2), which has been demonstrated to be associated with a higher proportion of bouncing impressions. We propose that tactile-induced quantitative changes in the visual percept might alter the quality of the visual percept (from streaming to bouncing), thereby adding to the understanding of how cross-modal information interacts with early visual perception and how this interaction influences subsequent visual impressions.

20.
To examine the mechanism of visual perception of human-like body postures, we conducted a posture recognition task, a questionnaire survey, and the Interpersonal Reactivity Index (IRI). The majority of participants perceived the pseudo-posture as a human posture in the early stage (78%), but only 66% of them reported imagining bodily movement. These results suggest that most observers perceive pseudo-postures as human postures in the early stage of perception, but that this human posture perception does not necessarily lead to the visualisation of bodily movement. Among the participants who perceived the pseudo-posture as a human posture regardless of perception stage, those who imagined bodily movement (64%) scored significantly higher on the Fantasy subscale of the IRI than those who did not. Highly empathic participants are thus more likely to detect a kinematic relation in the pseudo-postures.
