Similar Documents
20 similar documents found (search time: 15 ms)
1.
We examined how spatial and temporal characteristics of the perception of self-motion, generated by constant-velocity visual motion, were reflected in the orientation of the head and whole body of young adults standing in a CAVE, a virtual environment that presents wide field-of-view stereo images with context and texture. Center-of-pressure responses from a force plate and the perception of self-motion, reported through the orientation of a hand-held wand, were recorded. The influence of the perception of self-motion on postural kinematics differed depending upon the plane and complexity of visual motion. Postural behaviors generated through the perception of self-motion appeared to contain a confluence of cortically integrated visual and vestibular signals and of other somatosensory inputs. This suggests that spatial representation during motion in the environment is modified by both ascending and descending controls. We infer from these data that motion of the visual surround can be used as a therapeutic tool to influence posture and spatial orientation, particularly in more visually sensitive individuals following central nervous system (CNS) impairment.

2.
We report three experiments designed to investigate the nature of any crossmodal links between audition and touch in sustained endogenous covert spatial attention, using the orthogonal spatial cuing paradigm. Participants discriminated the elevation (up vs. down) of auditory and tactile targets presented to either the left or the right of fixation. In Experiment 1, targets were expected on a particular side in just one modality; the results demonstrated that the participants could spatially shift their attention independently in both audition and touch. Experiment 2 demonstrated that when the participants were informed that targets were more likely to be on one side for both modalities, elevation judgments were faster on that side in both audition and touch. The participants were also able to "split" their auditory and tactile attention, albeit at some cost, when targets in the two modalities were expected on opposite sides. Similar results were also reported in Experiment 3 when participants adopted a crossed-hands posture, thus revealing that crossmodal links in audiotactile attention operate on a representation of space that is updated following posture change. These results are discussed in relation to previous findings regarding crossmodal links in audiovisual and visuotactile covert spatial attentional orienting.

3.
Prior research suggested that pride is recognized only when a head and facial expression (e.g., tilted head with a slight smile) is combined with a postural expression (e.g., expanded body and arm gestures). However, these studies used static photographs. In the present research, participants labeled the emotion conveyed by four dynamic cues to pride, presented as video clips: head and face alone, body posture alone, voice alone, and an expression in which head and face, body posture, and voice were presented simultaneously. Participants attributed pride to the head and face alone, even when postural or vocal information was absent. Pride can be conveyed without body posture or voice.

4.
Studies of auditory localization have revealed that where a subject hears a sound depends on both his perceived head position and the auditory cues at his ears. If an error is induced between his true and registered head posture, errors of corresponding size and time course result in his auditory localization. The presence of visual information prevents the development of postural errors and, consequently, prevents the development of errors in auditory localization as well. These observations are related to the oculogravic illusion and are interpreted as one aspect of the functioning of a spatial reference system involved in the maintenance of the constancies of auditory and visual detection.

5.
Two experiments investigated whether visual cues influence spatial reference frame selection for locations learned through touch. Participants experienced visual cues emphasizing specific environmental axes and later learned objects through touch. Visual cues were manipulated and haptic learning conditions were held constant. Imagined perspective taking when recalling touched objects was best from perspectives aligned with visually-defined axes, providing evidence for cross-sensory reference frame transfer. These findings advance spatial memory theory by demonstrating that multimodal spatial information can be integrated within a common spatial representation.

6.
Despite previous failures to identify visual-upon-auditory spatial-cuing effects, recent studies have demonstrated that the abrupt onset of a lateralized visual stimulus triggers a shift of spatial attention that affects auditory judgments. Nevertheless, whether a centrally presented visual stimulus orients auditory attention remained unclear. The present study investigated whether centrally presented gaze cues trigger a reflexive shift of attention that affects auditory judgments. Participants fixated on a schematic face in which the eyes looked left or right (the cue). A target sound was then presented to the left or right of the cue. Participants judged the direction of the target as quickly as possible. Even though participants were told that the gaze direction did not predict the direction of the target, response times were significantly faster when the gaze was in the target direction than when it was in the non-target direction. These findings provide initial evidence for visual-upon-auditory spatial-cuing effects produced by centrally presented cues, suggesting that a reflexive crossmodal shift of attention does occur with a centrally presented visual stimulus.

7.
Spence C, Walton M. Acta Psychologica, 2005, 118(1-2): 47-70
We investigated the extent to which people can selectively ignore distracting vibrotactile information when performing a visual task. In Experiment 1, participants made speeded elevation discrimination responses (up vs. down) to a series of visual targets presented from one of two eccentricities on either side of central fixation, while simultaneously trying to ignore task-irrelevant vibrotactile distractors presented independently to the finger (up) vs. thumb (down) of either hand. Participants responded significantly more slowly, and somewhat less accurately, when the elevation of the vibrotactile distractor was incongruent with that of the visual target than when they were presented from the same (i.e., congruent) elevation. This crossmodal congruency effect was significantly larger when the visual and tactile stimuli appeared on the same side of space than when they appeared on different sides, although the relative eccentricity of the two stimuli within the hemifield (i.e., same vs. different) had little effect on performance. In Experiment 2, participants who crossed their hands over the midline showed a very different pattern of crossmodal congruency effects from participants who adopted an uncrossed-hands posture. Our results suggest that both the relative external location and the initial hemispheric projection of the target and distractor stimuli contribute jointly to determining the magnitude of the crossmodal congruency effect when participants have to respond to vision and ignore touch.
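The crossmodal congruency effect described in this abstract is, at its core, a difference of mean response times between incongruent and congruent trials, computed separately for each spatial arrangement. A minimal sketch of that arithmetic, with entirely hypothetical RT values invented for illustration:

```python
# Hypothetical response times (ms) per condition; all numbers are invented
# for illustration, not data from the study.
rts = {
    ("congruent", "same side"): [520, 535, 510, 528],
    ("incongruent", "same side"): [590, 605, 612, 598],
    ("congruent", "different side"): [525, 530, 540, 522],
    ("incongruent", "different side"): [560, 572, 555, 566],
}

def mean(xs):
    return sum(xs) / len(xs)

def congruency_effect(rts, side):
    """Crossmodal congruency effect for one spatial arrangement:
    mean incongruent RT minus mean congruent RT."""
    return mean(rts[("incongruent", side)]) - mean(rts[("congruent", side)])

same = congruency_effect(rts, "same side")
diff = congruency_effect(rts, "different side")
print(same, diff)  # with these invented data: 78.0 and 34.0
```

With these toy numbers the effect is larger on the same side than on different sides, mirroring the spatial modulation the abstract reports.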

8.
Haptic cues from fingertip contact with a stable surface attenuate body sway in subjects even when the contact forces are too small to provide physical support of the body. We investigated how haptic cues derived from contact of a cane with a stationary surface at low force levels aid postural control in sighted and congenitally blind individuals. Five sighted (eyes closed) and five congenitally blind subjects maintained a tandem Romberg stance in five conditions: (1) no cane; (2, 3) touch contact (<2 N of applied force) while holding the cane in a vertical or slanted orientation; and (4, 5) force contact (as much force as desired) in the vertical and slanted orientations. Touch contact of a cane at force levels below those necessary to provide significant physical stabilization was as effective as force contact in reducing postural sway in all subjects, compared to the no-cane condition. A slanted cane was far more effective in reducing postural sway than was a perpendicular cane. Cane use also decreased head displacement of sighted subjects far more than that of blind subjects. These results suggest that head movement control is linked to postural control through gaze stabilization reflexes in sighted subjects; such reflexes are absent in congenitally blind individuals and may account for their higher levels of head displacement.

9.
The authors examined how a conditioned stimulus (CS) that included species-typical cues affected the acquisition and extinction of conditioned sexual responses in male quail (Coturnix japonica). Some subjects were conditioned with a CS that supported sexual responses and included a taxidermic head of a female quail. Others were conditioned with a similar CS that lacked species-typical cues. Pairing the CSs with access to live females increased CS-directed behavior, with the head CS eliciting significantly more responding than the no-head CS. Responding to the head CS persisted during the 42-day, 126-trial extinction phase; responses to the no-head CS extinguished. Responding declined when the cues were removed or the subjects were sexually satiated. Possible functions and mechanisms of these effects are discussed.

10.
In a visual-tactile interference paradigm, subjects judged whether tactile vibrations arose on a finger or thumb (upper vs. lower locations), while ignoring distant visual distractor lights that also appeared in upper or lower locations. Incongruent visual distractors (e.g. a lower light combined with upper touch) disrupt such tactile judgements, particularly when appearing near the tactile stimulus (e.g. on the same side of space as the stimulated hand). Here we show that actively wielding tools can change this pattern of crossmodal interference. When such tools were held in crossed positions (connecting the left hand to the right visual field, and vice versa), the spatial constraints on crossmodal interference reversed, so that visual distractors in the other visual field now disrupted tactile judgements most for a particular hand. This phenomenon depended on active tool-use, developing with increased experience in using the tool. We relate these results to recent physiological and neuropsychological findings.

11.
In two experiments, fear was conditioned to the situational cues in one compartment of a hurdle-jumping apparatus and was then extinguished. Subsequently, either one shock (Experiment 1) or three or nine shocks (Experiment 2) were given in a situation distinctively different from that in which conditioning and extinction had taken place. Although some associative strength between the situational cues and fear was shown to have remained after extinction, in neither experiment did the postextinction-shock treatment increase the fear elicited by these cues: Escape-from-fear performance was no better in the shocked groups than in control groups given no additional shock. Thus, the nonassociative hypothesis which postulates that inflating the value of the representation of the UCS with shock-alone presentations can reinstate the extinguished fear of a stimulus was not supported. Rather, the results showed that, after extinction, an increase in fear of a stimulus depended on further conditioning to that stimulus. The data also indicated that the nonvisual components of the situational cues predominated over the visual component.

12.
Everyday experience involves the continuous integration of information from multiple sensory inputs. Such crossmodal interactions are advantageous since the combined action of different sensory cues can provide information unavailable from their individual operation, reducing perceptual ambiguity and enhancing responsiveness. The behavioural consequences of such multimodal processes and their putative neural mechanisms have been investigated extensively with respect to orienting behaviour and, to a lesser extent, the crossmodal coordination of spatial attention. These operations are concerned mainly with the determination of stimulus location. However, information from different sensory streams can also be combined to assist stimulus identification. Psychophysical and physiological data indicate that these two crossmodal processes are subject to different temporal and spatial constraints both at the behavioural and neuronal level and involve the participation of distinct neural substrates. Here we review the evidence for such a dissociation and discuss recent neurophysiological, neuroanatomical and neuroimaging findings that shed light on the mechanisms underlying crossmodal identification, with specific reference to audio-visual speech perception.

13.
The growth of stability: postural control from a developmental perspective
This study compared central nervous system organizational processes underlying balance in children of three age groups: 15-31 months, 4-6 years, and 7-10 years, using a movable platform capable of antero-posterior (A-P) displacements or dorsi-plantar flexing rotations of the ankle joint. A servo system capable of linking platform rotations to A-P sway angle allowed disruption of ankle joint inputs, to test the effects of incongruent sensory inputs on response patterns. Surface electromyography was used to quantify latency and amplitude of the gastrocnemius, hamstrings, tibialis anterior, and quadriceps muscle responses. Cinematography provided biomechanical analysis of the sway motion. Results demonstrated that while directionally specific response synergies are present in children under the age of six, structured organization of the synergies is not yet fully developed, since variability in timing and amplitude relationships between proximal and distal muscles is high. The transition from immature to mature response patterns was not linear but stage-like, with the greatest variability in the 4- to 6-year-old children. Results from balance tests under altered sensory conditions (eyes closed and/or ankle joint inputs altered) suggested that, with development, a shift in the inputs controlling posture from visual dependence to a more adult-like dependence on a combination of ankle joint and visual inputs occurred in the 4- to 6-year-olds and reached adult form in the 7- to 10-year-old age group. It is proposed that age 4-6 is a transition period in the development of posture control. At this time the nervous system (a) uses visual-vestibular inputs to fine-tune ankle-joint proprioception in preparation for its increased importance in posture control and (b) fine-tunes the structural organization of the postural synergies themselves.

14.
In six experiments, we used the Müller-Lyer illusion to investigate factors in the integration of touch, movement, and spatial cues in haptic shape perception, and in the similarity with the visual illusion. Latencies provided evidence against the hypothesis that scanning times explain the haptic illusion. Distinctive fin effects supported the hypothesis that cue distinctiveness contributes to the illusion, but showed also that it depends on modality-specific conditions, and is not the main factor. Allocentric cues from scanning an external frame (EF) did not reduce the haptic illusion. Scanning elicited downward movements and more negative errors for horizontal convergent figures and more positive errors for vertical divergent figures, suggesting a modality-specific movement effect. But the Müller-Lyer illusion was highly significant for both vertical and horizontal figures. By contrast, instructions to use body-centered reference and to ignore the fins reduced the haptic illusion for vertical figures in touch from 12.60% to 1.7%. In vision, without explicit egocentric reference, instructions to ignore fins did not reduce the illusion to near floor level, though external cues were present. But the visual illusion was reduced to the same level as in touch with instructions that included the use of body-centered cues. The new evidence shows that the same instructions reduced the Müller-Lyer illusion almost to zero in both vision and touch. It suggests that the similarity of the illusions is not fortuitous. The results on touch supported the hypothesis that body-centered spatial reference is involved in integrating inputs from touch and movement for accurate haptic shape perception. The finding that explicit egocentric reference had the same effect on vision suggests that it may be a common factor in the integration of disparate inputs from multisensory sources.

15.
Tracking an audio-visual target involves integrating spatial cues about target position from both modalities. Such sensory cue integration is a developmental process in the brain involving learning, with neuroplasticity as its underlying mechanism. We present a Hebbian learning-based adaptive neural circuit for multi-modal cue integration. The circuit temporally correlates stimulus cues within each modality via intramodal learning, as well as symmetrically across modalities via crossmodal learning, to independently update modality-specific neural weights on a sample-by-sample basis. It is realised as a robotic agent that must orient towards a moving audio-visual target. It continuously learns the best possible weights required for a weighted combination of auditory and visual spatial target directional cues that is directly mapped to robot wheel velocities to elicit an orientation response. Visual directional cues are noise-free and continuous but arise from a relatively narrow receptive field, while auditory directional cues are noisy and intermittent but arise from a relatively wider receptive field. Comparative trials in simulation demonstrate that concurrent intramodal learning improves both the overall accuracy and precision of the orientation responses produced by symmetric crossmodal learning. We also demonstrate that symmetric crossmodal learning improves multisensory responses as compared to asymmetric crossmodal learning. The neural circuit also exhibits multisensory effects such as sub-additivity, additivity and super-additivity.
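The core idea in this abstract, a normalized weighted combination of auditory and visual directional cues whose weights are updated sample by sample, can be caricatured in a few lines. The sketch below is a toy under loudly stated assumptions: the cue values, learning rate, and the exact Hebbian-style update rule are all invented for illustration and do not reproduce the authors' circuit; only the weighted-combination scheme itself comes from the abstract.

```python
import random

def hebbian_cue_integration(samples, lr=0.05):
    """Toy sketch: fuse auditory and visual directional cues with
    modality-specific weights updated each sample by a Hebbian-style
    rule (weight grows when a cue correlates with the fused estimate).
    The rule and constants are assumptions, not the published model."""
    w_aud, w_vis = 0.5, 0.5  # modality-specific weights
    estimates = []
    for aud, vis in samples:  # directional cues, roughly in [-1, 1]
        # Weighted (convex) combination of the two modality-specific cues.
        est = (w_aud * aud + w_vis * vis) / (w_aud + w_vis)
        # Hebbian-style update: co-activity of a cue and the fused
        # estimate strengthens that cue's weight.
        w_aud += lr * aud * est
        w_vis += lr * vis * est
        # Keep weights positive and normalized so the estimate stays bounded.
        w_aud, w_vis = max(w_aud, 1e-6), max(w_vis, 1e-6)
        total = w_aud + w_vis
        w_aud, w_vis = w_aud / total, w_vis / total
        estimates.append(est)
    return estimates, (w_aud, w_vis)

# Noisy auditory cue vs. a clean visual cue, both indicating a target
# direction of +0.5 (all values invented).
random.seed(0)
samples = [(0.5 + random.uniform(-0.3, 0.3), 0.5) for _ in range(200)]
ests, (wa, wv) = hebbian_cue_integration(samples)
print(round(wa, 3), round(wv, 3))
```

In the robotic agent described in the abstract the fused directional estimate is mapped to wheel velocities; here it is simply collected, since the point of the sketch is only the sample-by-sample weighted combination.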

16.
Asymmetrical hand use by rhesus monkeys (Macaca mulatta) was investigated in a series of tactually and visually guided tasks. The 1st experiment recorded manual preferences of 29 monkeys for solving a haptic discrimination task in a hanging posture. There was a left-hand population bias: 21 monkeys had a left-hand bias, 4 a right-hand bias, and 4 no bias. The 2nd experiment, 4 tasks with 23 to 51 monkeys, investigated the critical components of the 1st experiment by varying the posture (hanging, sitting, or tripedal) and the sensory requirements (tactile or visual). Posture influenced hand bias, with a population-level left-hand bias in hanging and sitting postures, but an almost symmetrical distribution in the tripedal posture. A left-hand bias was found for both sensory modalities, but the bias was stronger in the tactual tasks. Results suggest a possible right-hemisphere specialization in the rhesus for tactile, visual, or spatial processing.

17.
There is currently a great deal of interest regarding the possible existence of a crossmodal attentional blink (AB) between audition and vision. The majority of evidence now suggests that no such crossmodal deficit exists unless a task switch is introduced. We report two experiments designed to investigate the existence of a crossmodal AB between vision and touch. Two masked targets were presented successively at variable interstimulus intervals. Participants had to respond either to both targets (experimental condition) or to just the second target (control condition). In Experiment 1, the order of target modality was blocked, and an AB was demonstrated when visual targets preceded tactile targets, but not when tactile targets preceded visual targets. In Experiment 2, target modality was mixed randomly, and a significant crossmodal AB was demonstrated in both directions between vision and touch. The contrast between our visuotactile results and those of previous audiovisual studies is discussed, as are the implications for current theories of the AB.

18.
The development of posture and locomotion provides a valuable window for understanding the ontogeny of perception-action relations. In this study, 13 infants were examined cross-sectionally while standing quietly either hands-free or while lightly touching a contact surface. Mean sway amplitude results indicate that infants use light touch for sway attenuation (≈28–40%) as has been seen previously with adults (Jeka & Lackner, 1994). Additionally, while using the contact surface, movement patterns of the head and trunk show reduced temporal coordination (≈25–40%), as well as increased temporal variability, as compared to no touch conditions. These findings are discussed with regard to the ontogeny of perception-action relations, with the overall conclusion that infants use somatosensory information in an exploratory manner to aid in the development of an accurate internal model of upright postural control.

19.
In Experiment 1, participants were presented with pairs of stimuli (one visual and the other tactile) from the left and/or right of fixation at varying stimulus onset asynchronies and were required to make unspeeded temporal order judgments (TOJs) regarding which modality was presented first. When the participants adopted an uncrossed-hands posture, just noticeable differences (JNDs) were lower (i.e., multisensory TOJs were more precise) when stimuli were presented from different positions, rather than from the same position. This spatial redundancy benefit was reduced when the participants adopted a crossed-hands posture, suggesting a failure to remap visuotactile space appropriately. In Experiment 2, JNDs were also lower when pairs of auditory and visual stimuli were presented from different positions, rather than from the same position. Taken together, these results demonstrate that people can use redundant spatial cues to facilitate their performance on multisensory TOJ tasks and suggest that previous studies may have systematically overestimated the precision with which people can make such judgments. These results highlight the intimate link between spatial and temporal factors in determining our perception of the multimodal objects and events in the world around us.
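The JND used in this abstract as a precision measure is conventionally derived from the psychometric function of a TOJ task, for example as half the SOA difference between the 25% and 75% response points. A minimal sketch of that derivation using linear interpolation; the proportions below are invented for illustration:

```python
# Hypothetical proportions of "visual first" responses at each SOA (ms);
# negative SOA = tactile led, positive = visual led. Invented data.
soas = [-90, -60, -30, 0, 30, 60, 90]
p_visual_first = [0.05, 0.12, 0.30, 0.50, 0.70, 0.88, 0.95]

def interp_soa(p_target, soas, probs):
    """Linearly interpolate the SOA at which the response proportion
    crosses p_target (assumes probs increase monotonically)."""
    for (s0, p0), (s1, p1) in zip(zip(soas, probs), zip(soas[1:], probs[1:])):
        if p0 <= p_target <= p1:
            return s0 + (p_target - p0) * (s1 - s0) / (p1 - p0)
    raise ValueError("target proportion not spanned by the data")

# JND: half the SOA difference between the 75% and 25% points.
jnd = (interp_soa(0.75, soas, p_visual_first)
       - interp_soa(0.25, soas, p_visual_first)) / 2
print(jnd)  # with these invented proportions: about 38.3 ms
```

A lower JND from this computation corresponds to a steeper psychometric function, i.e. more precise temporal order judgments, which is the sense in which the abstract compares same-position and different-position conditions.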

20.
D-cycloserine (DCS), a partial NMDA receptor agonist, facilitates extinction of learned fear in rats and has been used to treat anxiety disorders in clinical populations. However, research into the effects of DCS on extinction is still in its infancy, with visual cues being the primary fear-eliciting stimuli under investigation. In both human and animal subjects odors have been found to associate strongly with aversive events. Therefore, this study examined the generality of the effects of DCS on extinction by testing odor cues. Sprague-Dawley rats were conditioned and extinguished to an odor using varying parameters, injected with either saline or DCS (15 mg/kg) following extinction, and then tested for a freezing response 24 h later. Experiment 1 demonstrated that after 3 odor-shock pairings, rats did not display short-term extinction and DCS had no effect on long-term extinction. Experiment 2 demonstrated that after 3 odor-noise pairings, rats displayed significant short-term extinction and DCS significantly facilitated long-term extinction. Following 2 odor-shock pairings in Experiment 3, half the rats displayed short-term extinction ("extinguishers") and half did not ("non-extinguishers"). DCS facilitated long-term extinction in the "extinguishers" condition but not in the "non-extinguishers" condition. In Experiment 4, following 2 odor-shock pairings and an extra extinction session, DCS had a significant facilitatory effect on long-term extinction. Thus, extinction of freezing to an odor cue was facilitated by systemic injections of DCS, but only when some amount of within-session extinction occurred prior to injection.


Copyright©北京勤云科技发展有限公司  京ICP备09084417号