Similar Documents
1.
We studied the influence of ship motion on postural activity during stance, varying stance width (the distance between the feet in side-by-side stance) and the difficulty of visual tasks. Participants (experienced crewmembers) were tested on land and then on successive days on a ship at sea in mild sea states. On land, we replicated classical effects of stance width and visual task on the magnitude of postural movement. The magnitude of forces used in postural control was greater at sea than on land. Visual performance at sea was comparable to performance on land. Both stance width and visual task difficulty influenced postural activity at sea. In addition, postural activity changed over days at sea. We conclude that experienced crewmembers modulated standing posture in support of the performance of visual tasks and that such effects occurred even in mild sea states. The overall pattern of effects is compatible with the hypothesis that postural activity is modulated, in part, in support of the performance of suprapostural tasks.
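For context, the "magnitude of postural movement" in studies of this kind is commonly summarized from the center-of-pressure (CoP) signal recorded by a force plate. A minimal sketch of two conventional summary measures (the variable names, sampling rate, and synthetic data are illustrative assumptions, not this study's procedure):

```python
import numpy as np

def sway_magnitude(cop: np.ndarray) -> dict:
    """Summarize postural sway from a 1-D center-of-pressure (CoP) trace.

    cop : CoP positions (cm) sampled at a fixed rate.
    Returns positional variability (RMS about the mean) and total path
    length, two conventional magnitude measures in posturography.
    """
    centered = cop - cop.mean()                  # remove mean stance position
    rms = np.sqrt(np.mean(centered ** 2))        # RMS sway amplitude
    path_length = np.sum(np.abs(np.diff(cop)))   # cumulative CoP excursion
    return {"rms_cm": rms, "path_length_cm": path_length}

# Illustrative use with 30 s of synthetic data sampled at 100 Hz.
rng = np.random.default_rng(0)
cop = np.cumsum(rng.normal(0, 0.01, 3000))       # random-walk-like sway
print(sway_magnitude(cop))
```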

2.
It is not clear what role visual information plays in the development of space perception. It has previously been shown that, in the absence of vision, both the ability to judge orientation in the haptic modality and the ability to bisect intervals in the auditory modality are severely compromised (Gori, Sandini, Martinoli & Burr, 2010, 2014). Here we also report, for the first time, a strong deficit in proprioceptive reproduction and auditory distance evaluation in early-blind children and adults. Interestingly, the deficit is not present in a small group of adults with acquired visual disability. Our results support the idea that, in the absence of vision, auditory and proprioceptive spatial representations may be delayed or drastically weakened by the lack of visual calibration of the auditory and haptic modalities during the critical period of development.

3.
We examined the influence of stance width (the distance between the feet) on postural sway and visually induced motion sickness. Stance width influences the magnitude of body sway, and changes in sway precede the subjective symptoms of motion sickness; manipulating stance width may therefore influence the incidence of motion sickness. Participants (healthy young adults) were exposed to complex, low-frequency oscillation of a moving room while standing with their feet 5 cm, 17 cm, or 30 cm apart. During exposure to visual motion, the widest stance (30 cm) was associated with a reduced incidence of motion sickness. For all stance widths, motion sickness was preceded by significant changes in motion of the head and torso. The results support the postural instability theory of motion sickness and suggest practical implications for its prevention: adopting a wider stance may decrease the risk of motion sickness in operational situations.

4.
Sudden addition or removal of visual information can be particularly critical for balance control. The promptness of adaptation of stance-control mechanisms is quantified by the latency at which body oscillation and postural muscle activity change after a shift in visual condition. In the present study, volunteers stood on a force platform with feet parallel or in tandem. Shifts in visual condition were produced by electronic spectacles. The ground reaction force (center of foot pressure, CoP) and the EMG of leg postural muscles were acquired, and the latencies of CoP and EMG changes were estimated by t-tests on the averaged traces. Time to reach steady state was estimated by means of an exponential model. On allowing or occluding vision, decrements and increments in CoP position and oscillation occurred within about 2 s. These were preceded by changes in muscle activity, regardless of the direction of the visual shift, foot position, or front or rear leg in tandem. These latencies were longer than those of simple reaction-time responses. The time course of recovery to steady state was about 3 s, shorter for oscillation than for position. The capacity to modify balance control at very short latencies, both during quiet standing and under more critical balance conditions, speaks in favor of a necessary coupling between vision, postural reference, and postural muscle activity, and of the swiftness of this sensory reweighting process.
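The exponential model mentioned for time-to-reach steady state can be illustrated with a least-squares fit of an exponential relaxation to an averaged CoP trace. A hedged sketch (the model form, the 3τ "settled" criterion, and all names are assumptions for illustration, not the authors' published procedure):

```python
import numpy as np
from scipy.optimize import curve_fit

def exp_relax(t, y_ss, dy, tau):
    """Exponential approach to a steady-state value y_ss with time constant tau."""
    return y_ss + dy * np.exp(-t / tau)

def time_to_steady_state(t, cop, settle_factor=3.0):
    """Fit the relaxation model to an averaged CoP trace and report ~time to settle.

    A common convention treats the response as settled after a few time
    constants; settle_factor=3 corresponds to ~95% of the total change.
    """
    p0 = (cop[-1], cop[0] - cop[-1], 1.0)           # crude initial guess
    (y_ss, dy, tau), _ = curve_fit(exp_relax, t, cop, p0=p0)
    return settle_factor * tau, (y_ss, dy, tau)

# Illustrative use: a synthetic relaxation sampled at 100 Hz for 6 s.
t = np.linspace(0, 6, 600)
cop = exp_relax(t, 0.0, 1.5, 0.9) + np.random.default_rng(1).normal(0, 0.05, t.size)
t_settle, params = time_to_steady_state(t, cop)
print(f"time to steady state ~ {t_settle:.2f} s")
```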

5.
We investigated whether the "unity assumption," according to which an observer assumes that two different sensory signals refer to the same underlying multisensory event, influences the multisensory integration of audiovisual speech stimuli. Syllables (Experiments 1, 3, and 4) or words (Experiment 2) were presented to participants at a range of stimulus onset asynchronies (SOAs) using the method of constant stimuli. Participants made unspeeded temporal order judgments regarding which stream (auditory or visual) had been presented first. The auditory and visual speech stimuli in Experiments 1-3 were either gender matched (i.e., a female face presented together with a female voice) or gender mismatched (i.e., a female face presented together with a male voice). In Experiment 4, different utterances from the same female speaker were used to generate the matched and mismatched speech video clips. Measured in terms of the just noticeable difference (JND), participants in all four experiments found it easier to judge which sensory modality had been presented first for mismatched stimuli than for matched stimuli. These results provide the first empirical support for the "unity assumption" in the domain of the multisensory temporal integration of audiovisual speech stimuli.
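In a temporal order judgment task of this kind, the JND is conventionally read off a psychometric function fitted to the proportion of "visual first" responses across SOAs. A sketch of that standard analysis (the cumulative-Gaussian form, the one-sigma JND criterion, and the data are common conventions and illustrative values, not details reported in this abstract):

```python
import numpy as np
from scipy.optimize import curve_fit
from scipy.stats import norm

def psychometric(soa, pss, sigma):
    """Cumulative Gaussian: P('visual first') as a function of SOA (ms).

    pss   : point of subjective simultaneity (the 50% point)
    sigma : spread; the JND is often defined as the SOA change taking
            performance from 50% to ~84%, i.e. one sigma.
    """
    return norm.cdf(soa, loc=pss, scale=sigma)

# Illustrative data: SOAs (ms; negative = auditory lead) and response proportions.
soas = np.array([-240, -120, -60, 0, 60, 120, 240], dtype=float)
p_visual_first = np.array([0.05, 0.20, 0.35, 0.55, 0.70, 0.85, 0.97])

(pss, sigma), _ = curve_fit(psychometric, soas, p_visual_first, p0=(0.0, 80.0))
print(f"PSS = {pss:.1f} ms, JND ~ {sigma:.1f} ms")  # smaller JND = finer temporal resolution
```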

6.
In adults, decisions based on multisensory information can be faster and/or more accurate than those relying on a single sense, but this finding varies significantly across development. Here we studied speeded responding to audio-visual targets, a key multisensory function whose development remains unclear. We found that when judging the locations of targets, children aged 4 to 12 years and adults had faster and less variable response times given auditory and visual information together than given either alone. Comparison of response time distributions with model predictions indicated that children at all ages were integrating (pooling) sensory information to make decisions, but that both the overall speed and the efficiency of sensory integration improved with age. The evidence for pooling comes from comparison with the predictions of Miller's seminal 'race model', as well as with a major recent extension of this model and a comparable 'pooling' (coactivation) model. The findings and analyses can reconcile results from previous audio-visual studies, in which infants showed speed gains exceeding race-model predictions in a spatial orienting task (Neil et al., 2006) but children below 7 years did not in speeded reaction-time tasks (e.g. Barutchu et al., 2009). Our results provide new evidence that the ability to integrate visual and auditory signals for spatial localization emerges early and is sustained across development.
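Miller's race-model test referred to here compares the empirical cumulative distribution function (CDF) of audiovisual reaction times against the bound implied by two independent racing channels, F_AV(t) <= F_A(t) + F_V(t); exceeding that bound at any t rules out an independent race and indicates pooling (coactivation). A minimal sketch of the standard test on synthetic data (all numbers are illustrative):

```python
import numpy as np

def ecdf(sample: np.ndarray, t: np.ndarray) -> np.ndarray:
    """Empirical CDF of `sample` evaluated at times `t`."""
    return np.searchsorted(np.sort(sample), t, side="right") / sample.size

def race_model_violations(rt_a, rt_v, rt_av, n_points=200):
    """Return time points where F_AV(t) exceeds min(1, F_A(t) + F_V(t)).

    Violations of Miller's (1982) inequality rule out a parallel race of
    independent unisensory channels and indicate response pooling.
    """
    t = np.linspace(min(map(np.min, (rt_a, rt_v, rt_av))),
                    max(map(np.max, (rt_a, rt_v, rt_av))), n_points)
    bound = np.minimum(1.0, ecdf(rt_a, t) + ecdf(rt_v, t))
    return t[ecdf(rt_av, t) > bound]

# Illustrative synthetic reaction times (seconds).
rng = np.random.default_rng(2)
rt_a = rng.normal(0.42, 0.06, 500)
rt_v = rng.normal(0.40, 0.06, 500)
rt_av = rng.normal(0.33, 0.05, 500)   # faster than either unisensory condition
print(f"violations at {race_model_violations(rt_a, rt_v, rt_av).size} of 200 grid points")
```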

7.
Postural control in otolith disorders
The aim of the present study was to investigate the influence of otolith disorders on human postural control using several methods. The 33 patients in our study had sustained a minor head injury and subsequently suffered from a utricular or sacculo-utricular disorder, as evidenced by vestibular evoked myogenic potential recordings and eccentric-rotation recordings of the otolith-ocular responses. Postural control was assessed with stance/gait tests (standard balance deficit test, SBDT) and by evaluating trunk sway (using angular-velocity sensors). In addition, classical tests of the posterior column of the spinal tract (Romberg/Unterberger) and dynamic posturography (sensory organization test, SOT) were included. SBDT tasks with reduced proprioceptive and visual cues (e.g., standing on foam with eyes closed) proved most sensitive to an otolith disorder. Compared with controls, the patients showed increased trunk sway in the pitch plane (i.e., linear motion, the adequate utricular stimulus) and increased sway velocities (i.e., tilting movements, the adequate saccular stimulus). The SOT was most sensitive to combined (sacculo-utricular) otolith disorders (78%), whereas the vestibulospinal tests were not sufficiently sensitive. In essence, otolith disorders clearly impair human postural control and may so far have been underestimated as a source of posttraumatic postural imbalance.

8.
Integrating different senses to reduce sensory uncertainty and increase perceptual precision can have an important compensatory function for individuals with visual impairment and blindness. However, how visual impairment and blindness affect the development of optimal multisensory integration in the remaining senses is currently unknown. Here we first examined how audio-haptic integration develops and changes across the life span in 92 sighted (blindfolded) individuals between 7 and 70 years of age. We used a child-friendly task in which participants had to discriminate different object sizes by touching them and/or listening to them. We assessed whether audio-haptic performance reduced perceptual uncertainty relative to auditory-only and haptic-only performance, as predicted by the maximum-likelihood estimation (MLE) model. We then compared how this ability develops in 28 children and adults with different levels of visual experience, focusing on low-vision individuals and blind individuals who lost their sight at different ages during development. Our results show that in sighted individuals, adult-like audio-haptic integration develops around 13-15 years of age and remains stable until late adulthood. While early-blind individuals, even at the youngest ages, integrate audio-haptic information in an optimal fashion, late-blind individuals do not. Optimal integration in low-vision individuals follows a developmental trajectory similar to that of sighted individuals. These findings demonstrate that visual experience is not necessary for optimal audio-haptic integration to emerge, but that consistency of sensory information across development is key to the functional outcome of optimal multisensory integration.
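The MLE model tested here is the standard minimum-variance cue-combination rule. As a worked statement of the prediction (textbook form, not an equation quoted from the paper): if the auditory and haptic size estimates are unbiased with variances σ_A² and σ_H², the optimal bimodal estimate weights each cue by its relative reliability, and its variance never exceeds that of the more reliable single cue:

```latex
\hat{S}_{AH} = w_A \hat{S}_A + w_H \hat{S}_H,
\qquad
w_i = \frac{1/\sigma_i^{2}}{1/\sigma_A^{2} + 1/\sigma_H^{2}},
\qquad
\sigma_{AH}^{2} = \frac{\sigma_A^{2}\,\sigma_H^{2}}{\sigma_A^{2} + \sigma_H^{2}}
\le \min\left(\sigma_A^{2},\, \sigma_H^{2}\right)
```

Optimal integration is then diagnosed by comparing the empirically measured bimodal discrimination threshold against the predicted σ_AH.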

9.
We translated a well-established laboratory paradigm for studying sensory integration into a head-mounted display (HMD). In the current study, a group of 23 individuals with unilateral vestibular dysfunction and 16 age-matched controls observed moving spheres presented in an Oculus Rift. We confirmed increased visual weighting with an unstable surface and decreased visual weighting (i.e., reweighting) with increased visual amplitude. We did not observe significant differences in gains and phases between individuals with vestibular dysfunction and age-matched controls. However, with the change in surface or visual amplitude, the vestibular group increased sway at mid and high frequencies significantly more than controls did. Mild visual perturbations within HMDs thus have the potential to become a useful portable assessment of postural control in individuals with vestibular disorders.
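Gain and phase in such paradigms are conventionally computed at the stimulus frequency from the Fourier transforms of scene motion and sway. A sketch of that computation under the assumption of single-frequency sinusoidal stimulation (function names and parameters are illustrative, not taken from this study):

```python
import numpy as np

def gain_and_phase(stimulus, sway, fs, f_stim):
    """Gain and phase of postural sway relative to a sinusoidal visual stimulus.

    stimulus, sway : time series sampled at fs (Hz)
    f_stim         : stimulus frequency (Hz)
    A drop in gain with larger stimulus amplitude is the signature of
    visual down-weighting (reweighting) described above.
    """
    freqs = np.fft.rfftfreq(stimulus.size, d=1.0 / fs)
    k = np.argmin(np.abs(freqs - f_stim))   # FFT bin nearest the stimulus frequency
    s, r = np.fft.rfft(stimulus)[k], np.fft.rfft(sway)[k]
    h = r / s                               # transfer function at f_stim
    return np.abs(h), np.degrees(np.angle(h))

# Illustrative use: 60 s at 100 Hz, 0.2 Hz scene motion, attenuated + lagged sway.
fs, f0, t = 100, 0.2, np.arange(0, 60, 1 / 100)
stim = 1.0 * np.sin(2 * np.pi * f0 * t)
sway = 0.6 * np.sin(2 * np.pi * f0 * t - np.radians(20)) \
       + np.random.default_rng(3).normal(0, 0.05, t.size)
gain, phase = gain_and_phase(stim, sway, fs, f0)
print(f"gain ~ {gain:.2f}, phase ~ {phase:.1f} deg")
```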

10.
Categorization of seen objects is often determined by the shapes of objects. However, shape is not exclusive to the visual modality: The haptic system also is expert at identifying shapes. Hence, an important question for understanding shape processing is whether humans store separate modality-dependent shape representations, or whether information is integrated into one multisensory representation. To answer this question, we created a metric space of computer-generated novel objects varying in shape. These objects were then printed using a 3-D printer, to generate tangible stimuli. In a categorization experiment, participants first explored the objects visually and haptically. We found that both modalities led to highly similar categorization behavior. Next, participants were trained either visually or haptically on shape categories within the metric space. As expected, visual training increased visual performance, and haptic training increased haptic performance. Importantly, however, we found that visual training also improved haptic performance, and vice versa. Two additional experiments showed that the location of the categorical boundary in the metric space also transferred across modalities, as did heightened discriminability of objects adjacent to the boundary. This observed transfer of metric category knowledge across modalities indicates that visual and haptic forms of shape information are integrated into a shared multisensory representation.

11.
Motor learning, in particular motor adaptation, is driven by information from multiple senses. For example, when arm control is faulty, vision, touch, and proprioception can all report on the arm's movements and help guide the adjustments necessary for correcting motor error. In recent years we have learned a great deal about how the brain integrates information from multiple senses for the purpose of perception; less is known, however, about how multisensory data guide motor learning. Most models of, and studies on, motor learning focus almost exclusively on the ensuing changes in motor performance without exploring the implications for sensory plasticity, nor do they consider how discrepancies in sensory information about hand position (e.g., between vision and proprioception) may affect motor learning. Here, we discuss research from our lab and others showing how motor learning paradigms affect proprioceptive estimates of hand position, and how even the mere discrepancy between visual and proprioceptive feedback can affect learning and plasticity. Our results suggest that sensorimotor learning mechanisms do not rely exclusively on motor plasticity and motor memory, and that sensory plasticity, in particular proprioceptive recalibration, plays a unique and important role in motor learning.

12.
Bimanual in-phase and anti-phase patterns were performed in the transverse plane under optimal and degraded proprioceptive conditions, i.e., without and with tendon vibration. Moreover, proprioceptive information was changed midway through each trial to examine on-line reorganization. In addition to the proprioceptive perturbation, the availability of visual information was manipulated to study the degree to which sensory information from different modalities interacts. Movement patterns performed under identical sensory conditions were compared, i.e., the first 15 s (control) and the 15 s following a change in afferent input (transfer). In both the control and the transfer conditions, movements with vibration were less accurate than those without, indicating the role of optimal proprioceptive information in the calibration and recalibration of intrinsic bimanual movement patterns. Furthermore, pattern stability was affected by the nature of the transfer condition, indicating that the degree of fluctuation in a sensory transfer situation depended on the quality of the proprioceptive information experienced in the initial conditions. The influence of visual information was not unimportant, although the nature of the coordination mode must be taken into account: in the control conditions, in-phase movements were less stable when vision was absent, whereas anti-phase movements were more stable without vision. This observation held independent of the available proprioceptive information, revealing differences in visual guidance between the two coordination modes. In the transfer conditions, pattern stability was similar in the vision and no-vision conditions, suggesting a limited influence of visual information on the recalibration process.

13.
Shape recognition can be achieved through vision or touch, raising the issue of how this information is shared across modalities. Here we provide a short review of previous findings on cross-modal object recognition and we provide new empirical data on multisensory recognition of actively explored objects. It was previously shown that, similar to vision, haptic recognition of objects fixed in space is orientation specific and that cross-modal object recognition performance was relatively efficient when these views of the objects were matched across the sensory modalities (Newell, Ernst, Tjan, & Bülthoff, 2001). For actively explored (i.e., spatially unconstrained) objects, we now found a cost in cross-modal relative to within-modal recognition performance. At first, this may seem to be in contrast to findings by Newell et al. (2001). However, a detailed video analysis of the visual and haptic exploration behaviour during learning and recognition revealed that one view of the objects was predominantly explored relative to all others. Thus, active visual and haptic exploration is not balanced across object views. The cost in recognition performance across modalities for actively explored objects could be attributed to the fact that the predominantly learned object view was not appropriately matched between learning and recognition test in the cross-modal conditions. Thus, it seems that participants naturally adopt an exploration strategy during visual and haptic object learning that involves constraining the orientation of the objects. Although this strategy ensures good within-modal performance, it is not optimal for achieving the best recognition performance across modalities.

14.
In an immersive visualization experiment, participants performed a conjunction search task while standing either in an open stance (heels 10 cm apart, feet at a comfortable angle) or a closed stance (feet pressed together). In the world-frame condition, the search display maintained its position in space as the participant swayed, generating optic flow informative about sway. In the head-frame condition, the display maintained a constant distance and orientation with respect to the participant's head, providing no visual information about sway. In both conditions, participants (surprisingly) searched faster in the more difficult closed stance; the interpretation of this result is unclear. Participants also swayed more as search load increased, and made more errors in the high search-load condition. It is suggested that this performance tradeoff results from the sharing of a limited-capacity, modality-nonspecific spatial-attentional resource between postural and suprapostural tasks.

15.
Across three experiments, participants made speeded elevation discrimination responses to vibrotactile targets presented to the thumb (held in a lower position) or the index finger (upper position) of either hand, while simultaneously trying to ignore visual distractors presented independently from either the same or a different elevation. Performance on the vibrotactile elevation discrimination task was slower and less accurate when the visual distractor was incongruent with the elevation of the vibrotactile target (e.g., a lower light during the presentation of an upper vibrotactile target to the index finger) than when the two were congruent, showing that people cannot completely ignore vision when selectively attending to vibrotactile information. We investigated the attentional, temporal, and spatial modulation of these crossmodal congruency effects by manipulating the direction of endogenous tactile spatial attention, the stimulus onset asynchrony between target and distractor, and the spatial separation between the vibrotactile target, the visual distractors, and the participant's two hands, within and across hemifields. Our results provide new insights into the spatiotemporal modulation of crossmodal congruency effects and highlight the utility of this paradigm for investigating the contributions of visual, tactile, and proprioceptive inputs to the multisensory representation of peripersonal space.

16.
Multisensory reweighting (MSR) is an adaptive process that adjusts the relative weighting of visual, vestibular, and somatosensory inputs so that the most reliable information governs postural stability when environmental conditions change. This process is thought to degrade with increasing age and to be particularly deficient in fall-prone compared with healthy older adults. In the present study, the authors investigated the dynamics of sensory reweighting, which are not well understood at any age. Postural sway of young adults, healthy older adults, and fall-prone older adults was measured in response to large within-trial changes in the amplitude of a visual motion stimulus. Absolute levels of gain and the rate of adaptive gain change were examined when the visual stimulus amplitude changed from high to low and from low to high. Compared with young adults, gains in both older groups were higher when the stimulus amplitude was high, and gains in the fall-prone older adults were higher than in both other groups when the stimulus amplitude was low. Both older groups showed slowed sensory reweighting over prolonged periods when the stimulus amplitude was high. The combination of higher visual gains and slower down-weighting in older adults suggests deficits that may contribute to postural instability.

17.
Task-dependent information processing for the purpose of recognition or spatial perception is considered a principle common to all the main sensory modalities. Using a dual-task interference paradigm, we investigated the behavioral effects of independent information processing for shape identification and localization of object features within and across vision and touch. In Experiment 1, we established that color and texture processing (i.e., a “what” task) interfered with both visual and haptic shape-matching tasks and that mirror image and rotation matching (i.e., a “where” task) interfered with a feature-location-matching task in both modalities. In contrast, interference was reduced when a “where” interference task was embedded in a “what” primary task and vice versa. In Experiment 2, we replicated this finding within each modality, using the same interference and primary tasks throughout. In Experiment 3, the interference tasks were always conducted in a modality other than the primary task modality. Here, we found that resources for identification and spatial localization are independent of modality. Our findings further suggest that multisensory resources for shape recognition also involve resources for spatial localization. These results extend recent neuropsychological and neuroimaging findings and have important implications for our understanding of high-level information processing across the human sensory systems.

18.
Previous studies of multisensory integration have often stressed the beneficial effects that may arise when information concerning an event arrives via different sensory modalities at the same time, as exemplified by research on the redundant target effect (RTE). By contrast, studies of the Colavita visual dominance effect (e.g., Colavita, F. B. (1974). Human sensory dominance. Perception & Psychophysics, 16, 409–412) highlight the inhibitory consequences of the competition between signals presented simultaneously in different sensory modalities. Although both the RTE and the Colavita effect are thought to occur at early sensory levels, and the stimulus conditions under which they are typically observed are very similar, the interplay between these two opposing behavioural phenomena (facilitation vs. competition) has yet to be addressed empirically. We hypothesized that the dissociation may reflect two fundamentally different ways in which humans can perceive concurrent auditory and visual stimuli. In Experiment 1, we demonstrated both multisensory facilitation (the RTE) and the Colavita visual dominance effect using exactly the same audiovisual displays, simply by changing the task from speeded detection to speeded modality discrimination. In Experiment 2, the participants exhibited multisensory facilitation when responding to visual targets and multisensory inhibition when responding to auditory targets while the task was kept constant. These results indicate that both multisensory facilitation and inhibition can be demonstrated in response to the same bimodal event.
