Similar Articles
20 similar articles found (search time: 62 ms)
1.
A significant challenge in developing spatial representations for the control of action is one of multisensory integration. Specifically, we require an ability to efficiently integrate sensory information arriving from multiple modalities pertaining to the relationships between the acting limbs and the nearby external world (i.e. peripersonal space), across changes in body posture and limb position. Evidence concerning the early development of such spatial representations points towards the independent emergence of two distinct mechanisms of multisensory integration. The earlier-developing mechanism achieves spatial correspondence by representing body parts in their typical or default locations, and the later-developing mechanism does so by dynamically remapping the representation of the position of the limbs with respect to external space in response to changes in postural information arriving from proprioception and vision.

2.
This study addressed the role of proprioceptive and visual cues to body posture during the deployment of tactile spatial attention. Participants made speeded elevation judgments (up vs. down) to vibrotactile targets presented to the finger or thumb of either hand, while attempting to ignore vibrotactile distractors presented to the opposite hand. The first two experiments established the validity of this paradigm and showed that congruency effects were stronger when the target hand was uncertain (Experiment 1) than when it was certain (Experiment 2). Varying the orientation of the hands revealed that these congruency effects were determined by the position of the target and distractor in external space, and not by the particular skin sites stimulated (Experiment 3). Congruency effects increased as the hands were brought closer together in the dark (Experiment 4), demonstrating the role of proprioceptive input in modulating tactile selective attention. This spatial modulation was also demonstrated when a mirror was used to alter the visually perceived separation between the hands (Experiment 5). These results suggest that tactile, spatially selective attention can operate according to an abstract spatial frame of reference, which is significantly modulated by multisensory contributions from both proprioception and vision.

3.
Spence, C., & Walton, M. (2005). Acta Psychologica, 118(1-2), 47-70.
We investigated the extent to which people can selectively ignore distracting vibrotactile information when performing a visual task. In Experiment 1, participants made speeded elevation discrimination responses (up vs. down) to a series of visual targets presented from one of two eccentricities on either side of central fixation, while simultaneously trying to ignore task-irrelevant vibrotactile distractors presented independently to the finger (up) vs. thumb (down) of either hand. Participants responded significantly more slowly, and somewhat less accurately, when the elevation of the vibrotactile distractor was incongruent with that of the visual target than when they were presented from the same (i.e., congruent) elevation. This crossmodal congruency effect was significantly larger when the visual and tactile stimuli appeared on the same side of space than when they appeared on different sides, although the relative eccentricity of the two stimuli within the hemifield (i.e., same vs. different) had little effect on performance. In Experiment 2, participants who crossed their hands over the midline showed a very different pattern of crossmodal congruency effects to participants who adopted an uncrossed hands posture. Our results suggest that both the relative external location and the initial hemispheric projection of the target and distractor stimuli contribute jointly to determining the magnitude of the crossmodal congruency effect when participants have to respond to vision and ignore touch.
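In studies like this one, the crossmodal congruency effect is conventionally quantified as the incongruent-minus-congruent difference in mean reaction time (or error rate). A minimal sketch with made-up reaction times (the numbers below are illustrative, not data from the study):

```python
# The crossmodal congruency effect (CCE) is the difference in mean RT
# between incongruent trials (target and distractor at different
# elevations) and congruent trials (same elevation); larger positive
# values indicate stronger distractor interference.
from statistics import mean

congruent_rt = [412, 398, 430, 405, 421]    # hypothetical RTs (ms), congruent trials
incongruent_rt = [478, 465, 490, 472, 484]  # hypothetical RTs (ms), incongruent trials

cce = mean(incongruent_rt) - mean(congruent_rt)
print(f"Crossmodal congruency effect: {cce:.1f} ms")
```

The same subtraction applied to error rates yields the accuracy version of the effect reported in these experiments.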

4.
Behavioral studies of multisensory integration in motion perception have focused on the particular case of visual and auditory signals. Here, we addressed a new case: audition and touch. In Experiment 1, we tested the effects of an apparent motion stream presented in an irrelevant modality (audition or touch) on the perception of apparent motion streams in the other modality (touch or audition, respectively). We found significant congruency effects (lower performance when the direction of motion in the irrelevant modality was incongruent with the direction of the target) for the two possible modality combinations. This congruency effect was asymmetrical, with tactile motion distractors having a stronger influence on auditory motion perception than vice versa. In Experiment 2, we used auditory motion targets and tactile motion distractors while participants adopted one of two possible postures: arms uncrossed or arms crossed. The effects of tactile motion on auditory motion judgments were replicated in the arms-uncrossed posture, but they dissipated in the arms-crossed posture. The implications of these results are discussed in light of current findings regarding the representation of tactile and auditory space.

5.
Experiment 1 investigated whether tool use can expand the peripersonal space into the very far extrapersonal space. Healthy participants performed line bisection in peripersonal and extrapersonal space using wooden sticks up to a maximum of 240 cm. Participants misbisected to the left of the true midpoint, both for lines presented in peripersonal and for those presented in extrapersonal space, confirming a peripersonal space expansion up to a distance of 240 cm. Experiment 2 investigated whether arm position could influence the perception of peripersonal and extrapersonal space during tool use. Participants performed line bisection in the peripersonal and in the extrapersonal space (up to a maximum of 120 cm) using wooden sticks in two different conditions: either with the arm bent or with the arm stretched. Results showed stronger pseudoneglect in the stretched arm condition.

6.
In mirror reflections, visual stimuli in near peripersonal space (e.g., an object in the hand) can project the retinal image of far, extrapersonal stimuli "beyond" the mirror. We studied the interaction of such visual reflections with tactile stimuli in a cross-modal congruency task. We found that visual distractors produce stronger interference on tactile judgments when placed close to the stimulated hand, but observed indirectly as distant mirror reflections, than when directly observed in equivalently distant far space, even when in contact with a dummy hand or someone else's hand in the far location. The stronger visual-tactile interference for the mirror condition implies that near stimuli seen as distant reflections in a mirror view of one's own hands can activate neural networks coding peripersonal space, because these visual stimuli are coded as having a true source near to the body.

7.
Across three experiments, participants made speeded elevation discrimination responses to vibrotactile targets presented to the thumb (held in a lower position) or the index finger (upper position) of either hand, while simultaneously trying to ignore visual distractors presented independently from either the same or a different elevation. Performance on the vibrotactile elevation discrimination task was slower and less accurate when the visual distractor was incongruent with the elevation of the vibrotactile target (e.g., a lower light during the presentation of an upper vibrotactile target to the index finger) than when they were congruent, showing that people cannot completely ignore vision when selectively attending to vibrotactile information. We investigated the attentional, temporal, and spatial modulation of these cross-modal congruency effects by manipulating the direction of endogenous tactile spatial attention, the stimulus onset asynchrony between target and distractor, and the spatial separation between the vibrotactile target, any visual distractors, and the participant's two hands within and across hemifields. Our results provide new insights into the spatiotemporal modulation of crossmodal congruency effects and highlight the utility of this paradigm for investigating the contributions of visual, tactile, and proprioceptive inputs to the multisensory representation of peripersonal space.

8.
Berti, A. (2021). Cognitive Processing, 22(1), 121-126.

Years ago, it was demonstrated (e.g., Rizzolatti et al. in Handbook of neuropsychology, Elsevier Science, Amsterdam, 2000) that the brain does not encode the space around us in a homogeneous way, but through neural circuits that map the space relative to the distance that objects of interest have from the body. In monkeys, relatively discrete neural systems, characterized by neurons with specific neurophysiological responses, seem to be dedicated either to representing the space that can be reached by the hand (near/peripersonal space) or the distant space (far/extrapersonal space). It was also shown that the encoding of spaces has dynamic aspects, because spaces can be remapped by the use of tools that trigger different actions (e.g., Iriki et al. 1998). In this latter case, the effect of the tool depends on the modulation of personal space, that is, the space of our body. In this paper, I review and discuss selected research which demonstrated that in humans, too: (1) spaces are encoded in a dynamic way; (2) encoding can be modulated by the use of tools that the system comes to consider parts of one's own body; and (3) body representations are not fixed, but are fragile and subject to change, to the point that we can incorporate not only the tools necessary for action, but even limbs belonging to other people. What the embodiment of tools and of alien limbs tells us about body representations is then briefly discussed.

9.
Number and space representations are known to be connected to one another in numerical and arithmetic abilities. Numbers are represented using the metaphor of a mental number line, oriented along horizontal and vertical space, and this number line also seems to be linked to mental arithmetic, which is based partly on arithmetic fact retrieval. The present study tested the effect of spatial contextual congruency between stimulus presentation and response key arrangements in arithmetic fact retrieval, using number-matching and addition verification tasks. For both tasks in Experiment 1, a contextual congruency effect was present horizontally (i.e., horizontal presentation of stimuli and horizontal response key alignments) but not vertically (i.e., vertical presentation of stimuli but horizontal response key alignments). In Experiment 2, both tasks showed a contextual congruency effect for both spatial conditions. Experiment 1 showed that the interference and distance effects were found in the horizontal condition, probably because of the spatial congruency between stimulus presentation and response key arrangements. This spatial congruency could be related to the activation of the horizontal number line. Experiment 2 showed similar interference and distance effects for both spatial conditions, suggesting that the congruency between stimulus presentation and response alignment could facilitate the retrieval of arithmetic facts. This facilitation could be related to the activation of both horizontal and vertical number lines. The results are discussed in light of the possible role of a mental number line in arithmetic fact retrieval.

10.
Two experiments investigated infants' ability to localize tactile sensations in peripersonal space. Infants aged 10 months (Experiment 1) and 6.5 months (Experiment 2) were presented with vibrotactile stimuli unpredictably to either hand while they adopted either a crossed- or uncrossed-hands posture. At 6.5 months, infants' responses were predominantly manual, whereas at 10 months, visual orienting behavior was more evident. Analyses of the direction of the responses indicated that (a) both age groups were able to locate tactile stimuli, (b) the ability to remap visual and manual responses to tactile stimuli across postural changes develops between 6.5 and 10 months of age, and (c) the 6.5-month-olds were biased to respond manually in the direction appropriate to the more familiar uncrossed-hands posture across both postures. The authors argue that there is an early visual influence on tactile spatial perception and suggest that the ability to remap visual and manual directional responses across changes in posture develops between 6.5 and 10 months, most likely because of the experience of crossing the midline gained during this period.

11.
Earlier studies have demonstrated that spatial cueing differentially reduces stimulus-stimulus congruency (e.g., spatial Stroop) interference but not stimulus-response congruency (e.g., Simon; e.g., Lupiáñez & Funes, 2005). This spatial cueing modulation over spatial Stroop seems to be entirely attributable to object-based attention (e.g., Luo, Lupiáñez, Funes, & Fu, 2010). In the present study, two experiments were conducted to further explore whether the cueing modulation of spatial Stroop is object based and/or space based and to analyse the "locus" of this modulation. In Experiment 1, we found that the cueing modulation over spatial Stroop is entirely object based, independent of stimulus-response congruency. In Experiment 2, we observed that the modulation of object-based attention over the spatial Stroop only occurred at a short cue-target interval (i.e., stimulus onset asynchrony; SOA), whereas the stimulus-response congruency effect was not modulated either by object-based or by location-based attentional cueing. The overall pattern of results suggests that the spatial cueing modulation over spatial Stroop arises from object-based attention and occurs at the perceptual stage of processing.

12.
The aim of this study was to explore the role of motor resources in peripersonal space encoding: are they intrinsic to spatial processes or due to action potentiality of objects? To answer this question, we disentangled the effects of motor resources on object manipulability and spatial processing in peripersonal and extrapersonal spaces. Participants had to localize manipulable and non-manipulable 3-D stimuli presented within peripersonal or extrapersonal spaces of an immersive virtual reality scenario. To assess the contribution of motor resources to the spatial task a motor interference paradigm was used. In Experiment 1, localization judgments were provided with the left hand while the right dominant arm could be free or blocked. Results showed that participants were faster and more accurate in localizing both manipulable and non-manipulable stimuli in peripersonal space with their arms free. On the other hand, in extrapersonal space there was no significant effect of motor interference. Experiment 2 replicated these results by using alternatively both hands to give the response and controlling the possible effect of the orientation of object handles. Overall, the pattern of results suggests that the encoding of peripersonal space involves motor processes per se, and not because of the presence of manipulable stimuli. It is argued that this motor grounding reflects the adaptive need of anticipating what may happen near the body and preparing to react in time.

13.
Building on evidence for embodied representations, we investigated whether Spanish spatial terms map onto the NEAR/FAR perceptual division of space. Using a long horizontal display, we measured congruency effects during the processing of spatial terms presented in NEAR or FAR space. Across three experiments, we manipulated the task demands in order to investigate the role of endogenous attention in linguistic and perceptual space mapping. We predicted congruency effects only when spatial properties were relevant for the task (reaching estimation task, Experiment 1) but not when attention was allocated to other features (lexical decision, Experiment 2; and color, Experiment 3). Results showed faster responses for words presented in NEAR space in all experiments. Consistent with our hypothesis, congruency effects were observed only when a reaching estimate was requested. Our results add important evidence for the role of top-down processing in congruency effects from embodied representations of spatial terms.

14.
Two experiments are reported in which we examined the hypothesis that the advantage of the right hand in target aiming arises from differences in impulse variability. Subjects made aiming movements with the left and right hands. The force requirements of the movements were manipulated through the addition of mass to the limb (Experiments 1 and 2) and through control of movement amplitude (Experiment 1). Although the addition of mass diminished performance (i.e., it increased movement times in Experiment 1 and increased error in Experiment 2), the two hands were not differently affected by the manipulation of required force. In spite of the fact that the right hand exhibited enhanced performance (i.e., lower movement times in Experiment 1 and greater accuracy in Experiment 2), these advantages were not reflected in kinematic measures of impulse variability.

15.
When two limbs are required to move different distances simultaneously, assimilation effects are shown: The shorter distance limb tends to overshoot the target, whereas the longer distance limb undershoots. The effect of practice on assimilation effects was studied in two experiments, using a simultaneous four-limb aiming task. When subjects were required to move their left limbs a shorter distance than the right (5 cm vs. 9 cm), the right limbs moved a lesser distance and had greater variable and overall errors relative to a group required to move all limbs the same distance (9 cm). Practice reduced assimilation effects in the lower limbs, but spatial assimilations were present throughout 125 acquisition trials with KR and 50 no-KR transfer trials, spanning 24 hours. When the upper limbs were required to move a greater distance than the lower limbs (15 cm vs. 9 cm), the lower limbs showed longer distances and increased overall errors early in practice compared to the lower limbs of a group required to move all limbs 9 cm. With practice, the between-group differences decreased, with no assimilation effects shown on the transfer trials. The results suggest that neural crosstalk is greater between left and right sides than between upper and lower limbs. Results are discussed in light of the functional cerebral space model of simultaneous actions.
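Aiming errors of the kind reported here are standardly summarized by constant error (the signed mean deviation from the target; positive values indicate the overshoot that assimilation predicts for the shorter-distance limb) and variable error (the dispersion of endpoints around their own mean). A minimal sketch with hypothetical endpoint data, not taken from the study:

```python
# Hypothetical movement endpoints (cm) for a limb aiming at a 5 cm target
# while the other limb moves 9 cm. Constant error (CE) = mean endpoint
# minus target; variable error (VE) = standard deviation of the endpoints.
from statistics import mean, pstdev

target = 5.0
endpoints = [5.8, 6.1, 5.5, 6.3, 5.9]  # shorter-distance limb overshooting

ce = mean(endpoints) - target  # constant error: signed bias (overshoot if > 0)
ve = pstdev(endpoints)         # variable error: trial-to-trial consistency
print(f"CE = {ce:+.2f} cm, VE = {ve:.2f} cm")
```

"Overall error" measures used in such studies typically combine both components, e.g. as the root mean square of CE and VE.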

17.
Conceptual metaphor is ubiquitous in language and thought, as we usually reason and talk about abstract concepts in terms of more concrete ones via metaphorical mappings that are hypothesized to arise from our embodied experience. One pervasive example is the conceptual projection of valence onto space, which flexibly recruits the vertical and lateral spatial frames to gain structure (e.g., good is up-bad is down and good is right-bad is left). In the current study, we used a valence judgment task to explore the role that exogenous bodily cues (namely response hand positions) play in the allocation of spatial attention and the modulation of conceptual congruency effects. Experiment 1 showed that congruency effects along the vertical axis are weakened when task conditions (i.e., the use of vertical visual cues, on the one hand, and the horizontal alignment of responses, on the other) draw attention to both the vertical and lateral axes, making them simultaneously salient. Experiment 2 evidenced that the vertical alignment of participants' hands while responding to the task, regardless of the location of their dominant hand, facilitates the judgment of positive- and negative-valence words, as long as participants respond in a metaphor-congruent manner (i.e., up responses are good and down responses are bad). Overall, these results support the claim that source domain representations are dynamically activated in response to the context and that bodily states are an integral part of that context.

18.
The participants in this study discriminated the position of tactile target stimuli presented at the tip or the base of the forefinger of one of the participants' hands, while ignoring visual distractor stimuli. The visual distractor stimuli were presented from two circles on a display aligned with the tactile targets in Experiment 1 or orthogonal to them in Experiment 2. Tactile discrimination performance was slower and less accurate when the visual distractor stimuli were presented from incongruent locations relative to the tactile target stimuli (e.g., tactile target at the base of the finger with top visual distractor) highlighting a cross-modal congruency effect. We examined whether the presence and orientation of a simple line drawing of a hand, which was superimposed on the visual distractor stimuli, would modulate the cross-modal congruency effects. When the tactile targets and the visual distractors were spatially aligned, the modulatory effects of the hand picture were small (Experiment 1). However, when they were spatially misaligned, the effects were much larger, and the direction of the cross-modal congruency effects changed in accordance with the orientation of the picture of the hand, as if the hand picture corresponded to the participants' own stimulated hand (Experiment 2). The results suggest that the two-dimensional picture of a hand can modulate processes maintaining our internal body representation. We also observed that the cross-modal congruency effects were influenced by the postures of the stimulated and the responding hands. These results reveal the complex nature of spatial interactions among vision, touch, and proprioception.

19.
Emotional stimuli receive prioritized attentional and motoric processing in the brain. Recent data have indicated that emotional stimuli enhance activity in the cervical spinal cord as well. In the present study, we used fMRI to investigate the specificity of this emotion-dependent spinal cord activity. We examined whether the limb depicted in a passively viewed image (upper vs. lower) differentially influenced activity in the cervical segments that innervate the upper limbs, and whether this effect was enhanced by emotion. Participants completed four fMRI runs: neutral–upper limb, neutral–lower limb, negative–upper limb, and negative–lower limb. The results indicated main effects of limb and emotion, with upper limbs and negative stimuli eliciting greater activity than lower limbs and neutral stimuli, respectively. For upper-limb runs, negative stimuli evoked more activity than did neutral stimuli. Additionally, negative stimuli depicting upper limbs produced stronger responses than did negative stimuli depicting lower limbs. These results suggest that emotional stimuli augment limb-specific responses in the spinal cord.

20.
It is known that young adults (YA) circumvent pedestrians differently than inanimate obstacles and that limb movements of the pedestrian influence minimum clearance for predictable pedestrian paths. Although older adults (OA) use more cautious strategies for general pedestrian avoidance compared to YA, how pedestrian movements influence circumvention by OAs is unknown. The aim of this study was to understand how limb movements of a pedestrian with an initially unpredictable trajectory affect circumvention control in younger vs older healthy adults. Fourteen YA and 14 OA (> 70 years) were immersed in a virtual shopping mall and instructed to circumvent a virtual pedestrian (VP) approaching with either normal locomotor movements, upper limbs fixed, lower limbs fixed, or both upper and lower limbs fixed. Onset distance for trajectory deviation, minimum clearance, walking speed, body segment yaw angles and gaze behaviour were analysed. When the VP lacked local limb movements, both age groups initiated their trajectory deviations farther away, but significantly more so for OA. Minimal clearance was unchanged across conditions and similar for both age groups. OA walked slower, produced smaller head and trunk yaw, and visually focused on the VP for a greater percentage of time. Thus, lack of limb movements of another pedestrian resulted in more cautious circumvention control and OA needed more time to process visual information with greater visual attention focused on the VP. Age-related changes could translate to a greater risk of falls in OA populations with reduced balance and mobility that could limit community ambulation.
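The minimum clearance measure used in circumvention studies is simply the smallest distance reached between the two walkers over the trial. A minimal sketch over hypothetical, synchronized 2-D position samples (the coordinates below are invented for illustration):

```python
# Hypothetical 2-D positions (m) of a walker and a virtual pedestrian,
# sampled at the same time steps. Minimum clearance is the smallest
# center-to-center distance reached at any sample.
import math

walker = [(0.0, 0.0), (0.5, 0.1), (1.0, 0.3), (1.5, 0.6)]
pedestrian = [(3.0, 0.0), (2.4, 0.1), (1.8, 0.2), (1.2, 0.3)]

clearances = [math.dist(p, q) for p, q in zip(walker, pedestrian)]
min_clearance = min(clearances)
print(f"Minimum clearance: {min_clearance:.2f} m")
```

In practice the distance would be computed between body outlines or segment markers rather than single center points, but the minimum-over-time logic is the same.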
