Similar references
20 similar references retrieved
1.
Three experiments were conducted to investigate the impact of stimulus-driven control on the time-course of stimulus-response (SR) compatibility. Participants responded to the presence or absence of a singleton arrow that was presented among multiple nontargets. When the singleton arrow was present, observers pressed a button with their right index finger; when it was absent, they pressed with their left index finger. SR compatibility depended on the relation between the identity of the target and the 'present' response: even though the identity of the target singleton arrow (whether it was pointing to the right or left) was irrelevant to the task, its direction could be corresponding (right arrow) or noncorresponding (left arrow) with a target-present response (the right hand). To examine the time-course of performance, target-distractor similarity was varied to increase or decrease visual search efficiency and, accordingly, response latency. There were three main findings. First, the results of Experiment 1 showed that observers were no faster to respond 'present' when the singleton arrow pointed to the right (corresponding to the right hand) than when it pointed left (noncorresponding to the right hand) in a simple present-absent detection task. Second, an SR-compatibility effect, which developed over time, was found only when observers were encouraged to process the identity of the arrow singleton. Third, the time-course of SR compatibility was not influenced by visual search efficiency. The results of the present work suggest that visual selection and response selection occur in different stages.

2.
Twenty-two normal right-handed subjects indicated with their index finger the midpoint of a horizontal rod that they could not see. Subjects performed this task while directing their gaze either centrally or toward four different locations (5 degrees or 30 degrees to the left or to the right of the midline). Results showed an overall leftward bias in rod bisection, which increased when subjects used their right hand and fixated a right-sided visual target. Thus, orienting of gaze can affect a nonvisual, tactilo-kinesthetic spatial task. The possible mechanisms of this interaction are discussed with respect to activation-orienting theories, egocentric hypotheses, and directional trends.

3.
The task-irrelevant spatial location of a cue stimulus affects the processing of a subsequent target. This "Posner effect" has been explained by an exogenous attention shift to the spatial location of the cue, improving perceptual processing of the target. We studied whether the left/right location of task-irrelevant and uninformative tones produces cueing effects on the processing of visual targets. Tones were presented randomly from the left or right. In the first condition, the subsequent visual target, requiring a response with either the left or right hand, was presented peripherally to the left or right. In the second condition, the target was a centrally presented left/right-pointing arrow indicating the response hand. In the third condition, the tone and the central arrow were presented simultaneously. Data were recorded on compatible (the tone location and the response hand were the same) and incompatible trials. Reaction times were longer on incompatible than on compatible trials. The results of the second and third conditions are difficult to explain with the attention-shift model, which emphasizes improved perceptual processing in the cued location, as the central target did not require any location-based processing. Consequently, they suggest as an alternative explanation response priming of the hand corresponding to the spatial location of the tone. Simultaneous lateralized readiness potential (LRP) recordings were consistent with the behavioral data, with the tone cues eliciting fast preparation for the incorrect response on incompatible trials and preparation for the correct response on compatible trials.

4.
Since available evidence indicates that the two cerebral hemispheres are differentially sensitive to different types of stimulus information, and that they also utilize different strategies in processing information, is it possible that the two hemispheres are also differentially sensitive to adaptation? Three groups of four subjects each were adapted to black and white gratings using three adapting durations: 500, 1,000, and 5,000 msec. Immediately following adaptation, a test grating was presented in either the left or right visual field. The task of the subject was to determine whether the lines of the adapting and test gratings had the same orientation or not. Analysis showed that in the 5,000-msec and 1,000-msec conditions, more errors occurred with left visual field presentations, responses to left visual field presentations took longer, and a bias-free measure showed that subjects were more sensitive to right visual field presentations. For the 500-msec group, there were no apparent differences between left and right visual field presentations. The results indicate differential effects of adaptation on the two hemispheres, suggesting sensitivity differences between the two halves of the brain.
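The abstract does not name the bias-free measure; a common choice for same/different judgments is d′ from signal detection theory. The following minimal Python sketch, with arbitrary illustrative counts rather than the study's data, shows how such an index could be computed separately for left- and right-visual-field presentations.

```python
# Minimal sketch (not from the study): a bias-free sensitivity index, d',
# computed from hit and false-alarm counts with a small log-linear correction
# so that rates of exactly 0 or 1 do not break the z-transform.
from statistics import NormalDist

def d_prime(hits: int, misses: int, false_alarms: int, correct_rejections: int) -> float:
    """d' = z(hit rate) - z(false-alarm rate)."""
    z = NormalDist().inv_cdf
    hit_rate = (hits + 0.5) / (hits + misses + 1)
    fa_rate = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1)
    return z(hit_rate) - z(fa_rate)

# Arbitrary illustrative counts (not the study's data): one call per visual field.
print("LVF d':", round(d_prime(hits=70, misses=30, false_alarms=25, correct_rejections=75), 2))
print("RVF d':", round(d_prime(hits=85, misses=15, false_alarms=20, correct_rejections=80), 2))
```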

5.
Infants' manual laterality and eye-hand coordination emerge during the second half of the first year of life with the development of reaching. Nevertheless, little is known about the potential asymmetric characteristics of this coordination. The aim of this study was to describe visuo-spatial exploration in 6-month-old infants during reaching, according to the hand used. More specifically, we examined whether the use of the left or the right hand was linked to a specific type of visual exploration. Gaze direction during goal-directed reaching towards an object placed on the table was measured with a remote ASL 504 eye tracker (Bedford, MA). Twelve babies aged 6 months were observed during six reaching sessions, alternating three sessions with an object on the left side of the subject and three with an object on the right side. Gaze direction and several hand variables (hand activity, hand opening and hand position relative to the body) were coded with The Observer software. Results showed that babies visually explore their reaching space differently according to the hand used: they look more at the object when they use their right hand and more around the object when they use their left hand; they also look more often at their left hand than at their right one. These results suggest that an asymmetric visuo-manual coordination exists as early as 6 months: vision seems to support (1) the left hand during reaching, in evaluating distances from the object to the baby by means of visual feedback, and (2) the right hand, in identifying what sort of object it is. Results are discussed in light of manual specialization and specific hemispheric skills at this age.

6.
We investigated the specific contribution of efferent information in a self-recognition task. Subjects experienced a passive extension of the right index finger, either as an effect of moving their left hand via a lever ('self-generated action'), or imposed externally by the experimenter ('externally-generated action'). The visual feedback was manipulated so that subjects saw either their own right hand ('view own hand' condition) or someone else's right hand ('view other's hand' condition) during the passive extension of the index finger. Both hands were covered with identical gloves, so that discrimination on the basis of morphological differences was not possible. Participants judged whether the right hand they saw was theirs or not. Self-recognition was significantly more accurate when subjects were themselves the authors of the action, even though visual and proprioceptive information always specified the same posture, and despite the fact that subjects judged the effect and not the action per se. When the passive displacement of the participant's right index finger was externally generated, and only afferent information was available, self-recognition performance dropped to near-chance levels. Differences in performance across conditions reflect the distinctive contribution of efferent information to self-recognition, and argue against a dominant role of proprioception in self-recognition.

7.
In four experiments we examined the adaptation and aftereffect that resulted from a treatment yielding tactile/kinesthetic length discordance between the arms. Perceived discordance diminished with trials and tended to zero. Subsequent visual/tactile cross-modal judgments of distance showed the aftereffect to be a change in the perceived location of an unseen probed spot on each hand with respect to the location of a truly coincident visual marker. This occurred toward the body for the probed spot on one arm and away from the body on the other. There were three other main findings: (a) Arm movement was not a necessary condition for adaptation or aftereffect; (b) with intrinsic length information about the right arm present, but touch information from the right index finger absent during treatment, adaptation and aftereffect were abolished; (c) aftereffects of tactile location that were manifest at the hand and wrist tended to zero when a point close to the elbow was tested with a cross-modal procedure. The experiments provide evidence that the mapping of the tactile sheet onto an internal length domain had been modified by the treatment. The sensory consequences of the treatment led many subjects to report spontaneously that their arms felt to be of different lengths.

8.
The literature concerning adaptation to prisms indicates that several adaptive mechanisms may be important. The particular mechanism or mechanisms involved depends (at least in part) upon the type of adaptive exposure. In the present study, three adaptive mechanisms (cognitive, oculomotor, and motor-kinesthetic) were investigated. Ss were asked to point in the dark at an illuminated target. The target was seen displaced from its veridical position due to a wedge prism placed before S's right eye. The left eye was occluded. Ss then viewed their visual target pointing errors through the displacing prism without seeing any part of their bodies. One group of Ss was instructed to ignore these prism-induced errors and to continue pointing at the target's visual position. A second group of Ss was instructed to compensate fully for their errors and to attempt to eliminate them on all future trials. For the latter group errors were completely eliminated, while for Ss instructed to ignore their errors, relatively small improvement in visual target settings occurred. This improvement was called cognitive adaptation, since it depended on the S's conscious control. In addition, for both conditions, evidence was found that allowing Ss to view their prism-induced pointing errors resulted in some form of motor-kinesthetic adaptation. This adaptation was hypothesized to represent a change in the judged position of the pointing hand relative to its felt position. It was concluded that this motor-kinesthetic adaptation was dependent, in part, upon cognitive information concerning the effects of the prism and that it serves to reduce conflict between cognitive and visual cues, i.e., between what S believes and what he sees.

9.
The furthest distance that is judged to be reachable can change after participants have used a tool or if they are led to misjudge the position of their hand. Here we investigated how judged reachability changed when visual feedback about the hand was shifted. We hoped to distinguish between various ways in which visuomotor adaptation could influence judged reachability. Participants had to judge whether they could reach a virtual cube without actually doing so. They indicated whether they could reach this virtual cube by moving their hand. During these hand movements, visual feedback about the position of the hand was shifted in depth, either away from or toward the participant. Participants always adapted to the shifted feedback. In a session in which the hand movements in the presence of visual feedback were mainly in depth, perceived reachability shifted in accordance with the feedback (more distant cubes were judged to be reachable when feedback was shifted further away). In a second session in which the hand movements in the presence of visual feedback were mainly sideways, for some participants perceived reachability shifted in the opposite direction than we expected. The shift in perceived reachability was not correlated with the adaptation to the shift in visual feedback. We conclude that reachability judgments are not directly related to visuomotor adaptation.

10.
B. Heath, G. Ettlinger, J. V. Brown. Perception, 1988, 17(4): 535-547
In order to evaluate the importance of the axis of stimulus presentation, inter- and intramanual recognition of mirror pairs was studied with the stimulus materials aligned along the front/back axis (whereas in previous work the mirror pairs were aligned along the left/right axis). Children were allowed to feel shapes with the whole hand, with only four fingers (excluding the thumb), or with only the index finger. After learning with one hand, recognition was tested in experiment 1 with the other hand; after learning with one orientation of the hand (palm down or up), recognition was tested in experiment 2 with the other orientation (palm up or down) of the same hand; after learning with one coronal alignment of the hand (to the left or right), recognition was tested in experiment 3 with the other alignment (to the right or left), but without rotation, of the same hand. Significantly fewer intermanual recognition errors were made on mirror pairs with the materials oriented along the front/back axis than in previous work when oriented along the left/right axis. This supports the suggestion that such errors arise when the stimuli are oriented along the left/right axis during formation of the memory trace. The same trend was unexpectedly obtained for intramanual recognition errors (after rotation of the hand). These errors (after hand rotation) are largely due to coding with respect to the hand; they are reduced when the hand is not aligned with the body axis, since then coding can also occur in relation to the environment.

11.
The visual system has the remarkable ability to generalize across different viewpoints and exemplars to recognize abstract categories of objects, and to discriminate between different viewpoints and exemplars to recognize specific instances of particular objects. Behavioral experiments indicate the critical role of the right hemisphere in specific-viewpoint and -exemplar visual form processing and of the left hemisphere in abstract-viewpoint and -exemplar visual form processing. Neuroimaging studies indicate the role of fusiform cortex in these processes; however, results conflict in their support of the behavioral findings. We investigated this inconsistency in the present study by examining adaptation across viewpoint and exemplar changes in the functionally defined fusiform face area (FFA) and in fusiform regions exhibiting adaptation. Subjects were adapted to particular views of common objects and then tested with objects appearing in four critical conditions: same-exemplar, same-viewpoint adapted; same-exemplar, different-viewpoint adapted; different-exemplar adapted; and not adapted. In line with previous results, the FFA demonstrated a release from neural adaptation for repeated different viewpoints and exemplars of an object. In contrast to previous work, a (non-FFA) right medial fusiform area also demonstrated a release from neural adaptation for repeated different viewpoints and exemplars of an object. Finally, a left lateral fusiform area demonstrated neural adaptation for repeated different viewpoints, but not exemplars, of an object. Test-phase task demands did not affect adaptation in these regions. Together, the results suggest that dissociable neural subsystems in fusiform cortex support the specific identification of a particular object and the abstract recognition of that object observed from a different viewpoint. In addition, the results suggest that areas within fusiform cortex do not support abstract recognition of different exemplars of objects within a category.

12.
Experiments designed to check the absence of effects for hands and handedness in simple and two-choice reaction time found unexpected individual differences related to stimulus laterality. The majority of subjects responded faster to the stimulus on the left and a substantial minority responded faster to the stimulus on the right in any choice pair. The right index finger was slower than the left index or the middle fingers. Choices tended to be faster between fingers on different hands than on the same hand and same-hand choices were faster with the left hand than the right hand. There were no effects attributable to hand preference or sex.

13.
The right hand advantage has been thought to arise from the greater efficiency of the right hand/left hemisphere system in processing visual feedback information. This hypothesis was examined using kinematic analyses of aiming performance, focusing particularly on time after peak velocity, which has been shown to be sensitive to visual feedback processing demands. Eight right-handed subjects pointed at two targets with their left and right hands, with or without vision available, and either as accurately or as fast as possible. Pointing errors and movement time were found to be smaller with the right hand. Analyses of the temporal components of movement time revealed that the hands differed only in time after peak velocity (in deceleration), with the right hand spending significantly less time. This advantage for the right hand, however, was apparent whether or not vision was available, and only when accuracy was emphasized in performance. These findings suggest that the right hand system may be more efficient at processing feedback information, whether this be visual or nonvisual (e.g., proprioceptive).
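As a rough illustration of the kinematic decomposition described above (not the authors' analysis code), the sketch below differentiates a position trace, locates peak velocity, and splits movement time into time to and time after the peak; the 200 Hz sampling rate and the synthetic minimum-jerk reach are assumptions.

```python
# Sketch under stated assumptions: decompose movement time around peak velocity.
import numpy as np

def decompose_movement_time(position: np.ndarray, fs: float):
    """Return (time to peak velocity, time after peak velocity) in seconds."""
    velocity = np.gradient(position) * fs        # numerical differentiation
    peak_idx = int(np.argmax(np.abs(velocity)))  # sample at peak speed
    n_samples = len(position)
    return peak_idx / fs, (n_samples - 1 - peak_idx) / fs

fs = 200.0                                       # assumed sampling rate (Hz)
t = np.linspace(0.0, 0.5, int(0.5 * fs))         # 500-ms reach
tau = t / t[-1]
position = 0.20 * (10 * tau**3 - 15 * tau**4 + 6 * tau**5)  # 20-cm minimum-jerk profile
print(decompose_movement_time(position, fs))
```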

14.
The present paper reviews data from two previous studies in our laboratory, as well as some additional new data, on the neuronal representation of movement and pain imagery in a subject with an amputated right arm. The subject imagined painful and non-painful finger movements in the amputated stump while in an MRI scanner, during which EPI images were acquired for fMRI analysis. In Study I (Ersland et al., 1996) the subject alternated between tapping with his intact left-hand fingers and imagining "tapping" with the fingers of his amputated right arm. The results showed increased neuronal activation in the right motor cortex (precentral gyrus) when tapping with the fingers of the left hand, and a corresponding activation in the left motor cortex when imagining tapping with the fingers of the amputated right arm. Finger tapping with the intact left-hand fingers also resulted in a larger activated precentral area than imagined "finger tapping" with the fingers of the amputated right arm. In Study II (Rosen et al., 2001, in press) the same subject imagined painful and pleasurable finger movements, as well as still positions of the fingers of the amputated arm. The results showed larger activations over the motor cortex for imagining movement versus imagining the hand in a still position, and larger activations over the sensory cortex when imagining painful experiences. It can therefore be concluded that not only does imagery activate the same motor areas as real finger movements, but also that adding instructions of pain together with imagining moving the fingers intensified the activation compared with adding instructions about non-painful experiences. From these studies, it is clear that areas activated during actual motor execution are to a large extent also activated during mental imagery of the same motor commands. In this respect the present studies add to studies of visual imagery that have shown a similar correspondence in activation between actual object perception and imagery of the same object.

15.
Based on the observation that bimanual finger tapping movements tend toward mirror symmetry with respect to the body midline, despite the synchronous activation of non-homologous muscles, F. Mechsner, D. Kerzel, G. Knoblich, and W. Prinz (2001) [Perceptual basis of bimanual coordination. Nature, 414, 69-73] suggested that the basis of rhythmic coordination is purely spatial/perceptual in nature, and independent of the neuro-anatomical constraints of the motor system. To investigate this issue further, we employed a four finger tapping task similar to that used by F. Mechsner and G. Knoblich (2004) [Do muscle matter in bimanual coordination? Journal of Experimental Psychology: Human Perception and Performance, 30, 490-503] in which six male participants were required to alternately tap combinations of adjacent pairs of index (I), middle (M) and ring (R) fingers of each hand in time with an auditory metronome. The metronome pace increased continuously from 1 Hz to 3 Hz over the course of a 30-s trial. Each participant performed three blocks of trials in which finger combination for each hand (IM or MR) and mode of coordination (mirror or parallel) were presented in random order. Within each block, the right hand was placed in one of three orientations; prone, neutral and supine. The order of blocks was counterbalanced across the six participants. The left hand maintained a prone position throughout the experiment. On the basis of discrete relative phase analyses between synchronised taps, the time at which the initial mode of coordination was lost was determined for each trial. When the right hand was prone, transitions occurred only from parallel symmetry to mirror symmetry, regardless of finger combination. In contrast, when the right hand was supine, transitions occurred only from mirror symmetry to parallel but no transitions were observed in the opposite direction. In the right hand neutral condition, mirror and parallel symmetry are insufficient to describe the modes of coordination since the hands are oriented orthogonally. When defined anatomically, however, the results in each of the three right hand orientations are consistent. That is, synchronisation of finger tapping is determined by a hierarchy of control of individual fingers based on their intrinsic neuro-mechanical properties rather than on the basis of their spatial orientation.
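A discrete relative phase analysis of the kind mentioned above can be sketched as follows; the pairing of taps, the 0-degree target mode, and the 90-degree tolerance used to flag a transition are illustrative assumptions rather than the study's actual criteria.

```python
# Illustrative sketch of discrete relative phase between paired, synchronised taps.
import numpy as np

def discrete_relative_phase(left_taps: np.ndarray, right_taps: np.ndarray) -> np.ndarray:
    """Phase (deg) of each right-hand tap within the concurrent left-hand
    tap cycle; assumes the arrays hold the same number of paired taps."""
    cycles = np.diff(left_taps)                  # left-hand inter-tap intervals
    return 360.0 * (right_taps[:-1] - left_taps[:-1]) / cycles

def transition_index(phases: np.ndarray, target: float = 0.0, tolerance: float = 90.0):
    """Index of the first tap whose phase leaves the intended coordination mode."""
    deviation = np.abs((phases - target + 180.0) % 360.0 - 180.0)
    out_of_mode = np.flatnonzero(deviation > tolerance)
    return int(out_of_mode[0]) if out_of_mode.size else None

# Hypothetical tap times (s): the right hand starts in phase, then drifts.
left = np.arange(0.0, 10.0, 0.5)
right = left + np.linspace(0.0, 0.4, left.size)
print(transition_index(discrete_relative_phase(left, right)))
```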

16.
The "body image" is a putative mental representation of one's own body, including structural and geometric details, as well as the more familiar visual and affective aspects. Very little research has investigated how we learn the structure of our own body, with most researchers emphasising the canonical visual representation of the body when we look at ourselves in a mirror. Here, we used non-visual self-touch in healthy participants to investigate the possibility that primary sensorimotor experience may influence cognitive representations of one's own body structure. Participants used the fingers of one hand (the 'active' hand), to touch the fingers of the other (the 'passive' hand). A conflict between the experience of the active and passive hand was introduced by experimenter interleaving their fingers with the fingers of the participant's passive hand. This led to the active hand experiencing that it touched more fingers than the passive hand felt it was being touched by. The effects on representation of body structure were assessed using an implicit measure based on Kinsbourne and Warrington's 'in-between task'. We found an underestimation of the number of fingers in the central part of the hand specifically linked to the experience of self-touch. This pattern of results corresponds to the experience of the passive hand, but not the active hand. Nevertheless, comparable reorganisation of fingers within the hand representation was found for both active and passive hands. We show that primary sensorimotor experience can modify the representation of body structure. This modification is driven by the passive experience of being touched, rather than by the active experience of touching. We believe this is the first experimental study of effects of self-touch on the mental representation of the body.  相似文献   

17.
Using a reaction time experiment, we examined whether imagining a response would lead to an increase in the frequency of its execution. During a pre-test and a post-test, participants had to respond as quickly as possible with either their left or their right hand, as they preferred, to the illumination of one of 17 target positions arrayed in front of them in a semicircle. Between these two phases, participants performed a practice condition. Each of 40 right-handed participants was assigned to one of four groups that differed in their practice condition: One group made only dominant-hand responses to all target locations, two imagery groups imagined dominant hand responses to all target locations, and the last group received a no-practice, control task. One imagery group received instructions emphasizing that imagery has a strong effect; the second group received instructions suggesting that imagery was not effective. The results showed an increased incidence of the practised response for both imagery groups during the post-test. No effect was found for the physical performance group and the control group. The change in performance for the imagery groups was not accompanied by a change in reaction time. The results are discussed in terms of imagining the realization of action possibilities and from a neuropsychological point of view.

18.
Psychophysical techniques were used to examine how subpopulations of visual neurons varying in their ocular dominance interacted in determining performance on a visual task. Using an asymmetric alternating adaptation of the left and right eyes, we manipulated the sensitivity of monocularly driven neurons while keeping the sensitivity of binocularly driven neurons constant. Relative threshold elevations were measured in the left eye, right eye, and both eyes of five observers following different ratios of alternating adaptation. It was found that whereas monocularly measured aftereffects varied monotonically as a function of the adaptation duration of the measured eye, the magnitude of the binocularly measured aftereffect remained constant regardless of how the adaptation was divided between the two eyes. This suggests that neurons differing in their ocular dominance pool their activity in determining sensitivity to a test target.

19.
Observers were adapted to simulated auditory movement produced by dynamically varying the interaural time and intensity differences of tones (500 or 2,000 Hz) presented through headphones. At 10-sec intervals during adaptation, various probe tones were presented for 1 sec (the frequency of the probe was always the same as that of the adaptation stimulus). Observers judged the direction of apparent movement ("left" or "right") of each probe tone. At 500 Hz, with a 200-deg/sec adaptation velocity, "stationary" probe tones were consistently judged to move in the direction opposite to that of the adaptation stimulus. We call this result an auditory motion aftereffect. In slower velocity adaptation conditions, progressively less aftereffect was demonstrated. In the higher frequency condition (2,000 Hz, 200-deg/sec adaptation velocity), we found no evidence of a motion aftereffect. The data are discussed in relation to the well-known visual analog, the "waterfall effect." Although the auditory aftereffect is weaker than the visual analog, the data suggest that auditory motion perception might be mediated, as is generally believed for the visual system, by direction-specific movement analyzers.
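A minimal sketch of the kind of headphone stimulus described could look like the following; the study varied both interaural time and intensity differences, whereas for brevity this example sweeps only the level difference, and all parameter values are assumptions rather than the study's settings.

```python
# Hypothetical sketch: a 500 Hz tone whose interaural level difference is swept
# over time so that it appears to move from left to right over headphones.
import numpy as np

fs = 44100                                     # sample rate (Hz), assumed
duration = 1.0                                 # seconds, assumed
t = np.arange(int(fs * duration)) / fs
tone = np.sin(2 * np.pi * 500.0 * t)           # 500 Hz carrier

ild_db = np.linspace(12.0, -12.0, t.size)      # +12 dB (left louder) down to -12 dB
left_ch = tone * 10 ** (+ild_db / 40)          # half the level difference per ear
right_ch = tone * 10 ** (-ild_db / 40)
stereo = np.column_stack([left_ch, right_ch])  # (n_samples, 2) array, ready for playback
```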

20.
Prism adaptation involves a proprioceptive, a visual and a motor component. As the existing paradigms are not able to distinguish between these three components, the contribution of the proprioceptive component remains unclear. In the current study, a proprioceptive judgement task, in the absence of motor responses, was used to investigate how prism adaptation specifically influences the felt position of the hands in healthy participants. The task was administered before and after adaptation to left- and right-displacing prisms, using either the left or the right hand during the adaptation procedure. The results appeared to suggest that the prisms induced a drift in the felt position of the hands, although the after-effect depended on the combination of the pointing hand and the visual deviation induced by the prisms. The results are interpreted as being in line with the hypothesis of an asymmetrical neural architecture of somatosensory processing. Moreover, passive proprioception of the hand position revealed different effects of proprioceptive re-alignment compared to active pointing straight ahead: different mechanisms for how the visuo-proprioceptive discrepancy is resolved were hypothesized.

