Similar Literature
20 similar documents retrieved
1.
Human Movement Science, 1999, 18(2-3): 281-305
Eight right-handed participants performed a bilateral circle tracing task in symmetric or asymmetric patterns. Circle tracing was performed in synchrony with an auditory metronome and a visual display at, or comfortably below, each participant's transition frequency. The visual display consisted of a row of five light-emitting diodes (LEDs) arranged between the two circles (hands). Bimanual pattern stability was examined under conditions where the direction of illumination of the visual stimuli was compatible or incompatible with the hand direction. Symmetric patterns maintained stability at both movement rates, whereas asymmetric patterns exhibited a loss of stability at the transition frequency. Spontaneous reversals in circling direction occurred predominantly (94%) through the nondominant hand. Laterality effects were also evident in the aspect ratio (circularity of trajectories) and limb frequency variation, particularly in asymmetric patterns at the transition frequency. Compatibility between the stimulus direction and circling direction served to: stabilise symmetric patterns; stabilise asymmetric patterns by delaying the onset of transition; and stabilise the individual limb dynamics when the direction of the dominant side was compatible with the visual stimulus. The data from this multisegmental task lend support to a model of coupled oscillators whereby the coupling strength is anisotropic between the dominant and nondominant sides, and lend further support to an account of manual asymmetries by way of a preferential perception–action coupling through the dominant limb. PsycINFO Classification: 2320
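The anisotropically coupled oscillator model invoked above can be illustrated with a minimal simulation: two phase oscillators, one per hand, whose mutual coupling strengths differ so that the dominant side pulls on the nondominant side more strongly than the reverse. The sketch below (Python) is a generic HKB-style toy model under assumed parameter values; it is not the authors' fitted model, and every name and number in it is illustrative.

    # Toy sketch (assumed parameters): two coupled phase oscillators with
    # anisotropic coupling, in the spirit of the model referenced above.
    import numpy as np

    def simulate(omega=2 * np.pi * 1.5,   # common cycling frequency (rad/s)
                 a_dom_to_nondom=1.5,     # dominant hand pulls the nondominant hand strongly
                 a_nondom_to_dom=0.5,     # nondominant hand pulls the dominant hand weakly
                 b_ratio=0.3,             # weight of the 2*phi coupling term
                 phi0=0.95 * np.pi,       # start near the asymmetric (anti-phase) pattern
                 dt=0.001, T=20.0):
        """Euler-integrate both phases; return time and relative phase."""
        n = int(T / dt)
        theta_d = np.zeros(n)             # dominant hand phase
        theta_n = np.full(n, -phi0)       # nondominant hand phase, offset by -phi0
        for k in range(n - 1):
            d = theta_d[k] - theta_n[k]   # relative phase (dominant minus nondominant)
            # The dominant hand is only weakly perturbed by the nondominant hand...
            theta_d[k + 1] = theta_d[k] + dt * (
                omega - a_nondom_to_dom * (np.sin(d) + b_ratio * np.sin(2 * d)))
            # ...while the nondominant hand is strongly drawn toward the dominant hand,
            # so most of the adjustment (the "reversal") is carried by the nondominant side.
            theta_n[k + 1] = theta_n[k] + dt * (
                omega - a_dom_to_nondom * (np.sin(-d) + b_ratio * np.sin(-2 * d)))
        t = np.arange(n) * dt
        rel = np.mod(theta_d - theta_n + np.pi, 2 * np.pi) - np.pi
        return t, rel

    t, phi = simulate()
    print(f"relative phase drifts from {phi[0]:.2f} rad toward {phi[-1]:.2f} rad (in-phase = 0)")

With these assumed values the anti-phase pattern loses stability and the system settles into in-phase coordination, with the more strongly driven (nondominant) oscillator doing most of the adjusting, consistent in spirit with the asymmetric transitions described in the abstract.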

2.
Altered vision near the hands
Abrams RA, Davoli CC, Du F, Knapp WH, Paull D. Cognition, 2008, 107(3): 1035-1047
The present study explored the manner in which hand position may affect visual processing. We studied three classic visual attention tasks (visual search, inhibition of return, and attentional blink) during which the participants held their hands either near the stimulus display, or far from the display. Remarkably, the hands altered visual processing: people shifted their attention between items more slowly when their hands were near the display. The same results were observed for both visible and invisible hands. This enhancement in vision for objects near the hands reveals a mechanism that could facilitate the detailed evaluation of objects for potential manipulation, or the assessment of potentially dangerous objects for a defensive response.

3.
Previous research has revealed that a stimulus presented in the blind visual field of participants with visual hemifield defects can evoke oculomotor competition, in the absence of awareness. Here we studied three cases to determine whether a distractor in a blind hemifield would be capable of inducing a global effect, a shift of saccade endpoint when target and distractor are close to each other, in participants with lesions of the optic radiations or striate cortex. We found that blind field distractors significantly shifted saccadic endpoints in two of three participants with lesions of either the striate cortex or distal optic radiations. The direction of the effect was paradoxical, however, in that saccadic endpoints shifted away from blind field distractors, whereas endpoints shifted towards distractors in the visible hemifields, which is the normal global effect. These results provide further evidence that elements presented in the blind visual field can generate modulatory interactions in the oculomotor system, which may differ from interactions in normal vision.

4.
Interacting with other people is a ubiquitous part of daily life. A complex set of processes enables our successful interactions with others. The present research was conducted to investigate how the processing of visual stimuli may be affected by the presence and the hand posture of a co-actor. Experiments conducted with participants acting alone have revealed that the distance from the stimulus to the hand of a participant can alter visual processing. In the main experiment of the present paper, we asked whether this posture-related source of visual bias persists when participants share the task with another person. The effect of personal and co-actor hand-proximity on visual processing was assessed through object-specific benefits to visual recognition in a task performed by two co-actors. Pairs of participants completed a joint visual recognition task and, across different blocks of trials, the position of their own hands and of their partner's hands varied relative to the stimuli. In contrast to control studies conducted with participants acting alone, an object-specific recognition benefit was found across all hand location conditions. These data suggest that visual processing is, in some cases, sensitive to the posture of a co-actor.

5.
Using eye-tracking, we examined whether the mental rotation involved in a left/right hand judgment task is influenced by the initial posture of the participants' own hands. Reaction time and eye-movement data from two experiments showed that (1) mental rotation was influenced by the initial posture of the participants' own hands, yielding a congruency effect; (2) there was a significant medial (inward) rotation effect; and (3) participants' fixation sampling during mental rotation was unevenly distributed. These results indicate that, in the left/right hand judgment task, what is rotated is a representation of the participant's own hand, i.e., a self-referenced mental rotation, and that the inward-rotation effect arises from biomechanical constraints acting on the rotation of one's own hand representation, rather than from the rotation of an image of the stimulus picture being influenced by "knowledge of biomechanical constraints".

6.
Although previous work provides evidence that observers experience biases in visual processing when they view stimuli in perihand space, a few recent investigations have questioned the reliability of these near-hand effects. We addressed this controversy by running three pre-registered replication experiments. Experiment 1 was a replication of one of the initial studies on facilitated target detection near the hands in which participants performed an attentional cueing task while placing a single hand either near or far from the display. Experiment 2 tested the same paradigm while adopting the design of a recent experiment that called into question near-hand facilitation. Experiment 3 was a replication of a study in which hand proximity influenced working memory performance in a change detection paradigm. Across all three experiments, we found significant interactions between hand position and stimulus characteristics that indicated the hands' presence altered visual processing, bolstering evidence favouring the robustness of near-hand effects.

7.
Although reasoning seems to be inextricably linked to seeing in the "mind's eye", the evidence is equivocal. In three experiments, sighted, blindfolded sighted, and congenitally totally blind persons solved deductive inferences based on three sorts of relation: (a) visuo-spatial relations that are easy to envisage either visually or spatially, (b) visual relations that are easy to envisage visually but hard to envisage spatially, and (c) control relations that are hard to envisage both visually and spatially. In absolute terms, congenitally totally blind persons performed less accurately and more slowly than the sighted on all such tasks. In relative terms, however, the visual relations in comparison with control relations impeded the reasoning of sighted and blindfolded participants, whereas congenitally totally blind participants performed the same with the different sorts of relation. We conclude that mental images containing visual details that are irrelevant to an inference can even impede the process of reasoning. Persons who are blind from birth, and who thus do not tend to construct visual mental images, are immune to this visual-impedance effect.

8.
Hermens F, Gielen S. Perception, 2003, 32(2): 235-248
In this study we investigated the perception and production of line orientations in a vertical plane. Previous studies have shown that systematic errors are made when participants have to match oblique orientations visually and haptically. Differences in the setup for visual and haptic matching did not allow for a quantitative comparison of the errors. To investigate whether matching errors are the same for different modalities, we asked participants to match a visually presented orientation visually, haptically with visual feedback, and haptically without visual feedback. The matching errors were the same in all three matching conditions. Horizontal and vertical orientations were matched correctly, but systematic errors were made for the oblique orientations. The errors depended on the viewing position from which the stimuli were seen, and on the distance of the stimulus from the observer.

9.
Visuo-manual interaction in visual short-term memory (VSTM) has been investigated little, despite its importance in everyday tasks requiring the coordination of visual perception and manual action. This study examines the influence of a manual action performed during stimulus learning on a subsequent VSTM test for object appearance. The memory display comprised a sequence of briefly presented 1/f noise discs (i.e., possessing spectral properties akin to natural images), wherein each new stimulus was presented at a unique screen location. Participants either did (or did not) perform a concurrent manual action (spatial tapping) task requiring that a hand-held stylus be moved to a position on a touch tablet that corresponded (or did not correspond) to the screen position of each new stimulus as it appeared. At test, a single stimulus was presented, either at one of the original screen positions, or at a new position. Two factors were examined: the execution (or otherwise) of spatial tapping at a corresponding or non-corresponding position, and the presentation of test stimuli either at their original spatial positions, or at new positions. We find that spatial tapping at corresponding positions elevates VSTM performance by more than 15%, but this occurs only when stimulus positions are matched from memory to test display. Our findings suggest that multimodal attentional focus during stimulus encoding (incorporating visual, spatial, and manual components) leads to stronger, more robust memory representations. We posit several possible explanations for this effect.
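The 1/f noise stimuli described above are commonly generated by shaping the amplitude spectrum of white noise so that it falls off as 1/f, which mimics the spectral statistics of natural images. The sketch below (Python) shows one standard way to build such a disc; it is not the authors' stimulus-generation code, and the patch size and spectral exponent are assumed values.

    # Minimal sketch (assumed size and exponent): 1/f-amplitude noise masked to a disc,
    # approximating the kind of stimuli described in the abstract above.
    import numpy as np

    def pink_noise_disc(size=256, exponent=1.0, seed=0):
        """Return a size x size array of 1/f^exponent noise inside a circular aperture."""
        rng = np.random.default_rng(seed)
        white = rng.standard_normal((size, size))
        spectrum = np.fft.fft2(white)
        fy = np.fft.fftfreq(size)[:, None]
        fx = np.fft.fftfreq(size)[None, :]
        f = np.sqrt(fx ** 2 + fy ** 2)
        f[0, 0] = 1.0                                        # avoid dividing by zero at DC
        img = np.real(np.fft.ifft2(spectrum / f ** exponent))
        img = (img - img.min()) / (img.max() - img.min())    # normalise to [0, 1]
        yy, xx = np.mgrid[:size, :size]
        r = np.hypot(xx - size / 2, yy - size / 2)
        return img * (r <= size / 2)                         # zero outside the disc

    disc = pink_noise_disc()
    print(disc.shape, float(disc.min()), float(disc.max()))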

10.
Hemispheric asymmetries were investigated by changing the horizontal position of stimuli that had to be remembered in a visuo-spatial short-term memory task. Observers looked at matrices containing a variable number of filled squares on the left or right side of the screen center. At stimulus offset, participants reproduced the positions of the filled squares in an empty response matrix. Stimulus and response matrices were presented in the same quadrant. We observed that memory performance was better when the matrices were shown on the left side of the screen. We distinguished between recall strategies that relied on visual or non-visual (verbal) cues and found that the effect of gaze position occurred more reliably in participants using visual recall strategies. Overall, the results show that there is a solid enhancement of visuo-spatial short-term memory when observers look to the left. In contrast, vertical position had no influence on performance. We suggest that unilateral gaze to the left activates centers in the right hemisphere contributing to visuo-spatial memory.

11.
In four experiments, participants made a speeded manual response to a tone and concurrently selected a cued visual target from a masked display for later unspeeded report. In contrast to a previous study by H. Pashler (1991), systematic interactions between the two tasks were obtained. First, accuracy in both tasks decreased with decreasing stimulus (tone-display) onset asynchrony (SOA), presumably due to a conflict between stimulus and response coding. Second, spatial correspondence between the manual response and the visual target produced better performance in the visual task and, at short SOAs, in the tone task as well, presumably due to the overlap of the spatial codes used by stimulus- and response-selection processes. Third, manual responding slowed with increasing SOA, reflecting either a functional bottleneck or strategic queuing of target selection and response selection. These results suggest that visual stimulus selection and manual response selection are distinct mechanisms that operate on common representations.

12.
Reaching to targets in a virtual reality environment with misaligned visual feedback of the hand results in changes in movements (visuomotor adaptation) and sense of felt hand position (proprioceptive recalibration). We asked if proprioceptive recalibration arises even when the misalignment between visual and proprioceptive estimates of hand position is only experienced during movement. Participants performed a "shooting task" through the targets with a cursor that was rotated 30° clockwise relative to hand motion. Results revealed that, following training on the shooting task, participants adapted their reaches to all targets by approximately 16° and recalibrated their sense of felt hand position by 8°. Thus, experiencing a sensory misalignment between visual and proprioceptive estimates of hand position during movement leads to proprioceptive recalibration.
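The 30° clockwise cursor rotation described above amounts to applying a fixed rotation matrix to the hand's displacement from the start position before the cursor is drawn. The sketch below (Python) illustrates only that transformation; it is not the authors' experiment code, and the coordinate convention (x right, y up) is an assumption.

    # Minimal sketch (assumed coordinate convention): rotate the hand's displacement
    # 30 degrees clockwise before rendering the cursor, as in the shooting task above.
    import numpy as np

    def rotated_cursor(hand_xy, origin_xy, rotation_deg=-30.0):
        """Map a hand position to a cursor position rotated about the start position."""
        theta = np.radians(rotation_deg)          # negative angle = clockwise with x right, y up
        R = np.array([[np.cos(theta), -np.sin(theta)],
                      [np.sin(theta),  np.cos(theta)]])
        origin = np.asarray(origin_xy, dtype=float)
        displacement = np.asarray(hand_xy, dtype=float) - origin
        return origin + R @ displacement

    # A straight-ahead 10 cm reach is displayed 30 degrees to the right of the hand,
    # so participants gradually adapt their movement direction to compensate.
    print(rotated_cursor(hand_xy=(0.0, 10.0), origin_xy=(0.0, 0.0)))   # ~[5.0, 8.66]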

13.
Fifteen autistic and 15 normal Ss were trained to respond to a card containing two visual cues. After this training discrimination was established, the children were tested on the single cues in order to assess whether one or both stimuli had acquired control over their responding. The autistic children (12 of 15) gave evidence for stimulus overselectivity in that they responded correctly to only one of the two component cues. On the other hand, the normal children (12 of 15) showed clear evidence of control by both component cues of the training card. These results were consistent with previous studies, where autistic children showed overselectivity when presented with multiple sensory input in several modalities. However, autistic children now appear to have difficulty responding to multiple cues even when both cues are in the same modality. These results were discussed in relation to the experimental literature on selective attention in normally functioning organisms.

14.
The authors examined the resolution of a discrepancy between visual and proprioceptive estimates of arm position in 10 participants. The participants fixed their right shoulder at 0°, 30°, or 60° of transverse adduction while they viewed a video on a head-mounted display that showed their right arm extended in front of the trunk for 30 min. The perceived arm position more closely approached the seen arm position on the display as the difference between the actual and visually displayed arm positions increased. In the extreme case of a 90° discrepancy, the seen arm position on the display was very gradually perceived as approaching the actual arm position. The magnitude of changes in sensory estimates was larger for proprioception (20%) than for vision (< 10%).

15.
Two experiments investigated competing explanations for the reversal of spatial stimulus-response (S-R) correspondence effects (i.e., Simon effects) with an incompatible S-R mapping on the relevant, nonspatial dimension. Competing explanations were based on generalized S-R rules (logical-recoding account) or referred to display-control arrangement correspondence or to S-S congruity. In Experiment 1, compatible responses to finger-name stimuli presented at left/right locations produced normal Simon effects, whereas incompatible responses to finger-name stimuli produced an inverted Simon effect. This finding supports the logical-recoding account. In Experiment 2, spatial S-R correspondence and color S-R correspondence were varied independently, and main effects of these variables were observed. The lack of an interaction between these variables, however, disconfirms a prediction of the display-control arrangement correspondence account. Together, the results provide converging evidence for the logical-recoding account. This account claims that participants derive generalized response selection rules (e.g., the identity or reversal rule) from specific S-R rules and inadvertently apply the generalized rules to the irrelevant (spatial) S-R dimension when selecting their response.

16.
Previous research on the interaction between manual action and visual perception has focused on discrete movements or static postures and discovered better performance near the hands (the near-hand effect). However, in everyday behaviors, the hands are usually moving continuously between possible targets. Therefore, the current study explored the effects of continuous hand motion on the allocation of visual attention. Eleven healthy adults performed a visual discrimination task during cyclical concealed hand movements underneath a display. Both the current hand position and its movement direction systematically contributed to participants' visual sensitivity. Discrimination performance increased substantially when the hand was distant from but moving toward the visual probe location (a far-hand effect). Implications of this novel observation are discussed.

17.
The aim of this study was to investigate the extent to which tactile information that is unavailable for full conscious report can be accessed using partial-report procedures. In Experiment 1, participants reported the total number of tactile stimuli (up to six) presented simultaneously to their fingertips (numerosity judgment task). In another condition, after being presented with the tactile display, they had to detect whether or not the position indicated by a (visual or tactile) probe had previously contained a tactile stimulus (partial-report task). Participants correctly reported up to three stimuli in the numerosity judgment task, but their performance was far better in the partial-report task: Up to six stimuli were perceived at the shortest target-probe intervals. A similar pattern of results was observed when the participants performed a concurrent articulatory suppression task (Exp. 2). The results of a final experiment revealed that performance in the partial-report task was overall better for stimuli presented on the fingertips than for stimuli presented across the rest of the body surface. These results demonstrate that tactile information that is unavailable for report in a numerosity task can nevertheless sometimes still be accessed when a partial-report procedure is used instead.

18.
This study examined the effects of visual-verbal load (as measured by a visually presented reading-memory task with three levels) on a visual/auditory stimulus-response task. The three levels of load were defined as follows: "No Load" meant no other stimuli were presented concurrently; "Free Load" meant that a letter (A, B, C, or D) appeared at the same time as the visual or auditory stimulus; and "Force Load" was the same as "Free Load," but the participants were also instructed to count how many times the letter A appeared. The stimulus-response task also had three levels: "irrelevant," "compatible," and "incompatible" spatial conditions, which required different key-pressing responses. The visual stimulus was a red ball presented either to the left or to the right of the display screen, and the auditory stimulus was a tone delivered from a position similar to that of the visual stimulus. Participants also processed an irrelevant stimulus. The results indicated that participants perceived auditory stimuli earlier than visual stimuli and reacted faster under stimulus-response compatible conditions. These results held even under a high visual-verbal load. These findings suggest the following guidelines for systems used in driving: an auditory source, appropriately compatible signal and manual-response positions, and a visually simplified background.

19.
When searching for a target with eye movements, saccades are planned and initiated while the visual information is still being processed. If hand movements are needed to perform a search task, can they too be planned while visual information from the current position is still being processed? To find out, we studied a visual search task in which participants had to move their hand to shift a window through which they could see the items. The task was to find an O in a circle of Cs. The size of the window and the sizes of the gaps in the Cs were varied. Participants made fast, smooth arm movements between items and adjusted their movements, when on the items, to the window size. On many trials the window passed the target and returned, indicating that the next movement had been planned before the item that was in view had been identified.

20.
Typically, gait rehabilitation uses an invariant stimulus paradigm to improve gait-related deficiencies. However, this approach may not be optimal, as it does not incorporate gait complexity, or, more precisely, the variable fractal-like nature of the gait fluctuations commonly observed in healthy populations. Aging, which also affects gait complexity and results in a loss of adaptability to the surrounding environment, could likewise benefit from gait rehabilitation that incorporates a variable fractal-like stimulus paradigm. Therefore, the present study aimed to investigate the effect of a variable fractal-like visual stimulus on the stride-to-stride fluctuations of older adults during overground walking. Additionally, our study aimed to investigate potential retention effects by instructing the participants to continue walking after the stimulus was turned off. Older adults walked for 8 min with (i) no stimulus (self-paced), (ii) a variable fractal-like visual stimulus, and (iii) an invariant visual stimulus. In the two visual stimulus conditions, the participants walked an additional 8 min after the stimulus was turned off. Gait complexity was evaluated with the widely used fractal scaling exponent calculated through detrended fluctuation analysis of the stride time intervals. We found a significant ~20% increase in the scaling exponent from the no-stimulus condition to the variable fractal-like stimulus condition. However, no differences were found when the older adults walked to the invariant stimulus. The observed increase was towards the values previously found to characterize healthy young adults. We also observed that these positive effects were retained even after the stimulus was turned off in the fractal condition, in effect acutely restoring the gait complexity of older adults. These very promising results should motivate researchers and clinicians to conduct clinical trials investigating the potential of variable fractal-like visual stimuli for gait rehabilitation.
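The fractal scaling exponent referred to above is typically estimated with detrended fluctuation analysis (DFA): the stride-time series is integrated, divided into windows, linearly detrended within each window, and the root-mean-square fluctuation is regressed against window size on log-log axes; the slope is the scaling exponent alpha (near 0.5 for uncorrelated noise, near 1.0 for the fractal-like fluctuations of healthy young gait). The sketch below (Python) is a compact generic DFA implementation, not the authors' analysis code; the window-size choices are assumptions.

    # Generic DFA sketch (assumed window sizes): estimate the scaling exponent alpha
    # of a stride-time series, as described in the abstract above.
    import numpy as np

    def dfa_alpha(x, window_sizes=None):
        """Detrended fluctuation analysis; returns the scaling exponent alpha."""
        x = np.asarray(x, dtype=float)
        profile = np.cumsum(x - x.mean())                 # integrated (profile) series
        if window_sizes is None:
            window_sizes = np.unique(
                np.logspace(np.log10(4), np.log10(len(x) // 4), 15).astype(int))
        fluctuation = []
        for n in window_sizes:
            n_windows = len(profile) // n
            segments = profile[: n_windows * n].reshape(n_windows, n)
            t = np.arange(n)
            rms = []
            for seg in segments:                          # linearly detrend each window
                trend = np.polyval(np.polyfit(t, seg, 1), t)
                rms.append(np.sqrt(np.mean((seg - trend) ** 2)))
            fluctuation.append(np.mean(rms))
        # Slope of log F(n) versus log n is the scaling exponent alpha.
        alpha, _ = np.polyfit(np.log(window_sizes), np.log(fluctuation), 1)
        return alpha

    # White noise should yield alpha near 0.5; healthy young gait is closer to 1.0.
    rng = np.random.default_rng(0)
    print(round(dfa_alpha(rng.standard_normal(1024)), 2))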

