Similar Articles
20 similar articles were retrieved.
1.
Vernat, J.-P. & Gordon, M. S. (2010). Indirect interception actions by blind and visually impaired perceivers: Echolocation for interceptive actions. Scandinavian Journal of Psychology, 51, 75–83.
This research examined the acoustic information used to support interceptive actions by the blind. Congenitally blind and severely visually impaired participants (all wearing an opaque, black eye-mask) were asked to listen to a target ball rolling down a track. In response, participants rolled their own ball along a perpendicular path to intercept the target. To better understand what information was used, the echoic conditions and rolling dynamics of the target were varied across test sessions. In addition, the rolling speed of the target and the distance of the participant from the target were varied across trials. Results demonstrated that participants tended to perform most accurately at moderate speeds and distances, overestimating the target's arrival at the fastest speed and underestimating it at the slowest speed. However, changes to the target's dynamics, that is, the amount of deceleration it underwent on approach, did not strongly influence performance. Echoic conditions were found to affect performance: participants were slightly more accurate in conditions with faster, higher-intensity echoes. Based on these results, the blind individuals in this research appeared to use spatial and temporal cues to coordinate their interceptive actions.

2.
In 3 experiments, the authors investigated and described how individuals control manual interceptive movements to slowly moving targets. Participants (N = 8 in each experiment) used a computer mouse and graphics tablet assembly to manually intercept targets moving across a computer screen toward a marked target zone. They moved the cursor so that it would arrive in the target zone simultaneously with the target. In Experiment 1, there was a range of target velocities, including some very slow targets. In Experiment 2, there were 2 movement-distance conditions: participants moved the cursor either the same distance as the target or twice as far. In both experiments, hand speed was found to be related to target speed, even for the very slowly moving targets and when the target-to-cursor distance ratios were altered, suggesting that participants may have used a strategy similar to tracking. To test that notion, in Experiment 3 the authors added a tracking task in which participants tracked the target cursor into the target zone. More time was spent planning the interception movements, whereas the tracking movements showed a longer deceleration phase, suggesting that more visually guided trajectory updates were made in that condition. Thus, although participants scaled their interception movements to the target speed, they used a different strategy than in tracking. It is proposed that during target interception, anticipatory mechanisms are used rather than the visual feedback mechanism used when tracking and when pointing to stationary targets.
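As a rough illustration of the speed scaling at issue (the helper function and the numbers below are assumptions for illustration, not the study's task parameters): if the cursor must arrive in the target zone at the same moment as the target, the required average hand speed equals the target speed multiplied by the cursor-to-target distance ratio.

```python
# Required average hand (cursor) speed for simultaneous arrival:
# the cursor must cover its own distance in the time the target takes
# to reach the target zone, so v_hand = v_target * (d_cursor / d_target).
# Values are illustrative assumptions, not the study's data.
def required_hand_speed(target_speed, target_distance, cursor_distance):
    time_to_arrival = target_distance / target_speed
    return cursor_distance / time_to_arrival

# Equal-distance condition vs. a 2x-distance condition:
print(required_hand_speed(2.0, 10.0, 10.0))   # 2.0 (same as target speed)
print(required_hand_speed(2.0, 10.0, 20.0))   # 4.0 (twice the target speed)
```

This is why doubling the cursor's travel distance in Experiment 2 roughly doubles the hand speed required for the same target.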

3.
Sighted individuals are less accurate and slower at localizing sounds coming from peripheral space than sounds coming from frontal space. This bias in favour of the frontal auditory space seems reduced in early blind individuals, who are notably better than sighted individuals at localizing sounds coming from peripheral space. It is currently unclear whether this bias is a general phenomenon of auditory processing or applies only to spatial processing (i.e. sound localization). We therefore compared the performance of early blind participants with that of sighted subjects during a frequency discrimination task with sounds originating either from frontal or from peripheral locations. Results showed that early blind participants discriminated both peripheral and frontal sounds faster than sighted subjects. In addition, sighted subjects were faster at discriminating frontal sounds than peripheral ones, whereas early blind participants showed equal discrimination speed for frontal and peripheral sounds. We conclude that the spatial bias observed in sighted subjects reflects an imbalance in the spatial distribution of auditory attention resources induced by visual experience.

4.
In this research, the impact of visual experience on the capacity to use egocentric (body-centered) and allocentric (object-centered) representations in combination with categorical (invariant, non-metric) and coordinate (variable, metric) spatial relations was examined. Participants memorized triads of 3D objects through haptic exploration (congenitally blind, adventitiously blind, and blindfolded sighted) or haptic plus visual exploration (sighted), and were then asked to judge: "Which object was closest to/farthest from you?" (egocentric-coordinate); "Which object was on your left/right?" (egocentric-categorical); "Which object was closest to/farthest from a target object (e.g., the cone)?" (allocentric-coordinate); "Which object was on the left/right of the target object (e.g., the cone)?" (allocentric-categorical). The results showed a slowdown in processing time when congenitally blind people provided allocentric-coordinate judgments and when adventitiously blind people provided egocentric-categorical judgments. Moreover, in egocentric judgments, adventitiously blind participants were less accurate than sighted participants. However, overall performance was quite good, which supports the idea that the differences observed are quantitative rather than qualitative. The theoretical implications of these results are discussed.

5.
The aim of this research is to assess whether the crucial factor determining the characteristics of blind people's spatial mental images is the visual impairment per se or the processing style imposed by the dominant perceptual modality used to acquire spatial information, i.e. simultaneous (vision) vs. sequential (kinaesthesis). Participants were asked to learn six positions in a large parking area either via movement alone (congenitally blind, adventitiously blind, blindfolded sighted) or with vision plus movement (simultaneous sighted, sequential sighted), and then to mentally scan between positions on the path. The crucial manipulation concerned the sequential sighted group: their visual exploration was made sequential by placing visual obstacles along the pathway so that they could not see the positions simultaneously. The results revealed a significant time/distance linear relation in all tested groups. However, the linear component was lower in sequential sighted and blind participants, especially the congenitally blind; sequential sighted and congenitally blind participants showed an almost overlapping performance. Differences between groups became evident when mentally scanning farther distances (more than 5 m). This threshold effect may reveal processing limitations due to the need to integrate and update spatial information. Overall, the results suggest that the characteristics of the processing style, rather than the visual impairment per se, affect blind people's spatial mental images.

6.
Acta Psychologica (2013), 142(3), 394–401.
The integration of separate, yet complementary, cortical pathways appears to play a role in visual perception and action when intercepting objects. The ventral system is responsible for object recognition and identification, while the dorsal system facilitates continuous regulation of action. This dual-system model implies that empirically manipulating different visual information sources during performance of an interceptive action might lead to the emergence of distinct gaze and movement pattern profiles. To test this idea, we recorded hand kinematics and eye movements of participants as they attempted to catch balls projected from a novel apparatus that synchronised or de-synchronised the accompanying video images of a throwing action and the ball trajectory. Results revealed that ball catching was less successful when patterns of hand movements and gaze behaviours were constrained by the absence of advanced perceptual information from the thrower's actions. Under these task constraints, participants began tracking the ball later, followed less of its trajectory, and adapted their actions by initiating movements later and moving the hand faster. There were no performance differences when the throwing action image and ball speed were synchronised versus de-synchronised, since hand movements were closely linked to information from the ball trajectory. Results are interpreted relative to the two-visual-system hypothesis, demonstrating that accurate interception requires integration of advanced visual information from the kinematics of the throwing action and from the ball flight trajectory.

7.
We investigated the role of vision in tactile enumeration within and outside the subitizing range. Congenitally blind and sighted (blindfolded) participants were asked to enumerate quickly and accurately the number of fingers stimulated. Both groups enumerated one to three fingers quickly and accurately but were much slower and less accurate with four to nine fingers. Within the subitizing range, blind participants performed no differently from either sighted-blindfolded or sighted-seeing participants. Outside the subitizing range, blind and sighted-seeing participants showed better performance than sighted-blindfolded participants, suggesting that lack of access to the predominant sensory modality does affect performance. Together, these findings further support the claim that subitizing is a general perceptual mechanism and demonstrate that vision is not necessary for the development of the subitizing mechanism.

8.
People are highly skilled at intercepting moving objects and are capable of remarkably accurate timing. The timing accuracy required depends upon the period of time for which contact with a moving target is possible, the "time window" for successful interception. Studies of performance in an experimental interception task that allows this time window to be manipulated suggest that people change aspects of their performance (movement time, MT, and movement speed) in response to changes in the time window. However, that research did not establish whether the observed changes in performance were the result of a response to the time window per se or of independent responses to the quantities defining the time window (the size and speed of the moving target). Experiment 1 was designed to resolve this issue. The speed and size of the target were both varied, resulting in variations in the time window; MT was the primary dependent measure. Predictions of the hypothesis that people respond directly to changes in the time window were verified, whereas predictions of the alternative hypothesis that responses to changes in target speed and size are independent of one another were not supported. Experiment 2 examined how the type of performance change observed in Experiment 1 was affected by changing the time available for executing the interception. The time available and the target speed were varied, and MT was again the primary dependent measure. MT was smaller when there was less time available, and the effect of target speed (and hence the time window) on MT was also smaller, becoming undetectable at the shortest available time (0.4 s). The results of the two experiments are interpreted as providing information about the "rule" used to preprogramme movement parameters in anticipatory interceptive actions.
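A minimal sketch of one common way to formalize the time window (an assumption for illustration; the abstract itself gives no formula): the window is roughly the interval during which the moving target and the intercepting effector can overlap, so it grows with target size and shrinks with target speed.

```python
# Hypothetical illustration of the "time window" for interception:
# the interval during which the target and the effector can be in contact.
# This formalization and the numbers are assumptions, not from the study.
def time_window(target_size, effector_size, target_speed):
    """Sizes in metres, speed in m/s; returns the window in seconds."""
    return (target_size + effector_size) / target_speed

# A faster or smaller target leaves a narrower window:
print(time_window(0.06, 0.02, 0.5))   # 0.16 s
print(time_window(0.06, 0.02, 1.0))   # 0.08 s
print(time_window(0.03, 0.02, 1.0))   # 0.05 s
```

On this formalization, varying target size and speed jointly is what lets the time window itself be manipulated independently of either quantity alone.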

9.
Five preliminary experiments on sighted individuals revealed marked overestimation on an object size-estimation task using a bimanual response. These experiments ruled out the possibility that the overestimation was due to the mode of visual presentation (two-dimensional or three-dimensional), the input modality (visual or kinesthetic), or the influence of other visual cues. The main experiment then investigated whether these distortions are due to visual experience by using a variant of the same task to test 24 blind and 24 sighted control participants. Remarkably, the sighted control participants overestimated object size, on average, but the blind participants did not. A follow-up experiment demonstrated that visual memory was the primary influence causing the size overestimations. We conclude that blind individuals are more accurate than sighted individuals in representing the size of familiar objects because they rely on manual representations, which are less influenced by visual experience than are visual memory representations.

10.
11.
Although reasoning seems to be inextricably linked to seeing in the “mind's eye”, the evidence is equivocal. In three experiments, sighted, blindfolded sighted, and congenitally totally blind persons solved deductive inferences based on three sorts of relation: (a) visuo-spatial relations that are easy to envisage either visually or spatially, (b) visual relations that are easy to envisage visually but hard to envisage spatially, and (c) control relations that are hard to envisage both visually and spatially. In absolute terms, congenitally totally blind persons performed less accurately and more slowly than the sighted on all such tasks. In relative terms, however, the visual relations in comparison with control relations impeded the reasoning of sighted and blindfolded participants, whereas congenitally totally blind participants performed the same with the different sorts of relation. We conclude that mental images containing visual details that are irrelevant to an inference can even impede the process of reasoning. Persons who are blind from birth—and who thus do not tend to construct visual mental images—are immune to this visual-impedance effect.

12.
Integrating different senses to reduce sensory uncertainty and increase perceptual precision can have an important compensatory function for individuals with visual impairment and blindness. However, how visual impairment and blindness impact the development of optimal multisensory integration in the remaining senses is currently unknown. Here we first examined how audio-haptic integration develops and changes across the life span in 92 sighted (blindfolded) individuals between 7 and 70 years of age. We used a child-friendly task in which participants had to discriminate different object sizes by touching them and/or listening to them. We assessed whether audio-haptic performance resulted in a reduction of perceptual uncertainty compared to auditory-only and haptic-only performance, as predicted by the maximum-likelihood estimation model. We then compared how this ability develops in 28 children and adults with different levels of visual experience, focussing on low-vision individuals and blind individuals who lost their sight at different ages during development. Our results show that in sighted individuals, adult-like audio-haptic integration develops around 13–15 years of age and remains stable until late adulthood. While early-blind individuals, even at the youngest ages, integrate audio-haptic information in an optimal fashion, late-blind individuals do not. Optimal integration in low-vision individuals follows a similar developmental trajectory to that of sighted individuals. These findings demonstrate that visual experience is not necessary for optimal audio-haptic integration to emerge, but that consistency of sensory information across development is key for the functional outcome of optimal multisensory integration.
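For reference, a minimal sketch of the maximum-likelihood (precision-weighted) cue-combination prediction the abstract tests against; the estimates and variances below are illustrative assumptions, not the study's data.

```python
# Precision-weighted (maximum-likelihood) combination of two cues:
# the optimal bimodal estimate weights each cue by its reliability
# (inverse variance), and its variance is lower than either unimodal one.
def mle_combine(est_a, var_a, est_h, var_h):
    w_a = (1 / var_a) / (1 / var_a + 1 / var_h)   # auditory weight
    w_h = 1 - w_a                                  # haptic weight
    combined_est = w_a * est_a + w_h * est_h
    combined_var = (var_a * var_h) / (var_a + var_h)
    return combined_est, combined_var

# Illustrative numbers only: auditory size estimate 5.2 (var 0.8),
# haptic estimate 4.8 (var 0.4) -> combined variance ~0.27 < 0.4.
print(mle_combine(5.2, 0.8, 4.8, 0.4))
```

"Optimal" integration in the abstract means that the measured bimodal variance falls near this predicted combined variance rather than near the better unimodal one.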

13.
14.
We investigated the role of visual experience in the spatial representation and updating of haptic scenes by comparing recognition performance across sighted, congenitally blind, and late blind participants. We first established that spatial updating of haptic scenes of novel objects occurs in sighted individuals. All participants were required to recognise a previously learned haptic scene of novel objects presented at either the same orientation as at learning or a different one, while they either remained in the same position or moved to a new position relative to the scene. Scene rotation incurred a cost in recognition performance in all groups. However, overall haptic scene recognition performance was worse in the congenitally blind group. Moreover, unlike the late blind or sighted groups, the congenitally blind group was unable to compensate for the cost of scene rotation with observer motion. Our results suggest that vision plays an important role in representing and updating spatial information encoded through touch, and they have important implications for the role of vision in the development of neuronal areas involved in spatial cognition.

15.
When people scan mental images, their response times increase linearly with increases in the distance to be scanned, which is generally taken as reflecting the fact that their internal representations incorporate the metric properties of the corresponding objects. In view of this finding, we investigated the structural properties of spatial mental images created from nonvisual sources in three groups (blindfolded sighted, late blind, and congenitally blind). In Experiment 1, blindfolded sighted and late blind participants created metrically accurate spatial representations of a small-scale spatial configuration under both verbal and haptic learning conditions. In Experiment 2, late and congenitally blind participants generated accurate spatial mental images after both verbal and locomotor learning of a full-scale navigable space (created by an immersive audio virtual reality system), whereas blindfolded sighted participants were selectively impaired in their ability to generate precise spatial representations from locomotor experience. These results attest that in the context of a permanent lack of sight, encoding spatial information on the basis of the most reliable currently functional system (the sensorimotor system) is crucial for building a metrically accurate representation of a spatial environment. The results also highlight the potential of spatialized audio-rendering technology for exploring the spatial representations of visually impaired participants.
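A minimal numerical sketch of the linear time/distance relation described above; the slope and intercept are made-up assumptions used only to show the form RT = a + b * d, not estimates from this study.

```python
import numpy as np

# Mental-scanning chronometry: response time grows linearly with the
# distance to be scanned, RT = a + b * d. The parameters are illustrative
# assumptions, not values from this study.
a, b = 0.6, 0.45          # intercept (s) and slope (s per unit distance)
distances = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
rts = a + b * distances

# Recovering the slope and intercept from (here noise-free) data:
slope, intercept = np.polyfit(distances, rts, 1)
print(slope, intercept)   # ~0.45, ~0.6
```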

16.
Although reasoning seems to be inextricably linked to seeing in the “mind's eye”, the evidence is equivocal. In three experiments, sighted, blindfolded sighted, and congenitally totally blind persons solved deductive inferences based on three sorts of relation: (a) visuo-spatial relations that are easy to envisage either visually or spatially, (b) visual relations that are easy to envisage visually but hard to envisage spatially, and (c) control relations that are hard to envisage both visually and spatially. In absolute terms, congenitally totally blind persons performed less accurately and more slowly than the sighted on all such tasks. In relative terms, however, the visual relations in comparison with control relations impeded the reasoning of sighted and blindfolded participants, whereas congenitally totally blind participants performed the same with the different sorts of relation. We conclude that mental images containing visual details that are irrelevant to an inference can even impede the process of reasoning. Persons who are blind from birth—and who thus do not tend to construct visual mental images—are immune to this visual-impedance effect.

17.
This study compared the sensory and perceptual abilities of the blind and sighted. The 32 participants were required to perform two tasks: tactile grating orientation discrimination (to determine tactile acuity) and haptic three-dimensional (3-D) shape discrimination. The results indicated that the blind outperformed their sighted counterparts (individually matched for both age and sex) on both tactile tasks. The improvements in tactile acuity that accompanied blindness occurred for all blind groups (congenital, early, and late). However, the improvements in haptic 3-D shape discrimination only occurred for the early-onset and late-onset blindness groups; the performance of the congenitally blind was no better than that of the sighted controls. The results of the present study demonstrate that blindness does lead to an enhancement of tactile abilities, but they also suggest that early visual experience may play a role in facilitating haptic 3-D shape discrimination.

18.
DB, the first blindsight case to be tested extensively (Weiskrantz, 1986), has demonstrated the ability to detect and discriminate a range of visual stimuli presented within his perimetrically blind visual field defect. In a temporal two-alternative forced-choice (2AFC) detection experiment we have investigated the limits of DB's detection ability within his field defect. Blind-field performance was compared with his sighted-field performance and with that of an age-matched control group (n = 6). DB reliably detected the presence of a small (2°), low-contrast (7%), 4.6 c/deg Gabor patch with the same space-averaged luminance as the background when it was presented within his blind field, but performed at chance at the same eccentricity (11.3°) within his sighted field. Investigation of detection as a function of stimulus contrast revealed that DB could detect an 8% contrast stimulus within his blind field, compared with 12% in his sighted field. The absence of a significant difference in detection performance between DB's sighted field and that of six age-matched control participants suggests that poor sighted-field performance does not account for the results. Monocular testing also rules out differences between the eyes as an explanation, suggesting that DB shows superior detection for certain stimuli within his visual field defect compared with normal vision.
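A minimal sketch of how contrast-detection thresholds such as the 8% (blind field) and 12% (sighted field) values above can be read off a 2AFC psychometric function; the logistic form and all parameter values are assumptions for illustration, not DB's data.

```python
import numpy as np

# In 2AFC detection, proportion correct rises from the 0.5 guess rate
# toward 1.0 as stimulus contrast increases. A common description is a
# logistic (or Weibull) psychometric function; the threshold is the
# contrast giving, e.g., 75% correct. Parameters here are illustrative.
def p_correct(contrast, threshold, slope):
    return 0.5 + 0.5 / (1.0 + np.exp(-slope * (contrast - threshold)))

contrasts = np.array([2, 4, 6, 8, 10, 12, 16])   # % contrast
blind_field = p_correct(contrasts, threshold=8.0, slope=0.9)
sighted_field = p_correct(contrasts, threshold=12.0, slope=0.9)
print(np.round(blind_field, 2))     # crosses 0.75 near 8% contrast
print(np.round(sighted_field, 2))   # crosses 0.75 near 12% contrast
```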

19.
The influence of visual experience on visual and spatial imagery
Differences are reported between blind and sighted participants on a visual-imagery task and a spatial-imagery task, but not on an auditory-imagery task. For the visual-imagery task, participants had to compare object forms on the basis of a verbally presented object name. In the spatial-imagery task, they had to compare the angular differences between the hands on two clock faces, again given only verbally presented clock times. Interestingly, there was a difference between early-blind and late-blind participants on the visual-imagery and spatial-imagery tasks: late-blind participants made more errors than sighted people on the visual-imagery task, while early-blind participants made more errors than sighted people on the spatial-imagery task. This difference suggests that, for visual (form) imagery, people use the channel currently available (haptic for the blind; visual for the sighted). For the spatial-imagery task in this study, reliance on haptic processing did not seem to suffice, and people benefited from visual experience and ability. However, the difference on the spatial-imagery task between early-blind and sighted people might also be caused by differences in experience with the analogue clock faces that formed the basis for the spatial judgments.
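The clock-hand comparison rests on simple clock geometry; the helper below is a hypothetical illustration of that arithmetic (not part of the study's materials), assuming the hour hand moves 30° per hour plus 0.5° per minute and the minute hand moves 6° per minute.

```python
# Angle between the hour and minute hands for a verbally presented time.
# Illustrative helper only; not taken from the study.
def hand_angle(hour, minute):
    hour_angle = 30 * (hour % 12) + 0.5 * minute   # degrees
    minute_angle = 6 * minute                      # degrees
    diff = abs(hour_angle - minute_angle)
    return min(diff, 360 - diff)

# Comparing two clock times, as in the spatial-imagery task:
print(hand_angle(3, 0), hand_angle(4, 50))   # 90.0 vs. 155.0 degrees
```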

20.
In the property listing task (PLT), participants are asked to list properties for a concept (e.g., for the concept dog, "barks" and "is a pet" may be produced). In conceptual property norming (CPN) studies, participants are asked to list properties for large sets of concepts. Here, we use a mathematical model of the property listing process to explore two longstanding issues: characterizing the difference between concrete and abstract concepts, and characterizing semantic knowledge in the blind versus the sighted population. When we apply our mathematical model to a large CPN that reports properties listed by sighted and blind participants, the model uncovers significant differences between concrete and abstract concepts. Although blind individuals show many of the same processing differences between abstract and concrete concepts found in sighted individuals, our model shows that those differences are noticeably less pronounced than in sighted individuals. We discuss our results vis-à-vis theories attempting to characterize abstract concepts.
