Similar Literature
20 similar documents found.
1.
Blind readers were tested using two methods of reading text displayed by an Apple microcomputer. The first method employed an Optacon system, a device that displays tactile representations of single characters, and the second used an interactive single electronic braille cell that displayed grade 1 braille characters. The results demonstrated no difference in accuracy or reading speed between these two methods. Thus, the serial presentation of braille characters at a single position appears to be a viable method of information transfer between computers and braille readers.

2.
To guide the movement of the body through space, the brain must constantly monitor the position and movement of the body in relation to nearby objects. The effective piloting of the body to avoid or manipulate objects in pursuit of behavioural goals requires an integrated neural representation of the body (the body schema) and of the space around the body (peripersonal space). In the review that follows, we describe and evaluate recent results from neurophysiology, neuropsychology, and psychophysics in both human and non-human primates that support the existence of an integrated representation of visual, somatosensory, and auditory peripersonal space. Such a representation involves primarily visual, somatosensory, and proprioceptive modalities, operates in body-part-centred reference frames, and demonstrates significant plasticity. Recent research shows that the use of tools, and the viewing of one's body or body parts in mirrors or on video monitors, may also modulate the visuotactile representation of peripersonal space.

3.
Objectives: We compared the mental representation of sound directions in blind football players, blind non-athletes and sighted individuals. Design: Standing blindfolded in the middle of a circle with 16 loudspeakers, participants judged whether the directions of two subsequently presented sounds were similar or not. Method: Structure dimensional analysis (SDA) was applied to reveal mean cluster solutions for the groups. Results: Hierarchical cluster analysis via SDA resulted in distinct representation structures of sound directions. The blind football players' mean cluster solution consisted of pairs of neighboring directions. The blind non-athletes also clustered the directions in pairs, but included non-adjacent directions. In the sighted participants' structure, frontal directions were clustered pairwise, the absolute back was singled out, and the side regions accounted for more directions. Conclusions: Our results suggest that the mental representation of egocentric auditory space is influenced by sight and by the level of expertise in auditory-based orientation and navigation.
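The clustering in this study was obtained with structure dimensional analysis (SDA); as a loose illustration of the general idea only, the hedged Python sketch below applies standard agglomerative clustering (SciPy) to a hypothetical dissimilarity matrix derived from pairwise similarity judgments over 16 directions. The data, linkage method, and cut-off threshold are assumptions, not the authors' procedure.

```python
# Illustrative sketch only (not the SDA algorithm used in the paper).
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import squareform

rng = np.random.default_rng(0)
n_directions = 16

# Hypothetical dissimilarity matrix: proportion of "not similar" responses
# for each pair of sound directions (symmetric, zero diagonal).
judgments = rng.random((n_directions, n_directions))
dissimilarity = (judgments + judgments.T) / 2
np.fill_diagonal(dissimilarity, 0.0)

# Average-linkage hierarchical clustering on the condensed distance matrix.
Z = linkage(squareform(dissimilarity), method="average")

# Cut the dendrogram at an assumed threshold to obtain a cluster solution.
clusters = fcluster(Z, t=0.5, criterion="distance")
print(clusters)  # cluster label for each of the 16 directions
```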

4.
According to embodied cognition, bodily interactions with our environment shape the perception and representation of our body and the surrounding space, that is, peripersonal space. To investigate the adaptive nature of these spatial representations, we introduced a multisensory conflict between vision and proprioception in an immersive virtual reality. During individual bimanual interaction trials, we gradually shifted the visual hand representation. As a result, participants unknowingly shifted their actual hands to compensate for the visual shift. We then measured the adaptation to the invoked multisensory conflict by means of a self-localization and an external localization task. While effects of the conflict were observed in both tasks, the effects systematically interacted with the type of localization task and the available visual information while performing the localization task (i.e., the visibility of the virtual hands). The results imply that the localization of one’s own hands is based on a multisensory integration process, which is modulated by the saliency of the currently most relevant sensory modality and the involved frame of reference. Moreover, the results suggest that our brain strives for consistency between its body and spatial estimates, thereby adapting multiple, related frames of reference, and the spatial estimates within, due to a sensory conflict in one of them.

5.
Summary: The purpose of the present experiment was to analyse the metric of haptic space. The subjects were shown a goal-point after having felt the sides of a right triangle with differing lengths. The subjects were then required to estimate the position of the goal-point in one of two ways, either along the hypotenuse from the starting point (a cognitive spatial orientation task) or via the sides touched previously (a perceptual spatial orientation task). They were asked to do this either by making a hand movement which would approximate the distance (motor estimate) or verbally (verbal estimate). There were four groups of subjects: congenitally blind (CB), adventitiously blind (AB), sighted under blindfold (SB), and sighted people with visual pre-orientation (SV), with 10 subjects in each group. The major results of the experiment were: (1) With all groups but SV, a distortion of the shortest-distance estimates away from the Euclidean metric towards the city-block metric was observed. (2) The groups AB and SB produced the largest distortion from the Euclidean metric. (3) In the case of motor estimates, both cognitive and perceptual spatial orientation tasks led to congruent metrics. (4) With the exception of group SV, verbal and motor estimates led to divergent metrics. The results are considered from the points of view of (a) empiricist theories, (b) theories of equal laws of structure for all sensory modalities, and (c) the hypothesis of transposition. This investigation was supported in part by the German Research Association (Deutsche Forschungsgemeinschaft); head of the research project: Prof. Dr. F. Merz. I thank the staff of the Blind Mobility Research Unit of the University of Nottingham for their support in correcting the English translation.
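To make the two metrics concrete: for a right triangle with legs a and b, the Euclidean shortest distance runs along the hypotenuse, sqrt(a^2 + b^2), whereas the city-block metric sums the two legs, a + b; a distortion toward the city-block metric means distance estimates drift from the former toward the latter. The minimal sketch below illustrates the comparison with invented leg lengths, not values from the study.

```python
import math

def euclidean_distance(a: float, b: float) -> float:
    """Straight-line (hypotenuse) distance for legs a and b of a right triangle."""
    return math.hypot(a, b)

def city_block_distance(a: float, b: float) -> float:
    """City-block (Minkowski r = 1) distance: the sum of the two legs."""
    return a + b

# Invented leg lengths in centimetres, for illustration only.
a, b = 30.0, 40.0
print(euclidean_distance(a, b))   # 50.0
print(city_block_distance(a, b))  # 70.0
```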

6.
The extended mind thesis (EM) asserts that some cognitive processes are (partially) composed of actions consisting of the manipulation and exploitation of environmental structures. Might some processes at the root of social cognition have a similarly extended structure? In this paper, I argue that social cognition is fundamentally an interactive form of space management—the negotiation and management of “we-space”—and that some of the expressive actions involved in the negotiation and management of we-space (gesture, touch, facial and whole-body expressions) drive basic processes of interpersonal understanding and thus do genuine social-cognitive work. Social interaction is a kind of extended social cognition, driven and at least partially constituted by environmental (non-neural) scaffolding. Challenging the Theory of Mind paradigm, I draw upon research from gesture studies, developmental psychology, and work on Moebius Syndrome to support this thesis.

7.
Philosophical considerations as well as several recent studies from neurophysiology, neuropsychology, and psychophysics converge in showing that peripersonal space (i.e., the space closely surrounding the body parts) is structured in a body-centred manner and represented through integrated sensory inputs. Multisensory representations may serve the function of coding peripersonal space for avoiding or interacting with objects. Neuropsychological evidence is reviewed for dynamic interactions between space representations and action execution, as revealed by the behavioural effects that the use of a tool, as a physical extension of the reachable space, produces on visual-tactile extinction. In particular, tool use transiently modifies the representation of action space in a functionally effective way. The possibility is discussed that investigating multisensory space representations for action provides an empirical way to address pre-reflexive self-consciousness in its specificity, by considering the intertwining of self-relatedness and object-directedness of spatial experience shaped by multisensory and sensorimotor integrations.

8.
Events are often perceived in multiple modalities. The co-occurring proximal visual and auditory stimulus events are usually also causally linked to the distal event, which makes it difficult to evaluate whether learned correlation or perceived causation guides binding in multisensory perception. Piano tones are an interesting exception: they are associated with the act of the pianist striking keys, an event that is visible to the perceiver, but directly result from hammers hitting strings, an event that typically is not visible to the perceiver. We examined the influence of seeing the hammer or the keystroke on auditory temporal order judgments (TOJs). Participants judged the temporal order of a dog bark and a piano tone while seeing the piano stroke shifted temporally relative to its audio signal. Visual lead increased “piano-first” responses in the auditory TOJ, but more so if the associated keystroke was visible than if the sound-producing hammer was visible, even though both were equally visually salient. This provides evidence for a learning account of audiovisual perception.

9.
The concurrent presentation of different auditory and visual syllables may result in the perception of a third syllable, reflecting an illusory fusion of visual and auditory information. This well-known McGurk effect is frequently used for the study of audio-visual integration. Recently, it was shown that the McGurk effect is strongly stimulus-dependent, which complicates comparisons across perceivers and inferences across studies. To overcome this limitation, we developed the freely available Oldenburg audio-visual speech stimuli (OLAVS), consisting of 8 different talkers and 12 different syllable combinations. The quality of the OLAVS set was evaluated with 24 normal-hearing subjects. All 96 stimuli were characterized based on their stimulus disparity, which was obtained from a probabilistic model (cf. Magnotti & Beauchamp, 2015). Moreover, the McGurk effect was studied in eight adult cochlear implant (CI) users. By applying the individual, stimulus-independent parameters of the probabilistic model, the predicted effect of stronger audio-visual integration in CI users could be confirmed, demonstrating the validity of the new stimulus material.

10.
The present study investigated how multisensory integration in peripersonal space is modulated by limb posture (i.e. whether the limbs are crossed or uncrossed) and limb congruency (i.e. whether the observed body part matches the actual position of one’s limb). This was done separately for the upper limbs (Experiment 1) and the lower limbs (Experiment 2). The crossmodal congruency task was used to measure peripersonal space integration for the hands and the feet. It was found that the peripersonal space representation for the hands but not for the feet is dynamically updated based on both limb posture and limb congruency. Together these findings show how dynamic cues from vision, proprioception, and touch are integrated in peripersonal limb space and highlight fundamental differences in the way in which peripersonal space is represented for the upper and lower extremity.

11.
Audiotactile multisensory interactions in human information processing
The last few years have seen a very rapid growth of interest in how signals from different sensory modalities are integrated in the brain to form the unified percepts that fill our daily lives. Research on multisensory interactions between vision, touch, and proprioception has revealed the existence of multisensory spatial representations that code the location of external events relative to our own bodies. In this review, we highlight recent converging evidence from both human and animal studies that has revealed that spatially-modulated multisensory interactions also occur between hearing and touch, especially in the space immediately surrounding the head. These spatial audiotactile interactions for stimuli presented close to the head can affect not only the spatial aspects of perception, but also various other non-spatial aspects of audiotactile information processing. Finally, we highlight some of the most important questions for future research in this area.

12.
In spatial sequence synaesthesia (SSS), ordinal stimuli are perceived as arranged in peripersonal space. Using fMRI, we examined the neural bases of SSS and colour synaesthesia for spoken words in a late-blind synaesthete, JF. He reported days of the week and months of the year as both coloured and spatially ordered in peripersonal space; parts of the day and festivities of the year were spatially ordered but uncoloured. Words that denote time units but triggered no concurrents were used in a control condition. Both conditions inducing SSS activated the occipito-parietal, infero-frontal and insular cortex. The colour area hOC4v was engaged when the synaesthetic experience included colour. These results confirm the continued recruitment of visual colour cortex in this late-blind synaesthete. Synaesthesia also involved activation in inferior frontal cortex, which may be related to spatial memory and detection, and in the insula, which might contribute to audiovisual integration related to the processing of inducers and concurrents.

13.
Our representation of peripersonal space does not always accurately reflect the physical world. An example of this is pseudoneglect, a phenomenon in which neurologically normal individuals bisect lines to the left of the veridical midpoint, reflecting an overrepresentation of the left portion of space compared with the right. Consistent biases have also been observed in the vertical and radial planes. It is an open question whether these biases depend on normal visual experience for their occurrence. Here we systematically investigated this issue by testing blindfolded sighted and early blind individuals in a haptic line bisection task. Critically, we found a robust leftward bias in all participants. In the vertical and radial planes, sighted participants showed a consistent downward and proximal bias. Conversely, the directional bias in blind participants depended on the final movement direction; thus, there was no general bias in either direction. These findings are discussed in terms of the different reference frames adopted by sighted and blind participants when encoding space.
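A hedged sketch of how a directional bisection bias of this kind is commonly quantified: the signed difference between each indicated bisection point and the true midpoint, averaged over trials, with negative values indicating a leftward (or downward/proximal) bias. The numbers below are invented for illustration and are not data from the study.

```python
import numpy as np

def bisection_bias(indicated_points, line_lengths):
    """Mean signed bisection error: indicated point minus true midpoint.

    Negative values indicate a leftward (or downward/proximal) bias,
    depending on the plane being tested.
    """
    indicated = np.asarray(indicated_points, dtype=float)
    midpoints = np.asarray(line_lengths, dtype=float) / 2.0
    return float(np.mean(indicated - midpoints))

# Invented example: positions (cm) at which a 20 cm rod was bisected on five trials.
print(bisection_bias([9.4, 9.7, 9.9, 9.5, 9.6], [20, 20, 20, 20, 20]))  # ≈ -0.38, a leftward bias
```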

14.
Ash A, Palmisano S, Kim J. Perception, 2011, 40(2): 155-174
We examined vection induced during physical or simulated head oscillation along either the horizontal or the depth axis. In the first two experiments, during active conditions, subjects viewed radial-flow displays that simulated viewpoint oscillation either in phase or out of phase with their own tracked head movements. In passive conditions, stationary subjects viewed playbacks of displays generated in earlier active conditions. A third, control experiment was also conducted in which physical and simulated fore-aft oscillation was added to a lamellar-flow display. Consistent with ecology, when active in-phase horizontal oscillation was added to a radial-flow display, it modestly improved vection compared to active out-of-phase and passive conditions. However, when active fore-aft head movements were added to either a radial-flow or a lamellar-flow display, both in-phase and out-of-phase conditions produced very similar vection. Our research shows that consistent multisensory input can enhance the visual perception of self-motion in some situations. However, it is clear that multisensory stimulation does not have to be consistent (i.e., ecological) to generate compelling vection in depth.
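As a hedged illustration of the in-phase versus out-of-phase manipulation (not the authors' display code), the sketch below derives a simulated-viewpoint offset from tracked head position: in-phase conditions move the display with the head, out-of-phase conditions invert the sign. The gain, amplitude, and sampling rate are assumptions.

```python
import numpy as np

def simulated_viewpoint(head_positions, in_phase: bool, gain: float = 1.0):
    """Simulated viewpoint displacement for each tracked head-position sample.

    In-phase: the display oscillates with the head (ecologically consistent).
    Out-of-phase: the display oscillates against the head (sign-inverted).
    """
    head = np.asarray(head_positions, dtype=float)
    sign = 1.0 if in_phase else -1.0
    return sign * gain * head

# Assumed example: 1 Hz sinusoidal head oscillation, 5 cm amplitude, sampled at 60 Hz for 2 s.
t = np.arange(0, 2, 1 / 60)
head = 0.05 * np.sin(2 * np.pi * 1.0 * t)
display_in_phase = simulated_viewpoint(head, in_phase=True)
display_out_of_phase = simulated_viewpoint(head, in_phase=False)
```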

15.
It is well accepted that multisensory integration has a facilitative effect on perceptual and motor processes, evolutionarily enhancing the chance of survival of many species, including humans. Yet, there is limited understanding of the relationship between multisensory processes, environmental noise, and children's cognitive abilities. Thus, this study investigated the relationship between multisensory integration, auditory background noise, and the general intellectual abilities of school-age children (N = 88, mean age = 9 years, 7 months) using a simple audiovisual detection paradigm. We provide evidence that children with enhanced multisensory integration in quiet and noisy conditions are likely to score above average on the Full-Scale IQ of the Wechsler Intelligence Scale for Children-Fourth Edition (WISC-IV). Conversely, approximately 45% of tested children, with relatively low verbal and nonverbal intellectual abilities, showed reduced multisensory integration in either quiet or noise. Interestingly, approximately 20% of children showed improved multisensory integration abilities in the presence of auditory background noise. The findings of the present study suggest that stable and consistent multisensory integration in quiet and noisy environments is associated with the development of optimal general intellectual abilities. Further theoretical implications are discussed.

16.
Humans tend to represent numbers in the form of a mental number line. Here we show that the mental number line can modulate the representation of peripersonal haptic space in a crossmodal fashion and that this interaction is not visually mediated. Sighted and early-blind participants were asked to haptically explore rods of different lengths and to indicate midpoints of those rods. During each trial, either a small (2) or a large (8) number was presented in the auditory modality. When no numbers were presented, participants tended to bisect the rods to the left of the actual midpoint, consistent with the notion of pseudoneglect. In both groups, this bias was significantly increased by the presentation of a small number and was significantly reduced by the presentation of a large number. Hence, spatial shifts of attention induced by number processing are not limited to visual space or embodied responses but extend to haptic peripersonal space and occur crossmodally without requiring the activation of a visuospatial representation.

17.
Previous studies have shown that adults respond faster and more reliably to bimodal compared to unimodal localization cues. The current study investigated for the first time the development of audiovisual (A-V) integration in spatial localization behavior in infants between 1 and 10 months of age. We observed infants' head and eye movements in response to auditory, visual, or both kinds of stimuli presented either 25 degrees or 45 degrees to the right or left of midline. Infants under 8 months of age intermittently showed response latencies significantly faster toward audiovisual targets than toward either auditory or visual targets alone. They did so, however, without exhibiting a reliable violation of the Race Model, suggesting that probability summation alone could explain the faster bimodal responses. In contrast, infants between 8 and 10 months of age exhibited bimodal response latencies significantly faster than unimodal latencies for both eccentricity conditions, and their latencies violated the Race Model at 25 degrees of eccentricity. In addition to this main finding, we found age-dependent eccentricity and modality effects on response latencies. Together, these findings suggest that audiovisual integration emerges late in the first year of life and are consistent with neurophysiological findings from multisensory sites in the superior colliculus of infant monkeys showing that multisensory enhancement of responsiveness is not present at birth but emerges later in life.
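The Race Model referred to here is typically tested with Miller's race model inequality: at each latency t, the bimodal cumulative response-time distribution should not exceed the sum of the two unimodal distributions if probability summation alone explains the speed-up. The sketch below is a minimal illustration with invented reaction times, not the study's data or analysis code.

```python
import numpy as np

def ecdf(rts, t_grid):
    """Empirical cumulative distribution of reaction times evaluated on a grid."""
    rts = np.sort(np.asarray(rts, dtype=float))
    return np.searchsorted(rts, t_grid, side="right") / len(rts)

def race_model_violation(av_rts, a_rts, v_rts, t_grid):
    """Bimodal CDF minus the race-model bound min(F_A(t) + F_V(t), 1).

    Positive values indicate latencies at which responses were faster than
    probability summation allows (a violation of Miller's inequality).
    """
    bound = np.minimum(ecdf(a_rts, t_grid) + ecdf(v_rts, t_grid), 1.0)
    return ecdf(av_rts, t_grid) - bound

# Invented reaction times in milliseconds, for illustration only.
a_rts = [420, 450, 480, 510, 540]
v_rts = [430, 460, 490, 520, 550]
av_rts = [330, 350, 370, 400, 430]
t_grid = np.arange(300, 600, 10)
print(np.max(race_model_violation(av_rts, a_rts, v_rts, t_grid)) > 0)  # True, i.e. a violation
```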

18.
Recent studies on the conceptualization of abstract concepts suggest that the concept of time is represented along a left-right horizontal axis, such that left-to-right readers represent past on the left and future on the right. Although it has been demonstrated with strong consistency that the localization (left or right) of visual stimuli could modulate temporal judgments, results obtained with auditory stimuli are more puzzling, with both failures and successes at finding the effect in the literature. The present study supports an account based on the relative relevance of visual versus auditory-spatial information in the creation of a frame of reference to map time: The auditory location of words interacted with their temporal meaning only when auditory information was made more relevant than visual spatial information by blindfolding participants.

19.
It is currently unknown whether statistical learning is supported by modality-general or modality-specific mechanisms. One issue within this debate concerns the independence of learning in one modality from learning in other modalities. In the present study, the authors examined the extent to which statistical learning across modalities is independent by simultaneously presenting learners with auditory and visual streams. After establishing baseline rates of learning for each stream independently, they systematically varied the amount of audiovisual correspondence across 3 experiments. They found that learners were able to segment both streams successfully only when the boundaries of the audio and visual triplets were in alignment. This pattern of results suggests that learners are able to extract multiple statistical regularities across modalities provided that there is some degree of cross-modal coherence. They discuss the implications of their results in light of recent claims that multisensory statistical learning is guided by modality-independent mechanisms.
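Segmentation in studies of this kind is usually explained through transitional probabilities: within a triplet, P(next element | current element) is high, and it drops at triplet boundaries. The hedged sketch below computes those probabilities for a synthetic stream; the syllables and triplets are invented, not the study's stimuli.

```python
from collections import Counter
import random

def transitional_probabilities(stream):
    """Estimate P(next element | current element) from a sequence."""
    pair_counts = Counter(zip(stream, stream[1:]))
    first_counts = Counter(stream[:-1])
    return {pair: count / first_counts[pair[0]] for pair, count in pair_counts.items()}

# Invented triplets, concatenated in random order to form a continuous stream.
triplets = [("bi", "da", "ku"), ("pa", "do", "ti"), ("go", "la", "tu")]
random.seed(1)
stream = [syllable for _ in range(100) for syllable in random.choice(triplets)]

tps = transitional_probabilities(stream)
print(tps[("bi", "da")])           # within-triplet transition: 1.0
print(tps.get(("ku", "pa"), 0.0))  # across-boundary transition: well below 1.0
```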

20.
The configuration of the mental representation of space plays a major role in successful navigational activities. Therefore, navigational assistance for pedestrians who are blind should help them to better configure their mental representation of the environment. In this paper, we propose and exploit a computational model of the mental representation of urban areas as an aid to orientation and navigation for visually impaired pedestrians. Our model uses image schemata to capture the spatial semantics and configural elements of urban space necessary for this purpose. These image schemata are schematic structures that individuals continually draw upon in their perception, bodily movement and interaction with surrounding objects. Our proposed model also incorporates a hierarchical structure to provide different levels of detail tied to appropriate spatial perspectives at each scale. We presume that such a computational model will help us to develop an appropriate structure of spatial data used to assist the target population. At the end of the paper, we illustrate the utility of our configural model by developing a typical scenario for the navigation of a blind pedestrian in an urban area.
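The abstract describes the model only conceptually; purely as a hedged sketch of one way such a hierarchy could be encoded, the Python fragment below attaches image-schema labels and a level of detail to nested urban elements. All class names, levels, and labels are assumptions for illustration, not the authors' implementation.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class SpatialElement:
    """One node in a hierarchical representation of urban space (illustrative only)."""
    name: str
    level: int                                               # e.g. 0 = city, 1 = district, 2 = street, 3 = landmark
    image_schemata: List[str] = field(default_factory=list)  # e.g. "PATH", "CONTAINER", "LOCATION"
    children: List["SpatialElement"] = field(default_factory=list)

    def elements_at_level(self, level: int) -> List["SpatialElement"]:
        """Collect all elements at a given level of detail."""
        found = [self] if self.level == level else []
        for child in self.children:
            found.extend(child.elements_at_level(level))
        return found

# Hypothetical fragment of an urban hierarchy.
city = SpatialElement("City", 0, ["CONTAINER"], [
    SpatialElement("Main Street", 2, ["PATH"], [
        SpatialElement("Bus stop", 3, ["LOCATION"]),
    ]),
])
print([e.name for e in city.elements_at_level(3)])  # ['Bus stop']
```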
