Similar Articles
20 similar articles found (search time: 15 ms)
1.
Typically, multiple cues can be used to generate a particular percept. Our area of interest is the extent to which humans are able to synergistically combine cues that are generated when moving through an environment. For example, movement through the environment leads to both visual (optic-flow) and vestibular stimulation, and studies have shown that non-human primates are able to combine these cues to generate a more accurate perception of heading than can be obtained with either cue in isolation. Here we investigate whether humans show a similar ability to synergistically combine optic-flow and vestibular cues. This was achieved by determining sensitivity to optic-flow stimuli while physically moving the observer, thereby producing a vestibular signal that was either consistent with the optic-flow signal (e.g. a radially expanding pattern coupled with forward motion) or inconsistent with it (e.g. a radially expanding pattern coupled with backward motion). Results indicate that humans are more sensitive to motion-in-depth optic-flow stimuli when they are combined with complementary vestibular signals than when they are combined with conflicting vestibular signals. These results indicate that in humans, as in non-human primates, there is perceptual integration of visual and vestibular signals.

2.
A study is reported of the relations between vestibular sensitivity and vection chronometry in healthy human adults. Twenty-three subjects were examined. For both the vestibular and vection investigations, the subjects were seated in an armchair with the spinal axis aligned with the earth vertical and the head normally erect. The subjects' vestibular thresholds for detection of vertical upward accelerations were assessed by a double-staircase psychophysical method. The subjects' vection onset latencies were measured for both upward and downward directions. Since vection onset latencies are presumed to be shortened by a decrease in the conflict between visual and vestibular afferents, the less vestibular-sensitive subjects were hypothesised to have shorter vection onset latencies than the more vestibular-sensitive ones. As expected, the results indicate a negative correlation between vestibular thresholds and vection onset latencies: the higher the vestibular thresholds, the lower the vection onset latencies.

3.
Individual speechreading abilities have been linked with a range of cognitive and language-processing factors. The role of specifically visual abilities in relation to the processing of visible speech is less studied. Here we report that the detection of coherent visible motion in random-dot kinematogram displays is related to speechreading skill in deaf, but not in hearing, speechreaders. A control task requiring the detection of visual form showed no such relationship. Additionally, people born deaf were better speechreaders than hearing people on a new test of silent speechreading.

4.
5.
6.
Short-term memory processes in the deaf (cited 1 time: 0 self-citations, 1 by others)

7.
8.
9.
10.
11.
In this study, we investigated how deaf children express their anger towards peers and with what intentions. Eleven-year-old deaf children (n=21) and a hearing control group (n=36) were offered four vignettes describing anger-evoking conflict situations with peers. Children were asked how they would respond, how the responsible peer would react, and what would happen to their relationship. Deaf children employed the communicative function of anger expression differently from hearing children. Whereas hearing children used anger expression to reflect on the anguish that another child caused them, deaf children used it rather bluntly and explained less. Moreover, deaf children expected less empathic responses from the peer causing them harm. Both groups did, however, expect equally often that the relationship with the peer would stay intact. These findings are discussed in the light of deaf children's impaired emotion socialization secondary to their limited communication skills.

12.
The handedness patterns of 226 deaf high-school and college students were compared to those of 210 college students with normal hearing. Both groups evidenced many more right-handed than left-handed members, as determined by responses to a hand preference questionnaire and performance on an activity test battery. There was, however, a significantly higher incidence of left-handedness among the deaf subjects than among the hearing. Moreover, the left-handed deaf students were found to be less likely to have deaf relatives, and to have been introduced to sign language later in their development than the deaf student population as a whole. These findings were interpreted as showing that age of acquisition of language was related to the development of handedness patterns, whereas auditory processing experience probably was not.

13.
Temporal processing in deaf signers (cited 4 times: 0 self-citations, 4 by others)
The auditory and visual modalities differ in their capacities for temporal analysis, and speech relies on more rapid temporal contrasts than does sign language. We examined whether congenitally deaf signers show enhanced or diminished capacities for processing rapidly varying visual signals in light of the differences in sensory and language experience of deaf and hearing individuals. Four experiments compared rapid temporal analysis in deaf signers and hearing subjects at three different levels: sensation, perception, and memory. Experiment 1 measured critical flicker frequency thresholds and Experiment 2, two-point thresholds to a flashing light. Experiments 3 and 4 investigated perception and memory for the temporal order of rapidly varying nonlinguistic visual forms. In contrast to certain previous studies, specifically those investigating the effects of short-term sensory deprivation, no significant differences between deaf and hearing subjects were found at any level. Deaf signers do not show diminished capacities for rapid temporal analysis, in comparison to hearing individuals. The data also suggest that the deficits in rapid temporal analysis reported previously for children with developmental language delay cannot be attributed to lack of experience with speech processing and production.

14.
15.
The nature of hemispheric processing in the prelingually deaf was examined in a picture-letter matching task. It was hypothesized that linguistic competence in the deaf would be associated with normal or near-normal laterality (i.e., a left hemisphere advantage for analytic linguistic tasks). Subjects were shown a simple picture of a common object (e.g., lamp), followed by brief unilateral presentation of a manually signed or orthographic letter, and they had to indicate as quickly as possible whether the letter was present in the spelling of the object's label. While hearing subjects showed a marked left hemisphere advantage, no such superiority was found for either a linguistically skilled or unskilled group of deaf students. In the skilled group, however, there was a suggestion of a right hemisphere advantage for manually signed letters. It was concluded that while hemispheric asymmetry of function does not develop normally in the deaf, the absence of this normal pattern does not preclude the development of the analytic skills needed to deal with the structure of language.

16.
17.
A series of experiments was carried out to determine the type of linguistic input used by profoundly prelinguistically deaf subjects who had acquired a phonological code which enabled them to match homophones and identify rhymes. The results indicated that the tasks were primarily done by using visual information from lipreading, and that the subjects did not rely greatly on similarities of written representation, lexical information, or motor feedback from the articulators to perform the phonological matching tasks.

18.
19.
20.
Sign language displays all the complex linguistic structure found in spoken languages, but conveys its syntax in large part by manipulating spatial relations. This study investigated whether deaf signers who rely on a visual-spatial language nonetheless show a principled cortical separation for language and nonlanguage visual-spatial functioning. Four unilaterally brain-damaged deaf signers, fluent in American Sign Language (ASL) before their strokes, served as subjects. Three had damage to the left hemisphere and one had damage to the right hemisphere. They were administered selected tests of nonlanguage visual-spatial processing. The pattern of performance of the four patients across this series of tests suggests that deaf signers show hemispheric specialization for nonlanguage visual-spatial processing that is similar to that of hearing, speaking individuals. The patients with damage to the left hemisphere, in general, appropriately processed visual-spatial relationships, whereas, in contrast, the patient with damage to the right hemisphere showed consistent and severe visual-spatial impairment. The language behavior of these patients was much the opposite, however. Indeed, the most striking separation between linguistic and nonlanguage visual-spatial functions occurred in the left-hemisphere patient who was most severely aphasic for sign language. Her signing was grossly impaired, yet her visual-spatial capacities across the series of tests were surprisingly normal. These data suggest that the two cerebral hemispheres of congenitally deaf signers can develop separate functional specialization for nonlanguage visual-spatial processing and for language processing, even though sign language is conveyed in large part via visual-spatial manipulation.


Copyright © 北京勤云科技发展有限公司 (Beijing Qinyun Technology Development Co., Ltd.)  京ICP备09084417号