Similar Literature
A total of 20 similar articles were retrieved.
1.
2.
To examine the processing of sequentially presented letters of familiar and nonsense words, especially among Ss of vastly differing experience on sequential tasks, three groups of Ss were tested on letters of words spelled sequentially on an alphanumeric display and on letters of fingerspelled words. These were a deaf group (N=33) with little or no hearing who varied in their fingerspelling ability; a staff group (N=12) who taught fingerspelling and were highly proficient; and a hearing group (N=19). Of principal interest was the finding that the hearing Ss did better on nonsense-letter recognition, while the deaf group did better on word recognition. Word length was important except to the staff Ss on fingerspelled words, which suggests that concentration on fingerspelling proficiency forces attention to the whole word rather than to its component letters. Hearing Ss, the group faced with an unfamiliar task, seemed to attend to each letter and hence had more difficulty recognizing the longer unit.

3.
4.
This study explores the use of two types of facial expressions, linguistic and affective, in a lateralized recognition accuracy test with hearing and deaf subjects. The linguistic expressions are unfamiliar facial expressions for the hearing subjects, whereas they serve as meaningful linguistic emblems for deaf signers. Hearing subjects showed left visual field advantages for both types of signals, while deaf subjects' visual field asymmetries were greatly influenced by the order of presentation. The results suggest that for hearing persons, the right hemisphere may predominate in the recognition of all forms of facial expression. For deaf signers, hemispheric specialization for the processing of facial signals may be influenced by the different functions these signals serve in this population. The use of noncanonical facial signals in laterality paradigms is encouraged, as it provides an additional avenue of exploration into the underlying determinants of hemispheric specialization for the recognition of facial expression.

5.
A visual hemifield experiment investigated hemispheric specialization in hearing children and adults and in prelingually, profoundly deaf youngsters who had been exposed intensively to Cued Speech (CS). Of interest was whether deaf CS users, who develop the phonology and grammar of the spoken language much as hearing youngsters do, would display similar laterality patterns in the processing of written language. Semantic, rhyme, and visual judgement tasks were used. In the visual task no VF advantage was observed. An RVF (left hemisphere) advantage was obtained for both the deaf and the hearing subjects on the semantic task, supporting Neville's claim that the acquisition of competence in the grammar of a language is critical in establishing the specialization of the left hemisphere for language. For the rhyme task, however, an RVF advantage was obtained for the hearing subjects but not for the deaf ones, suggesting that different neural resources are recruited by deaf and hearing subjects. Hearing the sounds of language may be necessary to develop left-lateralised processing of rhymes.

6.
This experiment assessed the effect of different payoff matrices on 6 deaf and 6 hearing subjects on a visual brightness discrimination task. Subjects were required to make forced-choice responses to three different monetary payoff conditions, designed to induce a liberal, a conservative, and an equal-bias response criterion, respectively. The results showed that the deaf did not select the superior response strategies they had exhibited in a previous study (Bross, 1979) on the effect of changes in stimulus probability. Furthermore, the deaf earned significantly less money than the controls for all three conditions, indicating that the introduction of motivational demands affects their response strategies adversely.
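A note on the payoff-matrix logic described in the abstract above (this is standard signal detection theory, not an equation quoted from the study): for given payoffs and prior probabilities, the expected-value-maximizing observer sets the likelihood-ratio criterion at

\[ \beta_{\mathrm{opt}} = \frac{P(N)}{P(S)} \cdot \frac{V_{CR} + C_{FA}}{V_{H} + C_{M}} \]

where \(P(S)\) and \(P(N)\) are the prior probabilities of signal and noise, \(V_{H}\) and \(V_{CR}\) are the rewards for hits and correct rejections, and \(C_{M}\) and \(C_{FA}\) are the penalties for misses and false alarms. Weighting the matrix toward correct rejections or against false alarms pushes \(\beta\) above 1 (a conservative criterion), the opposite weighting pushes it below 1 (a liberal criterion), and a symmetric matrix with equal priors gives \(\beta = 1\) (equal bias); this is the sense in which the three payoff conditions are designed to induce different response criteria.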

7.
Visual cognitive differences between hearing (N = 16) and deaf (N = 32) high-school and middle-school students were studied. Visual tasks were presented on a microcomputer and response latencies were collected. Significant differences were noted between the deaf and hearing groups, but not between total-communication deaf and oral deaf students. These differences support the hypothesis that deaf students prefer a visual cognitive strategy. Implications for educating the deaf are discussed.

8.
9.
Based on anticipatory looking and reactions to violations of expected events, infants have been credited with 'theory of mind' (ToM) knowledge that a person's search behaviour for an object will be guided by true or false beliefs about the object's location. However, little is known about the preconditions for looking patterns consistent with belief attribution in infants. In this study, we compared the anticipatory looking of 17- to 26-month-olds in ToM tasks. The infants were either hearing or were deaf infants of hearing families, and thus delayed in the communicative experience gained from access to language and conversational input. Hearing infants significantly outperformed their deaf counterparts in anticipating the search actions of a cartoon character that held a false belief about a target object's location. By contrast, the performance of the two groups in a true-belief condition did not differ significantly. These findings suggest for the first time that access to language and conversational input contributes to early ToM reasoning.

10.
11.
Children from populations lacking verbal proficiency were given an interference list of paired associates (high within-list stimulus similarity) or 1 of 2 noninterference lists (low stimulus similarity), under a standard control condition (pictorial items side by side) or an imagery condition (items depicted as interacting). In Experiment I, with deaf children 6–10 years old, the imagery condition facilitated performance on the interference list, mainly by reducing generalization errors. There was significant interference only in the control condition. Unexpectedly, imagery failed to improve performance on the noninterference lists. In Experiment II, with hearing children 4–5 years old, there was significant interference in both the imagery and the control condition. Imagery significantly facilitated performance in all lists, but did not reduce interference, apparently because it did not reduce generalization errors. Thus, imagery (a) facilitates performance by increasing the memorability of stimulus-response associations, and (b) reduces interference by reducing confusion among similar stimuli.

12.
13.
In Experiment 1, neither hearing nor prelingually deaf signing adolescents showed marked lateralization for lexical decision, but, unlike the hearing, the deaf were not impaired by the introduction of pseudohomophones. In Experiment 2, semantic categorization produced a left hemisphere advantage in the hearing for words but not for pictures, whereas in the deaf, words and signs, but not pictures, showed a right hemisphere advantage. In Experiment 3, the lexical decision and semantic categorization findings were confirmed, and both groups showed a right hemisphere advantage for a face/nonface decision task. The possible effect of initial language acquisition on the development of hemispheric lateralization for language is discussed.

14.
Parafoveal attention in congenitally deaf and hearing young adults
This reaction-time study compared the performance of 20 congenitally and profoundly deaf, and 20 hearing college students on a parafoveal stimulus detection task in which centrally presented prior cues varied in their informativeness about stimulus location. In one condition, subjects detected a parafoveally presented circle with no other information being present in the visual field. In another condition, spatially complex and task-irrelevant foveal information was present which the subjects were instructed to ignore. The results showed that although both deaf and hearing people utilized cues to direct attention to specific locations and had difficulty in ignoring foveal information, deaf people were more proficient in redirecting attention from one spatial location to another in the presence of irrelevant foveal information. These results suggest that differences exist in the development of attentional mechanisms in deaf and hearing people. Both groups showed an overall right visual-field advantage in stimulus detection which was attenuated when the irrelevant foveal information was present. These results suggest a left-hemisphere superiority for detection of parafoveally presented stimuli independent of cue informativeness for both groups.
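For reference, cue use in spatial-cueing tasks of this kind is conventionally quantified as a latency difference (a standard measure, not necessarily the exact analysis reported in the study above):

\[ \text{cueing effect} = RT_{\text{invalid or uninformative cue}} - RT_{\text{valid informative cue}} \]

with a positive value taken as evidence that subjects shifted attention to the cued location before the stimulus appeared.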

15.
16.
17.
The semantic-priming paradigm was used to investigate the effect of semantic context on the latency to identify visually ambiguous and unambiguous targets. Ambiguous targets had reversible figure-ground organizations analogous to Rubin's vase-profiles picture, in which the figural vase is shaped by the same contour that, if reassigned, shapes two figural profiles. The two organizations of the ambiguous targets were either a series of black irregular shapes or a familiar word in white letters; subjects were required to achieve the latter organization. Unambiguous target words were not reversible. Related primes, unlike unrelated primes, facilitated the identification of both types of target, but the magnitude of facilitation was greater for ambiguous targets. The results demonstrate that semantic context influences the speed of figure-ground organization and add perceptual ambiguity to the target manipulations that interact with context to affect encoding.
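For reference, the facilitation reported above is conventionally computed as a latency difference (a standard definition, not a formula quoted from the article):

\[ \text{facilitation} = RT_{\text{unrelated prime}} - RT_{\text{related prime}} \]

so the central result is that this difference was larger for targets whose figure-ground organization had to be resolved than for unambiguous targets.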

18.
The present study examined the nature of reading skills in congenitally deaf and hearing children 7–19 years of age. Deaf children were drawn from oralist and total communication programs. A visual detection task was designed to assess the extent of phonological coding and chunking used in reading a story of varying degrees of syntactic, semantic, and orthographic complexity. The results provide evidence that (1) like hearing children, deaf children tend to use orthographic regularities in their reading; (2) there is no relation in the deaf child's performance between sensitivity to orthographic regularities and the type of communication method used in training; and (3) hearing and deaf readers use qualitatively similar psycholinguistic strategies in their processing of a story.

19.
In three experiments, deaf children in the age range of 6 years, 10 months to 15 years, 5 months were presented with continuous lists of items, and for each item they had to indicate whether it had appeared before on the list. Later items were related to preceding items either in surface form or in meaning or were unrelated. False-recognition errors (i.e., “yes” responses to new items) served as an index of memorial coding. In one experiment, the items presented to the subjects were printed words. The results of this experiment showed a false-recognition effect (i.e., more errors to related words than to unrelated words) for both semantically related words and orthographically similar words. In the other two experiments, the subjects viewed a series of manual signs on videotape. In these experiments, there was a false-recognition effect for signs related semantically and for signs related cherologically (i.e., similar in terms of their manual production). These results establish orthography and cherology as effective memorial codes for deaf children. The finding of a consistently strong semantic effect for young deaf children stands in contrast to findings of weak semantic effects in false-recognition studies with young hearing children. The ascendancy of semantic codes for deaf children was attributed to the absence of competition from the speech code which dominates the linguistic memory of hearing children.
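The false-recognition effect used as the index above is, in the usual formulation, an error-rate difference (standard usage, not a formula quoted from the article):

\[ \text{FR effect} = P(\text{``yes''} \mid \text{new, related item}) - P(\text{``yes''} \mid \text{new, unrelated item}) \]

a reliably positive difference for a given relation (semantic, orthographic, or cherological) is taken as evidence that the corresponding code was active in memory.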

20.
It was hypothesized that both semantic processing and organizational activity are necessary for optimal free recall performance. In a series of three experiments, subjects were presented with a list of randomly selected nouns and were asked to make up a meaningful sentence for each noun. The subjects also rated the difficulty of using each noun. The subjects were instructed to try to remember words that were labeled "remember" words. For words that were labeled "story" words, the subjects were instructed only to make each sentence, using the word, part of an ongoing story which each subject was to make up. A test of retention for all presented words, using retention intervals of both 1 min and 24 h, showed that the story words were always recalled better than were the remember words. However, the amount of sequential organization was the same for both the story and the remember words. Recognition performance was found to be the same for both types of words. In addition, the story words were rated as being more difficult than the remember words. It was concluded that extensive semantic processing without organization is not sufficient for optimal recall.
