Similar references
20 similar references found (search time: 31 ms)
1.
This study examines the transcoding of word structures in the spoken linguistic sign system into the corresponding patterns of the written linguistic sign system and vice versa. In a group of aphasic patients, including some with impairments of auditory word comprehension, of visual word comprehension, or of both, the frequency of successful transcoding was extremely high, approximately 85%. This success is due to transcoding processes at the phoneme-grapheme level rather than at the semantic level of language. A small group of patients who failed the auditory-visual matching tests provides evidence of a separate transcoding deficit.

2.
A patient with alexia and agraphia had intact spelling and comprehension of spelled words and used a letter-naming strategy to read and write. We propose that there is a graphemic area important for distinguishing graphemic features and for programming movements used in writing. In this patient this area was not functioning or did not have access to the area of visual word images. Therefore, he used an ideographic letter-naming strategy to verbally circumvent his disability and gain access to the area of visual word images.

3.
When a disorder of single word writing is seen in conjunction with preserved oral spelling and intact written grapheme formation ("written spelling agraphia"), the deficit may be presumed to lie somewhere between letter choice and written motor output. Two potential deficits are considered: (1) visual letter codes are not activated properly and (2) the information contained within properly activated visual letter codes fails to reach intact graphic motor patterns. Two patients with written spelling agraphia were each given a series of tests aimed at distinguishing between the two possibilities. Results suggest that the written spelling agraphia seen in these two patients arises from different underlying deficits. The results are discussed in terms of the patients' CT scan lesion sites, which were markedly different.

4.
The goal of this experiment was to investigate the role of visual feedback during written composition. Effects of suppressing visual feedback were analyzed both on processing demands and on on-line coordination of low-level execution processes and of high-level conceptual and linguistic processes. Writers composed a text and copied it either with or without visual feedback. Processing demands of the writing processes were evaluated with reaction times to secondary auditory probes, which were analyzed according to whether participants were handwriting (in a composing and a copying task) or engaged in high-level processes (when pausing in a composing task). Suppression of visual feedback increased reaction time interference (secondary reaction time minus baseline reaction time) during handwriting in the copying task but not during pauses in the composing task. This suggests that suppression of visual feedback affected only the processing demands of execution processes, not those of high-level conceptual and linguistic processes. This is confirmed by analysis of the quality of the texts produced by participants, which was little, if at all, affected by the suppression of visual feedback. Results also indicate that the increase in the processing demands of execution related to suppression of visual feedback affected on-line coordination of the writing processes. Indeed, when visual feedback was suppressed, reaction time interferences associated with handwriting did not differ reliably between the copying and composing tasks, but they did differ significantly when visual feedback was present: they were lower in the copying task than in the composing task. When visual feedback was suppressed, writers activated execution processes and high-level writing processes step by step, whereas they activated these writing processes concurrently when composing with visual feedback.
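The interference measure used in this study is a simple difference of means: secondary reaction time minus baseline reaction time. A minimal sketch, with made-up RT values rather than the authors' data, makes the comparison concrete:

```python
def rt_interference(secondary_rts, baseline_rts):
    """Mean secondary-task probe RT minus mean baseline probe RT (ms)."""
    mean = lambda xs: sum(xs) / len(xs)
    return mean(secondary_rts) - mean(baseline_rts)

# Hypothetical probe RTs in milliseconds; values are illustrative only.
baseline = [350, 360, 355]            # probe presented alone
copy_no_feedback = [520, 540, 530]    # probe while copying without visual feedback
copy_feedback = [450, 470, 460]       # probe while copying with visual feedback

print(rt_interference(copy_no_feedback, baseline))  # 175.0
print(rt_interference(copy_feedback, baseline))     # 105.0
```

With these invented numbers, suppressing visual feedback raises interference during handwriting, the pattern the abstract reports for the copying task.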

5.
Alexia without agraphia, or "pure" alexia, is an acquired impairment in reading that leaves writing skills intact. Repetition priming for visually presented words is diminished in pure alexia. However, it is not possible to verify whether this priming deficit is modality-specific or modality-independent because reading abilities are compromised. Hence, auditory repetition priming was assessed with lexical decision and word stem completion tasks in pure alexic patients with lesions in left inferior temporal-occipital cortex and the splenium. Perceptually based, modality-specific priming models predict intact auditory priming, since auditory association cortex is spared in the patients. Alternatively, modality-independent models, which suggest that priming reflects the temporary modification of an amodal system, might predict impairments. Baseline performance was matched in the patients and controls, although lexical decision priming measures showed an interaction between group and repetition lag. The patients showed intact immediate priming but significantly less priming than controls at longer delays. Furthermore, word stem completion priming was abolished in the patients. One explanation for the deficit is that left inferior temporal-occipital cortex supports amodal aspects of priming, as suggested by recent neuroimaging results. Another possibility is that long-term auditory priming relies on covert orthographic representations which were unavailable in the patients. The results provide support for interactive models of word identification.

6.
A 52-year-old man with atypical cerebral dominance (left-handed for writing but mixed handedness for other tasks) suffered an extensive right hemisphere stroke, resulting in a combination of deficits that has not been previously reported. There were profound visual constructive and visual perceptual disturbances and a spatial agraphia, which were consistent with a nondominant hemisphere lesion. There was also a severe apraxic agraphia, which is typically associated with a dominant hemisphere lesion, but no other signs of dominant hemisphere dysfunction such as linguistic disturbance or limb-motor apraxia were present. This case serves to highlight the functional and anatomical relationship between handwriting and other forms of praxis; the various sources of error in letter formation; the need to be specific in labeling and describing agraphias; and the role of a detailed analysis of writing errors in delineating the neuropsychological processes involved in handwriting.

7.
Inconsistency in the spelling-to-sound mapping hurts visual word perception and reading aloud (i.e., the traditional consistency effect). In the present experiment, we found a consistency effect in auditory word perception: Words with phonological rimes that could be spelled in multiple ways produced longer auditory lexical decision latencies and more errors than did words with rimes that could be spelled only one way. This finding adds strong support to the claim that orthography affects the perception of spoken words. This effect was predicted by a model that assumes a coupling between orthography and phonology that is functional in both visual and auditory word perception.

8.
In this study, we examined whether good auditory and good visual temporal processors were better than their poor counterparts on certain reading measures. Various visual and auditory temporal tasks were administered to 105 undergraduates. They read some phonologically regular pseudowords and irregular words that were presented sequentially in the same ("word" condition) and in different ("line" condition) locations. Results indicated that auditory temporal acuity was more relevant to reading, whereas visual temporal acuity was more relevant to spelling. Good auditory temporal processors did not have the advantage in processing pseudowords, even though pseudoword reading correlated significantly with auditory temporal processing. These results suggested that some higher cognitive or phonological processes mediated the relationship between auditory temporal processing and pseudoword reading. Good visual temporal processors did not have the advantage in processing irregular words. They also did not process the line condition more accurately than the word condition. The discrepancy might be attributed to the use of normal adults and the unnatural reading situation that did not fully capture the function of the visual temporal processes. The distributions of auditory and visual temporal processing abilities were co-occurring to some degree, but they maintained considerable independence. There was also a lack of a relationship between the type and severity of reading deficits and the type and number of temporal deficits.

9.
Evidence that audition dominates vision in temporal processing has come from perceptual judgment tasks. This study shows that this auditory dominance extends to the largely subconscious processes involved in sensorimotor coordination. Participants tapped their finger in synchrony with auditory and visual sequences containing an event onset shift (EOS), expected to elicit an involuntary phase correction response (PCR), and also tried to detect the EOS. Sequences were presented in unimodal and bimodal conditions, including one in which auditory and visual EOSs of opposite sign coincided. Unimodal results showed greater variability of taps, smaller PCRs, and poorer EOS detection in vision than in audition. In bimodal conditions, variability of taps was similar to that for unimodal auditory sequences, and PCRs depended more on auditory than on visual information, even though attention was always focused on the visual sequences.

10.
The McGurk effect, where an incongruent visual syllable influences identification of an auditory syllable, does not always occur, suggesting that perceivers sometimes fail to use relevant visual phonetic information. We tested whether another visual phonetic effect, which involves the influence of visual speaking rate on perceived voicing (Green & Miller, 1985), would occur in instances when the McGurk effect does not. In Experiment 1, we established this visual rate effect using auditory and visual stimuli matching in place of articulation, finding a shift in the voicing boundary along an auditory voice-onset-time continuum with fast versus slow visual speech tokens. In Experiment 2, we used auditory and visual stimuli differing in place of articulation and found a shift in the voicing boundary due to visual rate when the McGurk effect occurred and, more critically, when it did not. The latter finding indicates that phonetically relevant visual information is used in speech perception even when the McGurk effect does not occur, suggesting that the incidence of the McGurk effect underestimates the extent of audio-visual integration.

11.
Previous studies have shown that object properties are processed faster when they follow properties from the same perceptual modality than properties from different modalities. These findings suggest that language activates sensorimotor processes, which, according to those studies, can only be explained by a modal account of cognition. The current paper shows how a statistical linguistic approach based on word co-occurrences can also reliably predict the category of perceptual modality a word belongs to (auditory, olfactory-gustatory, visual-haptic), even though the statistical linguistic approach is less precise than the modal approach (auditory, gustatory, haptic, olfactory, visual). Moreover, the statistical linguistic approach is compared with the modal embodied approach in an experiment in which participants verify properties that share or shift modalities. Response times suggest that fast responses can best be explained by the linguistic account, whereas slower responses can best be explained by the embodied account. These results provide further evidence for the theory that conceptual processing is both linguistic and embodied, whereby less precise linguistic processes precede precise simulation processes.

12.
Whereas the visual modality tends to dominate over the auditory modality in bimodal spatial perception, the auditory modality tends to dominate over the visual modality in bimodal temporal perception. Recent results suggest that the visual modality dominates bimodal spatial perception because spatial discriminability is typically greater for the visual than for the auditory modality; accordingly, visual dominance is eliminated or reversed when visual-spatial discriminability is reduced by degrading visual stimuli to be equivalent or inferior to auditory spatial discriminability. Thus, for spatial perception, the modality that provides greater discriminability dominates. Here, we ask whether auditory dominance in duration perception is similarly explained by factors that influence the relative quality of auditory and visual signals. In contrast to the spatial results, the auditory modality dominated over the visual modality in bimodal duration perception even when the auditory signal was clearly weaker, when the auditory signal was ignored (i.e., the visual signal was selectively attended), and when the temporal discriminability was equivalent for the auditory and visual signals. Thus, unlike spatial perception, where the modality carrying more discriminable signals dominates, duration perception seems to be mandatorily linked to auditory processing under most circumstances.

13.
Kim J, Davis C, Krins P. Cognition, 2004, 93(1), B39-B47
This study investigated the linguistic processing of visual speech (video of a talker's utterance without audio) by determining whether it can prime subsequently presented word and nonword targets. The priming procedure is well suited to investigating whether speech perception is amodal, since visual speech primes can be used with targets presented in different modalities. To this end, a series of priming experiments was conducted using several tasks. It was found that visually spoken words (for which overt identification was poor) acted as reliable primes for repeated target words in the naming task and in the written and auditory lexical decision tasks. These visual speech primes did not produce associative or reliable form priming. The lack of form priming suggests that the repetition priming effect was constrained by lexical-level processes. That priming was found in all tasks is consistent with the view that similar processes operate in both visual and auditory speech processing.

14.
Investigations of neurodegenerative disorders may reveal functional relationships in the cognitive system. C.S. was a 63-year-old right-handed man with post-mortem confirmed Pick's disease and a range of progressive impairments, including non-fluent aphasia and speech, limb, oculomotor, and buccofacial apraxia, but mostly intact intelligence, perception, orientation, memory, semantics, and phonology. As the disease progressed, agrammatism in writing with impairments in syntactic comprehension emerged in parallel with an unusual graphomotor deficit in drawing and writing and an increasing deterioration of graphic short-term memory. We investigated C.S.'s graphomotor deficit longitudinally using tests of writing and drawing letters, words, and sentences, and of drawing to command and copying. We also tested C.S.'s graphemic short-term buffer experimentally. Analysis showed deficits in selective aspects of the graphomotor implementation of writing and drawing, mainly affecting the production of circles and curves but not short straight lines, and in graphomotor short-term memory, which paralleled impairments of written syntax and syntactic comprehension. We believe this to be the first detailed analysis of such an unusual progressive impairment in graphomotor production, which may be related to agrammatic agraphia through impairments affecting shared components of cognition, reflecting damage to shared neural networks. Alternatively, the deficits may simply reflect coincidental damage to separate mechanisms responsible for aspects of writing, drawing, and syntactic processing. Longitudinal investigations of emerging deficits in progressive conditions like C.S.'s provide an opportunity to examine the progressive emergence of symptoms in an individual with multiple progressive impairments as they appear and to examine putative relationships between them.

15.
Visual information provided by a talker’s mouth movements can influence the perception of certain speech features. Thus, the “McGurk effect” shows that when the syllable /bi/ is presented audibly, in synchrony with the syllable /gi/ presented visually, a person perceives the talker as saying /di/. Moreover, studies have shown that interactions occur between place and voicing features in phonetic perception when information is presented audibly. In our first experiment, we asked whether feature interactions occur when place information is specified by a combination of auditory and visual information. Members of an auditory continuum ranging from /ibi/ to /ipi/ were paired with a video display of a talker saying /igi/. The auditory tokens were heard as ranging from /ibi/ to /ipi/, but the auditory-visual tokens were perceived as ranging from /idi/ to /iti/. The results demonstrated that the voicing boundary for the auditory-visual tokens was located at a significantly longer VOT value than the voicing boundary for the auditory continuum presented without the visual information. These results demonstrate that place-voice interactions are not limited to situations in which place information is specified audibly. In three follow-up experiments, we show that (1) the voicing boundary is not shifted in the absence of a change in the global percept, even when discrepant auditory-visual information is presented; (2) the number of response alternatives provided for the subjects does not affect the categorization or the VOT boundary of the auditory-visual stimuli; and (3) the original effect of a VOT boundary shift is not replicated when subjects are forced by instruction to “relabel” the /b-p/ auditory stimuli as /d/ or /t/. The subjects successfully relabeled the stimuli, but no shift in the VOT boundary was observed.
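The voicing boundary reported in such experiments is the VOT at which voiced and voiceless responses are equally likely; one common way to estimate it is by linearly interpolating the 50% crossover of the identification function. A minimal sketch with made-up identification proportions, not the study's data:

```python
def voicing_boundary(vots, prop_voiceless):
    """Return the VOT (ms) at which the proportion of voiceless responses
    crosses 0.5, by linear interpolation between adjacent continuum steps."""
    pairs = list(zip(vots, prop_voiceless))
    for (v0, p0), (v1, p1) in zip(pairs, pairs[1:]):
        if p0 <= 0.5 <= p1:
            return v0 + (0.5 - p0) * (v1 - v0) / (p1 - p0)
    raise ValueError("identification function never crosses 0.5")

vots = [0, 10, 20, 30, 40, 50]                     # hypothetical VOT continuum (ms)
audio_only = [0.0, 0.1, 0.25, 0.75, 1.0, 1.0]      # proportion voiceless responses
audio_visual = [0.0, 0.05, 0.125, 0.5, 0.875, 1.0]

print(voicing_boundary(vots, audio_only))    # 25.0
print(voicing_boundary(vots, audio_visual))  # 30.0
```

With these invented proportions, the auditory-visual boundary falls at a longer VOT than the auditory-only boundary, the direction of shift the abstract describes.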

16.
Copying text may seem trivial, but the task itself is psychologically complex. It involves a series of sequential visual and cognitive processes, which must be co-ordinated; these include visual encoding, mental representation and written production. To investigate the time course of word processing during copying, we recorded eye movements of adults and children as they hand-copied isolated words presented on a classroom board. Longer and lower frequency words extended adults' encoding durations, suggesting whole word encoding. Only children's short word encoding was extended by lower frequency. Though children spent more time encoding long words compared to short words, gaze durations for long words were extended similarly for high- and low-frequency words. This suggested that for long words children used partial word representations and encoded multiple sublexical units rather than single whole words. Piecemeal word representation underpinned copying longer words in children, but reliance on partial word representations was not shown in adult readers.

17.
A period of exposure to trains of simultaneous but spatially offset auditory and visual stimuli can induce a temporary shift in the perception of sound location. This phenomenon, known as the ‘ventriloquist aftereffect’, reflects a realignment of auditory and visual spatial representations such that they approach perceptual alignment despite their physical spatial discordance. Such dynamic changes to sensory representations are likely to underlie the brain’s ability to accommodate inter-sensory discordance produced by sensory errors (particularly in sound localization) and variability in sensory transduction. It is currently unknown, however, whether these plastic changes induced by adaptation to spatially disparate inputs occur automatically or whether they are dependent on selectively attending to the visual or auditory stimuli. Here, we demonstrate that robust auditory spatial aftereffects can be induced even in the presence of a competing visual stimulus. Importantly, we found that when attention is directed to the competing stimuli, the pattern of aftereffects is altered. These results indicate that attention can modulate the ventriloquist aftereffect.

18.
Fukui T, Lee E. Brain and Language, 2008, 104(3), 201-210
By investigating three patients with progressive agraphia, we explored the possibility that this entity is an early sign of degenerative dementia. Initially, these patients complained primarily of difficulties writing Kanji (Japanese morphograms) while other language and cognitive impairments were relatively milder. Impairments in writing Kana (Japanese syllabograms), verbal language, executive function, visuo- and visuospatial cognition and memory were identified by neuropsychological testing. The agraphia was compatible with a peripheral type, based on deficits at the interface between the central letter selection and the graphemic motor execution (Patient 1) or at the stage of central letter selection as well (Patients 2 and 3). Agraphia was generally more prominent, although not exclusive, for Kanji probably because of later acquisition and larger total number of Kanji letters leading to lower frequency of use and familiarity per letter. Concurrent or subsequent emergence of non-fluent aphasia, ideomotor apraxia, executive dysfunction and asymmetric akinetic-rigid syndrome in two patients suggested degenerative processes involving the parietal-occipital-temporal regions, basal ganglia and striato-frontal projections. We propose that progressive agraphia may be one of the early symptoms of degenerative dementia such as corticobasal degeneration.

19.
Phonemic restoration is a powerful auditory illusion that arises when a phoneme is removed from a word and replaced with noise, resulting in a percept that sounds like the intact word with a spurious bit of noise. It is hypothesized that the configurational properties of the word impair attention to the individual phonemes and thereby induce perceptual restoration of the missing phoneme. If so, this impairment might be unlearned if listeners can process individual phonemes within a word selectively. Subjects received training with the potentially restorable stimuli (972 trials with feedback); in addition, the presence or absence of an attentional cue, contained in a visual prime preceding each trial, was varied between groups of subjects. Cuing the identity and location of the critical phoneme of each test word allowed subjects to attend to the critical phoneme, thereby inhibiting the illusion, but only when the prime also identified the test word itself. When the prime provided only the identity or location of the critical phoneme, or only the identity of the word, subjects performed identically to those subjects for whom the prime contained no information at all about the test word. Furthermore, training did not produce any generalized learning about the types of stimuli used. A limited interactive model of auditory word perception is discussed in which attention operates through the lexical level.

20.
In this paper we examine the evidence for human brain areas dedicated to visual or auditory word form processing by comparing cortical activation for auditory word repetition, reading, picture naming, and environmental sound naming. Both reading and auditory word repetition activated left lateralised regions in the frontal operculum (Broca's area), posterior superior temporal gyrus (Wernicke's area), posterior inferior temporal cortex, and a region in the mid superior temporal sulcus relative to baseline conditions that controlled for sensory input and motor output processing. In addition, auditory word repetition increased activation in a lateral region of the left mid superior temporal gyrus; critically, however, this area is not specific to auditory word processing, as it is also activated in response to environmental sounds. There were no reading-specific activations, even in the areas previously claimed as visual word form areas: activations were either common to reading and auditory word repetition or common to reading and picture naming. We conclude that there is no current evidence for cortical sites dedicated to visual or auditory word form processing.


Copyright © Beijing Qinyun Technology Development Co., Ltd.  京ICP备09084417号