Similar literature
1.
In this paper we examine the evidence for human brain areas dedicated to visual or auditory word form processing by comparing cortical activation for auditory word repetition, reading, picture naming, and environmental sound naming. Both reading and auditory word repetition activated left-lateralised regions in the frontal operculum (Broca's area), posterior superior temporal gyrus (Wernicke's area), posterior inferior temporal cortex, and a region in the mid superior temporal sulcus relative to baseline conditions that controlled for sensory input and motor output processing. In addition, auditory word repetition increased activation in a lateral region of the left mid superior temporal gyrus, but critically, this area is not specific to auditory word processing: it is also activated in response to environmental sounds. There were no reading-specific activations, even in the areas previously claimed as visual word form areas: activations were either common to reading and auditory word repetition or common to reading and picture naming. We conclude that there is no current evidence for cortical sites dedicated to visual or auditory word form processing.

2.
When the senses deliver conflicting information, vision dominates spatial processing, and audition dominates temporal processing. We asked whether this sensory specialization results in cross-modal encoding of unisensory input into the task-appropriate modality. Specifically, we investigated whether visually portrayed temporal structure receives automatic, obligatory encoding in the auditory domain. In three experiments, observers judged whether the changes in two successive visual sequences followed the same or different rhythms. We assessed temporal representations by measuring the extent to which both task-irrelevant auditory information and task-irrelevant visual information interfered with rhythm discrimination. Incongruent auditory information significantly disrupted task performance, particularly when presented during encoding; by contrast, varying the nature of the rhythm-depicting visual changes had minimal impact on performance. Evidently, the perceptual system automatically and obligatorily abstracts temporal structure from its visual form and represents this structure using an auditory code, resulting in the experience of "hearing visual rhythms."

3.
Congruent information conveyed over different sensory modalities often facilitates a variety of cognitive processes, including speech perception (Sumby & Pollack, 1954). Since auditory processing is substantially faster than visual processing, auditory-visual integration can occur over a surprisingly wide temporal window (Stein, 1998). We investigated the processing architecture mediating the integration of acoustic digit names with corresponding symbolic visual forms. The digits "1" or "2" were presented in auditory, visual, or bimodal format at several stimulus onset asynchronies (SOAs; 0, 75, 150, and 225 msec). The reaction times (RTs) for echoing unimodal auditory stimuli were approximately 100 msec faster than the RTs for naming their visual forms. Correspondingly, bimodal facilitation violated race model predictions, but only at SOA values greater than 75 msec. These results indicate that the acoustic and visual information are pooled prior to verbal response programming. However, full expression of this bimodal summation is dependent on the central coincidence of the visual and auditory inputs. These results are considered in the context of studies demonstrating multimodal activation of regions involved in speech production.
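The race-model test referred to above follows Miller's race-model inequality, which bounds the bimodal (redundant-target) RT distribution by the sum of the two unimodal distributions; facilitation beyond that bound implies pooling of the auditory and visual inputs. The sketch below is a minimal illustration of such a test on simulated reaction times; the RT values and time grid are hypothetical, not data from the study.

```python
import numpy as np

def ecdf(rts, t_grid):
    """Empirical cumulative RT distribution evaluated on a grid of times (ms)."""
    rts = np.sort(np.asarray(rts, dtype=float))
    return np.searchsorted(rts, t_grid, side="right") / rts.size

# Hypothetical reaction-time samples (ms) for one SOA condition.
rng = np.random.default_rng(0)
rt_auditory = rng.normal(450, 60, 200)
rt_visual = rng.normal(550, 70, 200)
rt_bimodal = rng.normal(420, 55, 200)

t_grid = np.arange(200, 900, 10)
f_a, f_v, f_av = (ecdf(x, t_grid) for x in (rt_auditory, rt_visual, rt_bimodal))

# Miller's inequality: F_AV(t) <= F_A(t) + F_V(t) for all t. Any time point
# where the bimodal CDF exceeds this bound indicates facilitation beyond what
# two independent racing channels can produce.
bound = np.minimum(f_a + f_v, 1.0)
violation = f_av - bound
print("maximum violation:", violation.max())
print("violated at t (ms):", t_grid[violation > 0])
```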

4.
The present study investigated modality-specific differences in processing of temporal information in the subsecond range. For this purpose, participants performed auditory and visual versions of a rhythm perception and three different duration discrimination tasks to allow for a direct, systematic comparison across both sensory modalities. Our findings clearly indicate higher temporal sensitivity in the auditory than in the visual domain irrespective of type of timing task. To further evaluate whether there is evidence for a common modality-independent timing mechanism or for multiple modality-specific mechanisms, we used structural equation modeling to test three different theoretical models. Neither a single modality-independent timing mechanism nor two independent modality-specific timing mechanisms fitted the empirical data. Rather, the data are well described by a hierarchical model with modality-specific visual and auditory temporal processing at a first level and a modality-independent processing system at a second level of the hierarchy.
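The hierarchical model described here could be specified in any standard SEM package. The sketch below is a minimal, illustrative specification using the Python semopy package (an assumption; the study does not name its software), with hypothetical column names for the rhythm-perception and three duration-discrimination scores in each modality.

```python
import pandas as pd
import semopy  # assumed SEM package; the study does not specify its software

# Hypothetical data: one column per timing-task score, one row per participant.
df = pd.read_csv("timing_scores.csv")

# Hierarchical model: modality-specific factors at the first level,
# a modality-independent timing factor at the second level.
hierarchical = """
AuditoryTiming =~ aud_rhythm + aud_dd1 + aud_dd2 + aud_dd3
VisualTiming   =~ vis_rhythm + vis_dd1 + vis_dd2 + vis_dd3
GeneralTiming  =~ AuditoryTiming + VisualTiming
"""

model = semopy.Model(hierarchical)
model.fit(df)
print(semopy.calc_stats(model))  # fit indices (chi-square, CFI, RMSEA, ...)
print(model.inspect())           # parameter estimates
```

Competing single-factor and two-independent-factor models would be specified the same way and compared on their fit indices.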

5.
This study examined the effect of prior preparation on audiovisual integration of emotion along both a temporal and an emotional-cognition dimension. In a temporal discrimination task (Experiment 1), visually led trials were significantly slower than auditorily led trials, and the integration effect was negative. In an emotion discrimination task (Experiment 2), the integration effect was positive; for negative emotions, integration was significantly larger under auditory leading than under visual leading, whereas for positive emotions it was significantly larger under visual leading than under auditory leading. The findings indicate that audiovisual integration of emotion rests on emotional-cognitive processing, whereas temporal discrimination suppresses integration; moreover, both the cross-modal preparation effect and the emotional preparation effect depend on the leading modality.
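A common way to quantify such an audiovisual integration effect is to compare bimodal performance with the better unimodal baseline, so that a positive value means bimodal responses were faster and a negative value means integration was suppressed. The index below is an illustrative assumption, not necessarily the exact measure used in the study; the RT values are hypothetical.

```python
import numpy as np

def integration_gain(rt_bimodal, rt_auditory, rt_visual):
    """Relative multisensory gain based on mean RTs (ms).
    Positive: bimodal responses beat the best unimodal condition.
    Negative: bimodal responses are slower, i.e. integration is suppressed."""
    best_unimodal = min(np.mean(rt_auditory), np.mean(rt_visual))
    return (best_unimodal - np.mean(rt_bimodal)) / best_unimodal

# Hypothetical mean-RT samples for one discrimination block.
rng = np.random.default_rng(1)
gain = integration_gain(rng.normal(640, 50, 40),   # audiovisual trials
                        rng.normal(600, 50, 40),   # auditory-led trials
                        rng.normal(620, 50, 40))   # visually led trials
print(f"integration gain: {gain:+.3f}")
```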

6.
Rhythms and responses
Rhythms are fundamental to behavior, but the control mechanism for timed responses is not known. Many theorists have assumed that there is a central clock coordinating behavior in all sensory modalities and response modes. We tested this hypothesis using a rhythmic tapping task in which university undergraduates first attempted to synchronize responses with brief auditory, tactile, or visual stimuli and then continued to tap at the same rate on their own. Performance was most variable with visual stimuli and least variable with auditory stimuli. The detailed results suggest that performances are not based on a common clock, but, rather, different strategies are employed when the task is presented in different modalities. We reject the hypothesis of a single timing mechanism as controlling behavior and, in doing so, question the validity of information processing models that are formulated without regard to temporal relations among their conjectured processes.
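In a synchronization-continuation tapping task of this kind, the standard dependent measure is the variability of the inter-tap intervals produced after the pacing stimuli stop. The sketch below shows that computation on hypothetical tap times; the pacing interval and number of synchronized taps are assumptions.

```python
import numpy as np

def continuation_variability(tap_times_ms, n_sync_taps=12):
    """SD and coefficient of variation of inter-tap intervals in the
    continuation phase. tap_times_ms: response onsets in milliseconds."""
    intervals = np.diff(np.asarray(tap_times_ms, dtype=float))
    continuation = intervals[n_sync_taps:]   # drop the paced (synchronized) taps
    sd = continuation.std(ddof=1)
    return sd, sd / continuation.mean()

# Hypothetical trial: taps paced at 500 ms, then continued unpaced.
rng = np.random.default_rng(2)
taps = np.cumsum(rng.normal(500, 20, 40))
sd, cv = continuation_variability(taps)
print(f"continuation SD = {sd:.1f} ms, CV = {cv:.3f}")
```

Comparing such variability measures across auditory, tactile, and visual pacing conditions is what reveals the modality differences reported above.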

7.
Adults and children (5- and 8-year-olds) performed a temporal bisection task with either auditory or visual signals and either a short (0.5-1.0 s) or long (4.0-8.0 s) duration range. Their working memory and attentional capacities were assessed by a series of neuropsychological tests administered in both the auditory and visual modalities. Results showed an age-related improvement in the ability to discriminate time regardless of the sensory modality and duration. However, this improvement was seen to occur more quickly for auditory signals than for visual signals and for short durations rather than for long durations. The younger children exhibited the poorest ability to discriminate time for long durations presented in the visual modality. Statistical analyses of the neuropsychological scores revealed that an increase in working memory and attentional capacities in the visuospatial modality was the best predictor of age-related changes in temporal bisection performance for both visual and auditory stimuli. In addition, the poorer time sensitivity for visual stimuli than for auditory stimuli, especially in the younger children, was explained by the fact that the temporal processing of visual stimuli requires more executive attention than that of auditory stimuli.
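Bisection performance of this kind is typically summarized by fitting a psychometric function to the proportion of "long" responses, then reading off the bisection point (the duration judged long on half the trials) and a Weber ratio as the index of temporal sensitivity. The sketch below does this with a logistic fit on hypothetical data for the short (0.5-1.0 s) range.

```python
import numpy as np
from scipy.optimize import curve_fit

def logistic(d, pse, slope):
    """Probability of responding 'long' to a comparison duration d (s)."""
    return 1.0 / (1.0 + np.exp(-(d - pse) / slope))

# Hypothetical proportions of 'long' responses, short duration range.
durations = np.array([0.50, 0.58, 0.67, 0.75, 0.83, 0.92, 1.00])
p_long    = np.array([0.05, 0.12, 0.30, 0.52, 0.74, 0.90, 0.97])

(pse, slope), _ = curve_fit(logistic, durations, p_long, p0=[0.75, 0.1])

# Difference limen: half the distance between the 25% and 75% points.
d25 = pse + slope * np.log(0.25 / 0.75)
d75 = pse + slope * np.log(0.75 / 0.25)
weber_ratio = (d75 - d25) / (2 * pse)
print(f"bisection point = {pse:.3f} s, Weber ratio = {weber_ratio:.3f}")
```

A larger Weber ratio corresponds to poorer temporal sensitivity, which is how the age and modality differences described above would show up.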

8.
Repeating temporal patterns were presented in the auditory and visual modalities so that: (a) all elements were of equal intensity and were equally spaced in time (uniform presentation); (b) the intensity of one element was increased (accent presentation); or (c) the interval between two elements was increased (pause presentation). Intensity and interval patterning serve to segment the element sequence into repeating patterns.

For uniform presentation, pattern organization was by pattern structure, with auditory identification being faster. For pause presentation, organization was by the pauses; both auditory and visual identification were twice as fast as for uniform presentation. For auditory accent presentation, organization was by pattern structure and identification was slower than for uniform presentation. In contrast, the organization of visual accent presentation was by accents and identification was faster than for uniform presentation. These results suggest that complex stimuli, in which elements are patterned along more than one sensory dimension, are perceptually unique and therefore their identification rests on the nature of each modality.

9.
Objects are central in visual, auditory, and tactual perception. But what counts as a perceptual object? I address this question via a structural unity schema, which specifies how a collection of parts must be arranged to compose an object for perception. On the theory I propose, perceptual objects are composed of parts that participate in causally sustained regularities. I argue that this theory falls out of a compelling account of the function of object perception, and illustrate its applications to multisensory perception. I also argue that the account avoids problems faced by standard views of visual and auditory objects.

10.
In older adults, difficulties processing complex auditory scenes, such as speech comprehension in noisy environments, might be due to a specific impairment of temporal processing at early, automatic processing stages involving auditory sensory memory (ASM). Even though age effects on auditory temporal processing have been well-documented, there is a paucity of research on how ASM processing of more complex tone-patterns is altered by age. In the current study, age effects on ASM processing of temporal and frequency aspects of two-tone patterns were investigated using a passive listening protocol. The P1 component, the mismatch negativity (MMN) and the P3a component of event-related brain potentials (ERPs) to tone frequency and temporal pattern deviants were recorded in younger and older adults as a measure of auditory event detection, ASM processing, and attention switching, respectively. MMN was elicited with smaller amplitude to both frequency and temporal deviants in older adults. Furthermore, P3a was elicited only in the younger adults. In conclusion, the smaller MMN amplitude indicates that automatic processing of both frequency and temporal aspects of two-tone patterns is impaired in older adults. The failure to initiate an attention switch, suggested by the absence of P3a, indicates that impaired ASM processing of patterns may lead to less distractibility in older adults. Our results suggest age-related changes in ASM processing of patterns that cannot be explained by an inhibitory deficit.
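MMN amplitude is conventionally measured from the deviant-minus-standard difference wave, averaged over a post-stimulus window around its peak. The sketch below illustrates that computation on hypothetical epoched EEG from one fronto-central channel; the sampling rate and analysis window are assumptions, not the study's parameters.

```python
import numpy as np

FS = 500                          # sampling rate (Hz), assumed
t = np.arange(-0.1, 0.5, 1 / FS)  # epoch time axis (s), assumed

def mmn_amplitude(standard_epochs, deviant_epochs, window=(0.10, 0.25)):
    """Mean amplitude (microvolts) of the deviant-minus-standard difference wave.
    Epochs: arrays of shape (n_trials, n_samples) from one channel."""
    difference = deviant_epochs.mean(axis=0) - standard_epochs.mean(axis=0)
    mask = (t >= window[0]) & (t <= window[1])
    return difference[mask].mean()   # more negative = larger MMN

# Hypothetical epochs standing in for the recorded data.
rng = np.random.default_rng(3)
std_ep = rng.normal(0.0, 2.0, (400, t.size))
dev_ep = rng.normal(-0.5, 2.0, (80, t.size))
print(f"MMN mean amplitude: {mmn_amplitude(std_ep, dev_ep):.2f} microvolts")
```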

11.
Encoding sensory events entails processing of several physical attributes. Is the processing of any of these attributes a pre-requisite of conscious awareness? This selective review examines a recent set of behavioral and event-related potential studies conducted in patients with visual and auditory unilateral neglect or extinction, with the aim of establishing which aspects of initial processing are impaired in these patients. These studies suggest that extinguished visual stimuli excite the sensory cortices, but perhaps to a lesser degree than acknowledged stimuli do. However, encoding of the spatial attributes of auditory and visual stimuli appears to be preferentially impaired. In light of results from patients with other neuro-behavioral deficits, it is argued that egocentric spatial information is an essential pre-requisite for knowing that an external event occurred. In contrast, information handled by mostly domain-specific circuits, such as in the ventral temporal lobe, supports awareness of the identity of a stimulus, but not of its mere presence. Without spatial information, the stimulus identity will remain implicit.

12.
The visual system has been proposed to be divided into two processing streams, the ventral and the dorsal. The ventral pathway is thought to be involved in object identification, whereas the dorsal pathway processes information regarding the spatial locations of objects and the spatial relationships among objects. Several studies on working memory (WM) processing have further suggested that there is a dissociable, domain-dependent functional organization within the prefrontal cortex for processing of spatial and nonspatial visual information. The auditory system has also been proposed to be organized into two domain-specific processing streams, similar to those seen in the visual system. Recent studies on auditory WM have further suggested that maintenance of nonspatial and spatial auditory information activates a distributed neural network including temporal, parietal, and frontal regions, but the magnitude of activation within these areas shows a different functional topography depending on the type of information being maintained. The dorsal prefrontal cortex, specifically an area of the superior frontal sulcus (SFS), has been shown to exhibit greater activity for spatial than for nonspatial auditory tasks. Conversely, ventral frontal regions have been shown to be more recruited by nonspatial than by spatial auditory tasks. It has also been shown that the magnitude of this dissociation is dependent on the cognitive operations required during WM processing. Moreover, there is evidence that within the nonspatial domain in the ventral prefrontal cortex, there is an across-modality dissociation during maintenance of visual and auditory information. Taken together, human neuroimaging results on both visual and auditory sensory systems support the idea that the prefrontal cortex is organized according to the type of information being maintained in WM.

13.
Implicit statistical learning (ISL) is exclusive to neither a particular sensory modality nor a single domain of processing. Even so, differences in perceptual processing may substantially affect learning across modalities. In three experiments, statistically equivalent auditory and visual familiarizations were presented under different timing conditions that either facilitated or disrupted temporal processing (fast or slow presentation rates). We find an interaction of rate and modality of presentation: At fast rates, auditory ISL was superior to visual. However, at slow presentation rates, the opposite pattern of results was found: Visual ISL was superior to auditory. Thus, we find that changes to presentation rate differentially affect ISL across sensory modalities. Additional experiments confirmed that this modality-specific effect was not due to cross-modal interference or attentional manipulations. These findings suggest that ISL is rooted in modality-specific, perceptually based processes.

14.
The authors examined potential mechanisms underlying motor coordination in children with developmental coordination disorder (DCD). Because children with DCD experience difficulty processing visual, auditory, and vibrotactile information, the authors explored patterns of choice reaction time (RT) in young (6-7 years) and older (9-10 years) children with and without DCD by using a compatibility-incompatibility paradigm and different sensory modalities. Young children responded more slowly than older children to visual, auditory, and vibrotactile stimuli. Children with DCD took longer than typical children to process visual and vibrotactile stimuli under more complex stimulus-response mappings. Young children with DCD responded more slowly than typical children to visual and vibrotactile information under incompatible conditions. Children with DCD responded faster than unaffected children to auditory stimuli. The results suggest that there is a developmental nature in the processing of visual and auditory input and imply that the vibrotactile sensory modality may be key to the motor coordination difficulties of children with DCD.

15.
Temporal processing ability in above average and average readers
In the present study, we compared the rapid visual and auditory temporal processing ability of above average and average readers. One hundred five undergraduates participated in various visual and auditory temporal tasks. The above average readers exhibited lower auditory and visual temporal resolution thresholds than did the average readers, but only the differences in the auditory tasks were statistically significant, especially when nonverbal IQ was controlled for. Furthermore, both the correlation and stepwise multiple regression analyses revealed a relationship between the auditory measures and the Wide Range Achievement Test (WRAT) reading measure and a relationship between the auditory measures and a low spatial frequency visual measure and the WRAT spelling measure. Discriminant analysis showed that together both the visual and auditory measures correctly classified 75% of the subjects into above average and average reading groups, respectively. The results suggest that differences in temporal processing ability in relation to differences in reading proficiency are not confined to the comparison between poor and normal readers.
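The discriminant analysis reported above (75% correct classification into above-average and average reading groups) can be illustrated with a standard linear discriminant classifier. The sketch below uses scikit-learn; the file and predictor column names are hypothetical.

```python
import pandas as pd
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

# Hypothetical data: temporal-resolution measures plus a reading-group label.
df = pd.read_csv("reader_measures.csv")
predictors = ["aud_gap_threshold", "aud_temporal_order", "vis_low_sf", "vis_high_sf"]
X = df[predictors].to_numpy()
y = df["reading_group"].to_numpy()          # "above_average" vs. "average"

lda = LinearDiscriminantAnalysis()
scores = cross_val_score(lda, X, y, cv=5)   # cross-validated classification rate
print(f"mean classification accuracy: {scores.mean():.2%}")
```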

16.
Neuroanatomical evidence suggests that poor readers may have abnormal lateral (LGN) and medial (MGN) geniculate nuclei responsible for temporal processing in visual and auditory domains respectively (Livingstone & Galaburda, 1993). Although behavioral evidence does support this neuroanatomical evidence in that poor readers have performed poorly on visual and auditory tasks thought to require the utilization of the LGN and MGN, respectively, appropriate examination of the coexistence of these behavioral abnormalities in the same population of poor readers has yet to take place. The present study examined correlations between visual and auditory temporal processing scores of all readers (collapsed groups), good readers, and poor readers who were isolated into phonological and surface dyslexic subtypes. The same subjects and data from Cestnick and Coltheart (1999) and Cestnick and Jerger (2000) were used to run the analyses. Results demonstrated a multitude of correlations between these tasks for the phonological dyslexic group only. It is contended that cross-modality temporal processing deficits may exist in poor nonlexical (phonological dyslexics) as opposed to poor lexical (surface dyslexics) readers. It is conceivable that phonological dyslexics may also have deficiencies within the LGN and MGN, or perhaps within systems related to these nuclei. The precise cause of these processing patterns and correlations is still unknown.

17.
A. Au & B. Lovegrove, Perception, 2001, 30(9), 1127-1142
In the present study, the role of rapid visual and auditory temporal processing in reading irregular and nonsense words was investigated with a group of normal readers. One hundred and five undergraduates participated in various visual and auditory temporal-processing tasks. Readers who primarily adopted the phonological route in reading (nonsense-word readers) showed a trend for better auditory temporal resolution, but readers who primarily adopted sight word skills (irregular-word readers) did not exhibit better visual temporal resolution. Both the correlation and stepwise multiple-regression analyses, however, revealed a relationship between visual temporal processing and irregular-word reading as well as a relationship between auditory temporal processing and nonsense-word reading. The results support the involvement of visual and auditory processing in reading irregular and nonsense words respectively, and are discussed with respect to recent findings that only dyslexics with phonological impairment will display temporal deficits. Further, the temporal measures were not effective discriminants for the reading groups, suggesting a lack of association between reading ability and the choice of reading strategy.
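The correlation and regression analyses referred to above relate the temporal measures to the reading scores; a minimal (non-stepwise) multiple-regression sketch with statsmodels is given below, using hypothetical file and column names.

```python
import pandas as pd
import statsmodels.api as sm

# Hypothetical data: temporal measures predicting irregular-word reading score.
df = pd.read_csv("reading_temporal.csv")
X = sm.add_constant(df[["vis_temporal", "aud_temporal", "nonverbal_iq"]])
ols = sm.OLS(df["irregular_word_score"], X).fit()
print(ols.summary())   # coefficients, R-squared, p-values
```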

18.
Properties of auditory and visual sensory memory were compared by examining subjects' recognition performance of randomly generated binary auditory sequential frequency patterns and binary visual sequential color patterns within a forced-choice paradigm. Experiment 1 demonstrated serial-position effects in auditory and visual modalities consisting of both primacy and recency effects. Experiment 2 found that retention of auditory and visual information was remarkably similar when assessed across a 10 s interval. Experiments 3 and 4, taken together, showed that the recency effect in sensory memory is affected more by the type of response required (recognition vs. reproduction) than by the sensory modality employed. These studies suggest that auditory and visual sensory memory stores for nonverbal stimuli share similar properties with respect to serial-position effects and persistence over time.

19.
It has been proposed that temporal perception and performance depend on a biological source of temporal information. A model for a temporal oscillator put forward by Treisman, Faulkner, Naish, and Brogan (1990) predicted that if intense sensory pulses (such as auditory clicks) were presented to subjects at suitable rates, they would perturb the frequency at which the temporal oscillator runs and so cause over- or underestimation of time. The resulting pattern of interference between sensory pulse rates and time judgments would depend on the frequency of the temporal oscillator and so might allow that frequency to be estimated. Such interference patterns were found using auditory clicks and visual flicker (Treisman & Brogan, 1992; Treisman et al., 1990). The present study examines time estimation together with the simultaneously recorded electroencephalogram to determine whether evidence of such an interference pattern can be found in the EEG.

Alternative models for the organization of a temporal system consisting of an oscillator or multiple oscillators are considered, and predictions relating to the EEG are derived from them. An experiment was run in which time intervals were presented for estimation, with auditory clicks given during those intervals, and the EEG was recorded concurrently. Analyses of the EEG revealed interactions between auditory click rates and certain EEG components that parallel the interference patterns previously found. The overall pattern of EEG results is interpreted as favouring a model for the organization of the temporal system in which sets of click-sensitive oscillators spaced at intervals of about 12.8 Hz contribute to the EEG spectrum. These are taken to represent a series of harmonically spaced distributions of oscillators involved in time-keeping.
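One way to look for click-sensitive oscillator components spaced at roughly 12.8-Hz intervals is to inspect the EEG power spectrum recorded during the estimation intervals for peaks near multiples of that base frequency. The sketch below computes a Welch spectrum on stand-in data; the sampling rate, recording length, and band width are assumptions.

```python
import numpy as np
from scipy.signal import welch

FS = 256                                   # EEG sampling rate (Hz), assumed
rng = np.random.default_rng(4)
eeg = rng.normal(0.0, 1.0, FS * 60)        # stand-in for one minute of EEG

freqs, power = welch(eeg, fs=FS, nperseg=4 * FS)   # ~0.25 Hz resolution

# Mean spectral power near multiples of the putative ~12.8 Hz oscillator spacing.
base = 12.8
for k in range(1, 5):
    target = k * base
    band = (freqs >= target - 1.0) & (freqs <= target + 1.0)
    print(f"{target:5.1f} Hz  mean power: {power[band].mean():.4f}")
```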
