Similar Articles
20 similar articles found (search time: 203 ms)
1.
This research uses fMRI to understand the role of eight cortical regions in a relatively complex information-processing task. Modality of input (visual versus auditory) and modality of output (manual versus vocal) are manipulated. Two perceptual regions (auditory cortex and fusiform gyrus) only reflected perceptual encoding. Two motor regions were involved in information rehearsal as well as programming of overt actions. Two cortical regions (parietal and prefrontal) performed processing (retrieval and representational change) independent of input and output modality. The final two regions (anterior cingulate and caudate) were involved in control of cognition independent of modality of input or output and content of the material. An information-processing model, based on the ACT-R theory, is described that predicts the BOLD response in these regions. Different modules in the theory vary in the degree to which they are modality-specific and the degree to which they are involved in central versus peripheral cognitive processes.

2.
A quantitative meta-analysis was performed on 47 neuroimaging studies involving tasks purported to require the resolution of interference. The tasks included the Stroop, flanker, go/no-go, stimulus-response compatibility, Simon, and stop signal tasks. Peak density-based analyses of these combined tasks reveal that the anterior cingulate cortex, dorsolateral prefrontal cortex, inferior frontal gyrus, posterior parietal cortex, and anterior insula may be important sites for the detection and/or resolution of interference. Individual task analyses reveal differential patterns of activation among the tasks. We propose that distinguishing among the processing stages at which interference may be resolved may explain regional activation differences. Our analyses suggest that resolution processes acting upon stimulus encoding, response selection, and response execution may recruit different neural regions.

3.
Objectives: Distracted walking is a major cause of pedestrian road traffic injuries, but little is known about how distraction affects pedestrian safety. The study was designed to explore how visual and auditory distraction might influence pedestrian safety. Methods: Three experiments were conducted to explore causal mechanisms from two theoretical perspectives: increased cognitive load from the distraction task, and resource competition in the same sensory modality. Pedestrians' behavior patterns and cortical oxyhemoglobin changes were recorded while they performed a series of dual tasks. Results: Four primary results emerged: (a) participants responded more slowly to both visual and auditory stimuli in traffic, as well as walked more slowly, while talking on the phone or text messaging compared to when undistracted or listening to music; (b) when participants completed pedestrian response tasks while distracted with a high cognitive load, their response was significantly slower and poorer than when they carried out a lower cognitive load distraction task; (c) participants had higher levels of oxy-Hb change in cortices related to visual processing and executive function while distracted with a higher cognitive load; and (d) participants' responses to traffic lights were slower and resulted in higher activation in prefrontal cortex and occipital areas when distracted by a visual distraction task compared to when distracted with an auditory task; similarly, brain activation increased significantly in temporal areas when participants responded to an auditory car horn task compared to when they responded to visual traffic lights. Conclusions: Both distracting cognitive load demands and the type of distraction task significantly affect young adult pedestrian performance and threaten pedestrian safety. Pedestrian injury prevention efforts should consider the effects of the type of distracting task and its cognitive demands on pedestrian safety.

4.
Crossmodal selective attention was investigated in a cued task switching paradigm using bimodal visual and auditory stimulation. A cue indicated the imperative modality. Three levels of spatial S–R associations were established following perceptual (location), structural (numerical), and conceptual (verbal) set-level compatibility. In Experiment 1, participants switched attention between the auditory and visual modality either with a spatial-location or spatial-numerical stimulus set. In the spatial-location set, participants performed a localization judgment on left vs. right presented stimuli, whereas the spatial-numerical set required a magnitude judgment about a visually or auditorily presented number word. Single-modality blocks with unimodal stimuli were included as a control condition. In Experiment 2, the spatial-numerical stimulus set was replaced by a spatial-verbal stimulus set using direction words (e.g., “left”). RT data showed modality switch costs, which were asymmetric across modalities in the spatial-numerical and spatial-verbal stimulus set (i.e., larger for auditory than for visual stimuli), and congruency effects, which were asymmetric primarily in the spatial-location stimulus set (i.e., larger for auditory than for visual stimuli). This pattern of effects suggests task-dependent visual dominance.
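For readers unfamiliar with the dependent measures in this entry, switch costs and congruency effects are simple reaction-time contrasts. The sketch below illustrates how they are computed from trial-level data; the trial values and field layout are invented for illustration and are not taken from the study.

```python
# Illustrative sketch: computing modality switch costs and congruency
# effects from trial-level reaction times. All numbers are made up.
from statistics import mean

trials = [
    # (modality, transition, congruency, rt_ms) -- hypothetical trials
    ("visual",   "repeat", "congruent",   520),
    ("visual",   "switch", "congruent",   575),
    ("auditory", "repeat", "congruent",   540),
    ("auditory", "switch", "congruent",   650),
    ("visual",   "repeat", "incongruent", 560),
    ("visual",   "switch", "incongruent", 610),
    ("auditory", "repeat", "incongruent", 600),
    ("auditory", "switch", "incongruent", 720),
]

def mean_rt(modality, transition):
    return mean(rt for m, t, _, rt in trials if m == modality and t == transition)

def switch_cost(modality):
    # Switch cost = mean RT on modality-switch trials minus repeat trials.
    return mean_rt(modality, "switch") - mean_rt(modality, "repeat")

def congruency_effect(modality):
    # Congruency effect = mean incongruent RT minus mean congruent RT.
    incong = mean(rt for m, _, c, rt in trials if m == modality and c == "incongruent")
    cong = mean(rt for m, _, c, rt in trials if m == modality and c == "congruent")
    return incong - cong

for m in ("visual", "auditory"):
    print(m, switch_cost(m), congruency_effect(m))
```

With these invented numbers the auditory switch cost (115 ms) exceeds the visual one (52.5 ms), mirroring the kind of asymmetry the abstract reports.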

5.
The reciprocal connections between emotion and attention are vital for adaptive behaviour. Previous results demonstrated that the behavioural effects of emotional stimuli on performance are attenuated when executive control is recruited. The current research studied whether this attenuation is modality dependent. In two experiments, negative and neutral pictures were presented shortly before a visual, tactile, or auditory target in a Simon task. All three modalities demonstrated a Simon effect, a conflict adaptation effect, and an emotional interference effect. However, the interaction between picture valence and Simon congruency was found only in the visual task. Specifically, when the Simon target was visual, emotional interference was reduced during incongruent compared to congruent trials. These findings suggest that although the control-related effects observed in the Simon tasks are not modality dependent, the link between emotion and executive control is modality dependent. Presumably, this link occurs only when the emotional stimulus and the target are presented in the same modality.

6.
Although task switching is often considered one of the fundamental abilities underlying executive functioning and general intelligence, there is little evidence that switching is a unitary construct and little evidence regarding the relationship between brain activity and switching performance. We examined individual differences in multiple types of attention shifting in order to determine whether behavioral performance and fMRI activity are correlated across different types of shifting. The participants (n=39) switched between objects and attributes both when stimuli were perceptually available (external) and when stimuli were stored in memory (internal). We found that there were more switch-related activations in many regions associated with executive control—including the dorsolateral and medial prefrontal and parietal cortices—when behavioral switch costs were higher (poor performance). Conversely, activation in the ventromedial prefrontal cortex (VMPFC) and the rostral anterior cingulate was consistently correlated with good performance, suggesting a general role for these areas in efficient attention shifting. We discuss these findings in terms of a model of cognitive-emotional interaction in attention shifting, in which reward-related signals in the VMPFC guide efficient selection of tasks in the lateral prefrontal and parietal cortices.

7.
康冠兰, 罗霄骁. 《心理科学》 (Psychological Science), 2020, (5): 1072-1078
Crossmodal information interaction refers to the set of processes by which information from one sensory modality interacts with, and influences, information from another sensory modality. It mainly involves two questions: how inputs from different sensory modalities are integrated, and how conflicts between crossmodal information are controlled. This paper reviews the behavioral and neural mechanisms of audiovisual crossmodal integration and conflict control, and discusses how attention influences audiovisual integration and conflict control. Future research should investigate the brain-network mechanisms of audiovisual crossmodal processing, and examine crossmodal integration and conflict control in special populations to help reveal the mechanisms underlying their cognitive and social dysfunction.

8.
Sandhu R, Dyson BJ. Acta Psychologica, 2012, 140(2): 111-118
Competition between the senses can lead to modality dominance, where one sense influences multi-modal processing to a greater degree than another. Modality dominance can be influenced by task demands, speeds of processing, contextual influence, and practice. To resolve previous discrepancies in these factors, we assessed modality dominance in an audio-visual paradigm controlling for the first three factors while manipulating the fourth. Following a uni-modal task in which auditory and visual processing were equated, participants completed a pre-practice selective attention bimodal task in which the congruency relationship and task-relevant modality changed across trials. Participants were given practice in one modality prior to completing a post-practice selective attention bimodal task similar to the first. The effects of practice were non-specific, as participants were speeded post-practice relative to pre-practice. Congruent stimuli, relative to incongruent stimuli, also led to increased processing efficiency. RT data tended to reveal symmetric modality switching costs, whereas the error rate data tended to reveal asymmetric modality switching costs in which switching from auditory to visual processing was particularly costly. The data suggest that when a number of safeguards are put in place to equate auditory and visual responding as far as possible, evidence for an auditory advantage can arise.

9.
Adults and children (5- and 8-year-olds) performed a temporal bisection task with either auditory or visual signals and either a short (0.5-1.0s) or long (4.0-8.0s) duration range. Their working memory and attentional capacities were assessed by a series of neuropsychological tests administered in both the auditory and visual modalities. Results showed an age-related improvement in the ability to discriminate time regardless of the sensory modality and duration. However, this improvement was seen to occur more quickly for auditory signals than for visual signals and for short durations rather than for long durations. The younger children exhibited the poorest ability to discriminate time for long durations presented in the visual modality. Statistical analyses of the neuropsychological scores revealed that an increase in working memory and attentional capacities in the visuospatial modality was the best predictor of age-related changes in temporal bisection performance for both visual and auditory stimuli. In addition, the poorer time sensitivity for visual stimuli than for auditory stimuli, especially in the younger children, was explained by the fact that the temporal processing of visual stimuli requires more executive attention than that of auditory stimuli.
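In a temporal bisection task, the usual summary statistic is the bisection point: the probe duration that the participant judges "long" on half of the trials. A minimal sketch of how it can be estimated by linear interpolation of the psychometric function; the probe durations and response proportions below are invented for illustration, not the study's data.

```python
# Hypothetical sketch of a temporal bisection analysis: estimate the
# bisection point (duration judged "long" 50% of the time) from the
# proportion of "long" responses at each probe duration.

probe_ms = [500, 583, 667, 750, 833, 917, 1000]        # short range, 0.5-1.0 s
p_long   = [0.05, 0.12, 0.30, 0.55, 0.78, 0.92, 0.97]  # invented proportions

def bisection_point(durations, proportions, criterion=0.5):
    # Find the first interval where p("long") crosses the criterion
    # and interpolate linearly within it.
    points = list(zip(durations, proportions))
    for (d0, p0), (d1, p1) in zip(points, points[1:]):
        if p0 <= criterion <= p1:
            return d0 + (criterion - p0) * (d1 - d0) / (p1 - p0)
    raise ValueError("criterion never crossed")

bp = bisection_point(probe_ms, p_long)
print(round(bp))  # crossing lies between the 667 ms and 750 ms probes
```

A steeper psychometric function around the bisection point corresponds to finer temporal sensitivity, which is why developmental studies such as this one compare both the bisection point and the slope across ages and modalities.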

10.
The visual system has been proposed to be divided into two processing streams, the ventral and the dorsal. The ventral pathway is thought to be involved in object identification, whereas the dorsal pathway processes information regarding the spatial locations of objects and the spatial relationships among objects. Several studies on working memory (WM) processing have further suggested that there is a dissociable domain-dependent functional organization within the prefrontal cortex for processing of spatial and nonspatial visual information. The auditory system has also been proposed to be organized into two domain-specific processing streams, similar to those seen in the visual system. Recent studies on auditory WM have further suggested that maintenance of nonspatial and spatial auditory information activates a distributed neural network including temporal, parietal, and frontal regions, but the magnitude of activation within these activated areas shows a different functional topography depending on the type of information being maintained. The dorsal prefrontal cortex, specifically an area of the superior frontal sulcus (SFS), has been shown to exhibit greater activity for spatial than for nonspatial auditory tasks. Conversely, ventral frontal regions have been shown to be more recruited by nonspatial than by spatial auditory tasks. It has also been shown that the magnitude of this dissociation depends on the cognitive operations required during WM processing. Moreover, there is evidence that within the nonspatial domain in the ventral prefrontal cortex, there is an across-modality dissociation during maintenance of visual and auditory information. Taken together, human neuroimaging results on both the visual and auditory sensory systems support the idea that the prefrontal cortex is organized according to the type of information being maintained in WM.

11.
Load theory predictions for the effects of task coordination between and within sensory modalities (vision and hearing or vision only) on the level of distraction were tested. Response competition effects in a visual flanker task when it was coordinated with an auditory discrimination task (between-modality conditions) or a visual discrimination task (within-modality conditions) were compared with single-task conditions. In the between-modality conditions, response competition effects were greater in the two- (vs. single-) task conditions irrespective of the level of discrimination task difficulty. In the within-modality conditions, response competition effects were greater in the two-task (vs. single-task) conditions only when these involved a more difficult visual discrimination task. The results provided support for the load theory prediction that executive control load leads to greater distractor interference while highlighting the effects of task modality.

12.
The posterior parietal cortex has been traditionally associated with coordinate transformations necessary for interaction with the environment and with visual-spatial attention. More recently, involvement of posterior parietal cortex in other cognitive functions such as working memory and task learning has become evident. Neurophysiological experiments in non-human primates and human imaging studies have revealed neural correlates of memory and learning at the single neuron and at the brain network level. During working memory, posterior parietal neurons continue to discharge and to represent stimuli that are no longer present. This activation resembles the responses of prefrontal neurons, although important differences have been identified in terms of the ability to resist stimulation by distracting stimuli, which is more evident in the prefrontal than the posterior parietal cortex. Posterior parietal neurons also become active during tasks that require the organization of information into larger structured elements and their activity is modulated according to learned context-dependent rules. Neural correlates of learning can be observed in the mean discharge rate and spectral power of neuronal spike trains after training to perform new task sets or rules. These findings demonstrate the importance of posterior parietal cortex in brain networks mediating working memory and learning.

13.
Functional magnetic resonance imaging (fMRI) was used to examine differences between children (9-12 years) and adults (21-31 years) in the distribution of brain activation during word processing. Orthographic, phonologic, semantic and syntactic tasks were used in both the auditory and visual modalities. Our two principal results were consistent with the hypothesis that development is characterized by increasing specialization. Our first analysis compared activation in children versus adults separately for each modality. Adults showed more activation than children in the unimodal visual areas of middle temporal gyrus and fusiform gyrus for processing written word forms and in the unimodal auditory areas of superior temporal gyrus for processing spoken word forms. Children showed more activation than adults for written word forms in posterior heteromodal regions (Wernicke's area), presumably for the integration of orthographic and phonologic word forms. Our second analysis compared activation in the visual versus auditory modality separately for children and adults. Children showed primarily overlap of activation in brain regions for the visual and auditory tasks. Adults showed selective activation in the unimodal auditory areas of superior temporal gyrus when processing spoken word forms and selective activation in the unimodal visual areas of middle temporal gyrus and fusiform gyrus when processing written word forms.

14.
Eighteen healthy young adults underwent event-related (ER) functional magnetic resonance imaging (fMRI) of the brain while performing a visual category learning task. The specific category learning task required subjects to extract the rules that guide classification of quasi-random patterns of dots into categories. Following each classification choice, visual feedback was presented. The average hemodynamic response was calculated across the eighteen subjects to identify the separate networks associated with both classification and feedback. Random-effects analyses identified the different networks implicated during the classification and feedback phases of each trial. The regions included prefrontal cortex, frontal eye fields, supplementary motor and eye fields, thalamus, caudate, superior and inferior parietal lobules, and areas within visual cortex. The differences between classification and feedback were identified as (i) overall higher volumes and signal intensities during classification as compared to feedback, (ii) involvement of the thalamus and superior parietal regions during the classification phase of each trial, and (iii) differential involvement of the caudate head during feedback. The effects of learning were then evaluated for both classification and feedback. Early in learning, subjects showed increased activation in the hippocampal regions during classification and activation in the heads of the caudate nuclei during the corresponding feedback phases. The findings suggest that early stages of prototype-distortion learning are characterized by networks previously associated with strategies of explicit memory and hypothesis testing. However, as learning progresses the networks change. This finding suggests that the cognitive strategies also change during prototype-distortion learning.

15.
Evidence from go/no-go performance on the Eriksen flanker task with manual responding suggests that individuals gaze at stimuli just as long as needed to identify them (e.g., Sanders, 1998). In contrast, evidence from dual-task performance with vocal responding suggests that gaze shifts occur after response selection (e.g., Roelofs, 2008a). This difference in results may be due to the nature of the task situation (go/no-go vs. dual task) or the response modality (manual vs. vocal). We examined this by having participants vocally respond to congruent and incongruent flanker stimuli and shift gaze to left- or right-pointing arrows. The arrows required a manual response (dual task) or determined whether the vocal response to the flanker stimuli had to be given or not (go/no-go). Vocal response and gaze shift latencies were longer on incongruent than congruent trials in both dual-task and go/no-go performance. The flanker effect was also present in the manual response latencies in dual-task performance. Ex-Gaussian analyses revealed that the flanker effect on the gaze shifts consisted of a shift of the entire latency distribution. These results suggest that gaze shifts occur after response selection in both dual-task and go/no-go performance with vocal responding.
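The ex-Gaussian result mentioned in this entry is easiest to see by simulation: an ex-Gaussian RT is the sum of a normal component (mu, sigma) and an exponential tail (tau), and a "shift of the entire latency distribution" corresponds to a change in mu rather than tau. A minimal sketch, with all parameter values invented for illustration:

```python
# Illustrative sketch (not the study's analysis code): simulate ex-Gaussian
# reaction times and show that a pure mu shift moves every decile of the
# distribution by roughly the same amount, i.e., a whole-distribution shift.
import random
from statistics import quantiles

random.seed(1)

def ex_gaussian_sample(mu, sigma, tau, n=20000):
    # Ex-Gaussian = normal(mu, sigma) + exponential with mean tau.
    return [random.gauss(mu, sigma) + random.expovariate(1.0 / tau)
            for _ in range(n)]

congruent   = ex_gaussian_sample(mu=450, sigma=40, tau=80)
incongruent = ex_gaussian_sample(mu=490, sigma=40, tau=80)  # mu shifted 40 ms

# With a pure mu shift, every decile moves by about 40 ms; a tau effect
# would instead inflate mainly the slow deciles.
q_con = quantiles(congruent, n=10)
q_inc = quantiles(incongruent, n=10)
shifts = [qi - qc for qc, qi in zip(q_con, q_inc)]
print([round(s) for s in shifts])
```

In a real analysis the mu, sigma, and tau parameters would be fitted to the observed latencies rather than assumed; the simulation only illustrates what the two patterns look like.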

16.
When participants judge multimodal audiovisual stimuli, the auditory information strongly dominates temporal judgments, whereas the visual information dominates spatial judgments. However, temporal judgments are not independent of spatial features. For example, in the kappa effect, the time interval between two marker stimuli appears longer when they originate from spatially distant sources rather than from the same source. We investigated the kappa effect for auditory markers presented with accompanying irrelevant visual stimuli. The spatial sources of the markers were varied such that they were either congruent or incongruent across modalities. In two experiments, we demonstrated that the spatial layout of the visual stimuli affected perceived auditory interval duration. This effect occurred although the visual stimuli were designated to be task-irrelevant for the duration reproduction task in Experiment 1, and even when the visual stimuli did not contain sufficient temporal information to perform a two-interval comparison task in Experiment 2. We conclude that the visual and auditory marker stimuli were integrated into a combined multisensory percept containing temporal as well as task-irrelevant spatial aspects of the stimulation. Through this multisensory integration process, visuospatial information affected even temporal judgments, which are typically dominated by the auditory modality.

17.
Visual stimuli are often processed more efficiently than accompanying stimuli in another modality. In line with this “visual dominance”, earlier studies on attentional switching showed a clear benefit for visual stimuli in a bimodal visual–auditory modality-switch paradigm that required spatial stimulus localization in the relevant modality. The present study aimed to examine the generality of this visual dominance effect. The modality appropriateness hypothesis proposes that stimuli in different modalities are differentially effectively processed depending on the task dimension, so that processing of visual stimuli is favored in the dimension of space, whereas processing auditory stimuli is favored in the dimension of time. In the present study, we examined this proposition by using a temporal duration judgment in a bimodal visual–auditory switching paradigm. Two experiments demonstrated that crossmodal interference (i.e., temporal stimulus congruence) was larger for visual stimuli than for auditory stimuli, suggesting auditory dominance when performing temporal judgment tasks. However, attention switch costs were larger for the auditory modality than for visual modality, indicating a dissociation of the mechanisms underlying crossmodal competition in stimulus processing and modality-specific biasing of attentional set.

19.
The exact roles of the medial prefrontal cortex (mPFC) in conditional choice behavior are unknown; a visual contextual response selection task was used to examine this issue. Inactivation of the mPFC severely disrupted performance in the task. mPFC inactivations, however, did not disrupt the capability of perceptual discrimination for visual stimuli. Normal response selection was also observed when nonvisual cues were used as conditional stimuli. The results strongly suggest that the mPFC is not necessarily involved in the inhibition of responses or in flexible response selection in general, but is rather critical when response selection must be made conditionally on the visual context in the background.

20.
Negative priming with auditory as well as with visual stimuli has been shown to involve the retrieval of prime response information, as evidenced by an increase of prime response errors to the probes of ignored repetition trials compared to control trials. We investigated whether prime response retrieval processes were also present for response modalities other than manual responding. In an auditory four-alternative forced-choice task, participants either vocally or manually identified a target sound while ignoring a distractor sound. Negative priming was of equal size in both response modalities. Moreover, for both response modalities, there was evidence of increased prime response errors in ignored repetition trials compared to control trials. The findings suggest that retrieval of event files of the prime episode, including prime response information, is a general mechanism underlying the negative priming phenomenon irrespective of stimulus or response modality.
