Similar Literature
20 similar records found (search time: 15 ms)
1.
Visual temporal processing and multisensory integration (MSI) of sound and vision were examined in individuals with schizophrenia using a visual temporal order judgment (TOJ) task. Compared to a non-psychiatric control group, persons with schizophrenia were less sensitive in judging the temporal order of two successively presented visual stimuli. However, their sensitivity to visual temporal order improved, as it did in the control group, when two accessory sounds were added (temporal ventriloquism). These findings indicate that individuals with schizophrenia have diminished sensitivity to visual temporal order, but no deficits in the integration of low-level auditory and visual information.
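TOJ sensitivity of the kind measured here is typically estimated by fitting a cumulative Gaussian to the proportion of "second stimulus first" responses as a function of stimulus onset asynchrony (SOA); the fitted spread (JND) indexes sensitivity and the fitted midpoint (PSS) indexes bias. A minimal sketch, with invented data and parameter names, not the study's actual analysis:

```python
import numpy as np
from scipy.optimize import curve_fit
from scipy.stats import norm

def psychometric(soa, pss, jnd):
    """Cumulative Gaussian: P('second stimulus first') as a function of SOA (ms)."""
    return norm.cdf(soa, loc=pss, scale=jnd)

# Hypothetical response proportions; negative SOA = first stimulus led by more
soas = np.array([-80, -40, -20, 0, 20, 40, 80])
p_second_first = np.array([0.05, 0.15, 0.35, 0.50, 0.70, 0.85, 0.95])

(pss, jnd), _ = curve_fit(psychometric, soas, p_second_first, p0=[0.0, 30.0])
print(f"PSS = {pss:.1f} ms, JND = {jnd:.1f} ms")  # smaller JND = higher sensitivity
```

A group with diminished temporal sensitivity, as reported above, would show a larger fitted JND.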

2.
The coherent experience of the self and the world depends on the ability to integrate vs. segregate sensory information. Optimal temporal integration between the senses is mediated by oscillatory properties of neural activity. Previous research showed reduced temporal sensitivity to multisensory events in schizotypy, a personality trait linked to schizophrenia. Here we used the tactile-induced Double-Flash-Illusion (tDFI) to investigate tactile-to-visual temporal sensitivity in schizotypy, as indexed by the temporal window of illusion (TWI) and its neural underpinnings. We measured EEG oscillations within the beta band, recently shown to correlate with the tDFI. We found that individuals with higher schizotypal traits had wider TWIs and slower beta waves, which accounted for the temporal window within which they perceived the illusion. Our results indicate that reduced tactile-to-visual temporal sensitivity mediates the effect of slowed oscillatory beta activity on schizotypal personality traits. We conclude that slowed oscillatory patterns might constitute an early marker for psychosis proneness.
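The proposed link between beta frequency and the width of the temporal window follows from cycle length: one oscillatory cycle at frequency f spans 1000/f ms, so slower beta implies a longer cycle and hence a wider window within which two events can be bound into one percept. The specific frequencies below are illustrative, not the study's measurements:

```python
def cycle_length_ms(freq_hz):
    """Duration of one oscillatory cycle, in milliseconds."""
    return 1000.0 / freq_hz

print(cycle_length_ms(25))  # faster beta: one cycle = 40 ms, narrower window
print(cycle_length_ms(16))  # slower beta: one cycle = 62.5 ms, wider window
```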

3.
Using functional magnetic resonance imaging (fMRI), we investigated brain activity evoked by mutual and averted gaze in a compelling and commonly experienced social encounter. Through virtual-reality goggles, subjects viewed a man who walked toward them and shifted his neutral gaze either toward (mutual gaze) or away (averted gaze) from them. Robust activity was evoked in the superior temporal sulcus (STS) and fusiform gyrus (FFG). For both conditions, STS activity was strongly right lateralized. Mutual gaze evoked greater activity in the STS than did averted gaze, whereas the FFG responded equivalently to mutual and averted gaze. Thus, we show that the STS is involved in processing social information conveyed by shifts in gaze within an overtly social context. This study extends understanding of the role of the STS in social cognition and social perception by demonstrating that it is highly sensitive to the context in which a human action occurs.

4.
A common assumption is that phonetic sounds initiate unique processing in the superior temporal gyri and sulci (STG/STS). The anatomical areas subserving these processes are also implicated in the processing of non-phonetic stimuli such as music instrument sounds. The differential processing of phonetic and non-phonetic sounds was investigated in this study by applying a “sound-morphing” paradigm, where the presence of phonetic features was parametrically varied, creating a step-wise transition from a non-phonetic sound into a phonetic sound. The stimuli were presented in an event-related fMRI design. The fMRI-BOLD data were analysed using parametric contrasts. The results showed a higher sensitivity for sounds containing phonetic features compared to non-phonetic sounds in the middle part of STG, and in the anterior part of the planum temporale (PT) bilaterally. Although the same areas were involved in the processing of non-phonetic sounds, a difference in activation was evident in the STG, with activation increasing as the phonetic features in the sounds increased. The results indicate a stimulus-driven, bottom-up process that utilizes general auditory resources in the secondary auditory cortex, depending on specific phonetic features in the sounds.

5.
Numerous studies implicate superior temporal sulcus (STS) in the perception of human movement. More recent theories hold that STS is also involved in the understanding of human movement. However, almost no studies to date have associated STS function with observable variability in action understanding. The present study directly associated STS activity with performance on a challenging task requiring the interpretation of human movement. During functional MRI scanning, fourteen adults were asked to identify the direction (left or right) in which either a point-light walking figure or a spinning wheel was moving. The task was made challenging by perturbing the dot trajectories to a level (determined via pretesting) where each participant achieved 72% accuracy. The walking figure condition was associated with increased activity in a constellation of social information processing and biological motion areas, including STS, MT+/V5, right pars opercularis (inferior frontal gyrus), fusiform gyrus, and amygdala. Correctly answered walking figure trials were uniquely associated with increased activity in two right hemisphere STS clusters and right amygdala. Present findings provide some of the strongest evidence to date that STS plays a critical role in the successful interpretation of human movement.

6.
When viewing a portrait, we are often captured by its expressivity, even if the emotion depicted is not immediately identifiable. While the neural mechanisms underlying emotion processing of real faces have been largely clarified, we still know little about the neural basis of the evaluation of (emotional) expressivity in portraits. In this study, we aimed at assessing—by means of transcranial magnetic stimulation (TMS)—whether the right superior temporal sulcus (STS) and the right somatosensory cortex (SC), which are important in discriminating facial emotion expressions, are also causally involved in the evaluation of expressivity of portraits. We found that interfering via TMS with activity in (the face region of) right STS significantly reduced the extent to which portraits (but not other paintings depicting human figures with faces only in the background) were perceived as expressive, without, though, affecting their liking. In turn, interfering with activity of the right SC had no impact on evaluating either expressivity or liking of either paintings’ category. Our findings suggest that the evaluation of emotional cues in artworks recruits (at least partially) the same neural mechanisms involved in processing genuine biological others. Moreover, they shed light on the neural basis of liking decisions in art by art-naïve people, supporting the view that aesthetic appreciation relies on a multitude of factors beyond emotional evaluation.

7.
The perception and processing of temporal information are tasks the brain must continuously perform. These include measuring the duration of stimuli, storing duration information in memory, recalling such memories, and comparing two durations. How the brain accomplishes these tasks, however, is still open for debate. The temporal bisection task, which requires subjects to compare temporal stimuli to durations held in memory, is perfectly suited to address these questions. Here we perform a meta-analysis of human performance on the temporal bisection task collected from 148 experiments spread across 18 independent studies. With this expanded data set we are able to show that human performance on this task contains a number of significant peculiarities that no single previously proposed model has been able to explain in full. Here we present a simple 2-step decision model that is capable of explaining all the idiosyncrasies seen in the data.
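As a toy illustration of the bisection task itself (not the authors' 2-step model), a scalar-timing simulation in which noisy duration estimates are compared with the geometric mean of the short and long anchors reproduces the canonical sigmoid of "long" responses. Anchor values, the Weber fraction, and trial counts below are all illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)
short, long_ = 200.0, 800.0          # anchor durations in ms (hypothetical)
criterion = np.sqrt(short * long_)   # geometric mean = 400 ms, the classic bisection point

def p_long(duration, weber=0.15, n_trials=10_000):
    """Proportion of 'long' responses for a probe duration under scalar noise."""
    estimates = rng.normal(duration, weber * duration, n_trials)  # scalar variability
    return np.mean(estimates > criterion)

for d in [200, 300, 400, 500, 600, 800]:
    print(d, round(p_long(d), 2))
```

Deviations of real data from this simple picture are exactly the kind of idiosyncrasy the abstract's meta-analysis documents.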

8.
This study investigates the temporal resolution capacities of the central-auditory system in a subject (NP) suffering from repetition conduction aphasia. More specifically, the patient was asked to detect brief gaps between two stretches of broadband noise (gap detection task) and to evaluate the duration of two biphasic (WN-3) continuous noise elements, starting with white noise (WN) followed by 3 kHz bandpass-filtered noise (duration discrimination task). During the gap detection task, the two portions of each stimulus were either identical (“intra-channel condition”) or differed (“inter-channel condition”) in the spectral characteristics of the leading and trailing acoustic segments. NP did not exhibit any deficits in the intra-channel condition of the gap detection task, indicating intact auditory temporal resolution across intervals of 1–3 ms. By contrast, the inter-channel condition yielded increased threshold values. Based upon the “multiple-looks” model of central-auditory processing, this profile points to a defective integration window operating across a few tens of milliseconds – a temporal range associated with critical features of the acoustic speech signal such as voice onset time and formant transitions. Additionally, NP was found impaired during a duration discrimination task addressing longer integration windows (ca. 150 ms). Concerning speech, this latter time domain approximately corresponds to the duration of stationary segmental units such as fricatives and long vowels. On the basis of our results we suggest that the patient’s auditory timing deficits in non-speech tasks may account, at least partially, for his impairments in speech processing.
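Gap-detection thresholds of the kind reported above are commonly estimated with an adaptive staircase, e.g. a 2-down/1-up rule that converges on roughly 70.7% correct. A hedged sketch with a simulated listener (the true threshold, slope, and step size are invented, not NP's data):

```python
import numpy as np

rng = np.random.default_rng(1)

def simulated_listener(gap_ms, true_threshold=5.0, slope=1.5):
    """Detects a gap with probability given by a logistic psychometric function."""
    p = 1.0 / (1.0 + np.exp(-slope * (gap_ms - true_threshold)))
    return rng.random() < p

def staircase_2down1up(start=20.0, step=1.0, n_reversals=12):
    gap, direction, correct_run, reversals = start, -1, 0, []
    while len(reversals) < n_reversals:
        if simulated_listener(gap):
            correct_run += 1
            if correct_run == 2:                 # two correct in a row -> harder
                correct_run = 0
                if direction == +1:
                    reversals.append(gap)        # turning point going up -> down
                direction = -1
                gap = max(gap - step, 0.5)
        else:                                    # one miss -> easier
            correct_run = 0
            if direction == -1:
                reversals.append(gap)            # turning point going down -> up
            direction = +1
            gap += step
    return np.mean(reversals[-8:])               # threshold = mean of late reversals

print(f"estimated gap threshold ≈ {staircase_2down1up():.1f} ms")
```

An inter-channel deficit like NP's would show up as this estimate rising sharply when the leading and trailing noise segments differ spectrally.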

9.
Stimulus contrast and duration effects on visual temporal integration and order judgment were examined in a unified paradigm. Stimulus onset asynchrony was governed by the duration of the first stimulus in Experiment 1, and by the interstimulus interval in Experiment 2. In Experiment 1, integration and order uncertainty increased when a low contrast stimulus followed a high contrast stimulus, but only when the second stimulus lasted 20 or 30 ms. At a 10 ms duration of the second stimulus, integration and uncertainty decreased. Temporal order judgments at all durations of the second stimulus were better for a low contrast stimulus following a high contrast one. By contrast, in Experiment 2, a low contrast stimulus following a high contrast stimulus consistently produced higher integration rates, order uncertainty, and lower order accuracy. Contrast and duration thus interacted, breaking the correspondence between integration and order perception. The results are interpreted in a tentative conceptual framework.
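The two ways of controlling stimulus onset asynchrony mentioned above differ only in which term of SOA = first-stimulus duration + interstimulus interval is varied. A trivial check (numbers are illustrative):

```python
def soa(first_duration_ms, isi_ms):
    """Stimulus onset asynchrony = first stimulus duration + interstimulus interval."""
    return first_duration_ms + isi_ms

# Experiment-1 style: vary the first stimulus' duration, ISI fixed at 0
print(soa(40, 0))   # -> 40
# Experiment-2 style: duration fixed, vary the ISI; same SOA, different composition
print(soa(10, 30))  # -> 40
```

The abstract's point is that these two routes to the same SOA are not equivalent for integration and order perception.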

10.
Associating crossmodal auditory and visual stimuli is an important component of perception, with the posterior superior temporal sulcus (pSTS) hypothesized to support this. However, recent evidence has argued that the pSTS serves to associate two stimuli irrespective of modality. To examine the contribution of pSTS to crossmodal recognition, participants (N = 13) learned 12 abstract, non-linguistic pairs of stimuli over 3 weeks. These paired associates comprised four types: auditory–visual (AV), auditory–auditory (AA), visual–auditory (VA), and visual–visual (VV). At week four, participants were scanned using magnetoencephalography (MEG) while performing a correct/incorrect judgment on pairs of items. Using an implementation of synthetic aperture magnetometry that computes real statistics across trials (SAMspm), we directly contrasted crossmodal (AV and VA) with unimodal (AA and VV) pairs from stimulus-onset to 2 s in theta (4–8 Hz), alpha (9–15 Hz), beta (16–30 Hz), and gamma (31–50 Hz) frequencies. We found pSTS showed greater desynchronization in the beta frequency for crossmodal compared with unimodal trials, suggesting greater activity during the crossmodal pairs, which was not influenced by congruency of the paired stimuli. Using a sliding window SAM analysis, we found the timing of this difference began in a window from 250 to 750 ms after stimulus-onset. Further, when we directly contrasted all sub-types of paired associates from stimulus-onset to 2 s, we found that pSTS seemed to respond to dynamic, auditory stimuli, rather than crossmodal stimuli per se. These findings support an early role for pSTS in the processing of dynamic, auditory stimuli, and do not support claims that pSTS is responsible for associating two stimuli irrespective of their modality.
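The frequency bands used in this contrast can be illustrated with a generic band-power computation (Welch PSD averaged within each band). This is a simplified stand-in on a synthetic signal, not SAMspm beamforming; the sampling rate and the 20 Hz source are assumed for the example:

```python
import numpy as np
from scipy.signal import welch

fs = 600  # Hz, a plausible MEG sampling rate (assumed)
bands = {"theta": (4, 8), "alpha": (9, 15), "beta": (16, 30), "gamma": (31, 50)}

rng = np.random.default_rng(2)
t = np.arange(0, 2.0, 1 / fs)                  # a 2 s epoch, matching the analysis window
signal = np.sin(2 * np.pi * 20 * t) + 0.5 * rng.standard_normal(t.size)  # 20 Hz "beta" source

freqs, psd = welch(signal, fs=fs, nperseg=fs)  # 1 Hz frequency resolution

def band_power(name):
    lo, hi = bands[name]
    mask = (freqs >= lo) & (freqs <= hi)
    return psd[mask].mean()

for name in bands:
    print(name, band_power(name))
```

Event-related desynchronization, as reported for beta in the crossmodal condition, would correspond to a post-stimulus *drop* in such band power relative to baseline.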

11.
Purpose: Adults who stutter speak more fluently during choral speech contexts than they do during solo speech contexts. The underlying mechanisms for this effect remain unclear, however. In this study, we examined the extent to which the choral speech effect depended on presentation of intact temporal speech cues. We also examined whether speakers who stutter followed choral signals more closely than typical speakers did. Method: 8 adults who stuttered and 8 adults who did not stutter read 60 sentences aloud during a solo speaking condition and three choral speaking conditions (240 total sentences), two of which featured either temporally altered or indeterminate word duration patterns. Effects of these manipulations on speech fluency, rate, and temporal entrainment with the choral speech signal were assessed. Results: Adults who stutter spoke more fluently in all choral speaking conditions than they did when speaking solo. They also spoke slower and exhibited closer temporal entrainment with the choral signal during the mid- to late-stages of sentence production than the adults who did not stutter. Both groups entrained more closely with unaltered choral signals than they did with altered choral signals. Conclusions: Findings suggest that adults who stutter make greater use of speech-related information in choral signals when talking than adults with typical fluency do. The presence of fluency facilitation during temporally altered choral speech and conversation babble, however, suggests that temporal/gestural cueing alone cannot account for fluency facilitation in speakers who stutter. Other potential fluency enhancing mechanisms are discussed. Educational Objectives: The reader will be able to (a) summarize competing views on stuttering as a speech timing disorder, (b) describe the extent to which adults who stutter depend on an accurate rendering of temporal information in order to benefit from choral speech, and (c) discuss possible explanations for fluency facilitation in the presence of inaccurate or indeterminate temporal cues.

12.
Models of both speech perception and speech production typically postulate a processing level that involves some form of phonological processing. There is disagreement, however, on the question of whether there are separate phonological systems for speech input versus speech output. We review a range of neuroscientific data that indicate that input and output phonological systems partially overlap. An important anatomical site of overlap appears to be the left posterior superior temporal gyrus. We then present the results of a new event-related functional magnetic resonance imaging (fMRI) experiment in which participants were asked to listen to and then (covertly) produce speech. In each participant, we found two regions in the left posterior superior temporal gyrus that responded both to the perception and production components of the task, suggesting that there is overlap in the neural systems that participate in phonological aspects of speech perception and speech production. The implications for neural models of verbal working memory are also discussed in connection with our findings.

13.
The second year of life is a time when social communication skills typically develop, but this growth may be slower in toddlers with language delay. In the current study, we examined how brain functional connectivity is related to social communication abilities in a sample of 12- to 24-month-old toddlers including those with typical development (TD) and those with language delays (LD). We used an a priori, seed-based approach to identify regions forming a functional network with the left posterior superior temporal cortex (LpSTC), a region associated with language and social communication in older children and adults. Social communication and language abilities were assessed using the Communication and Symbolic Behavior Scales (CSBS) and Mullen Scales of Early Learning. We found a significant association between concurrent CSBS scores and functional connectivity between the LpSTC and the right posterior superior temporal cortex (RpSTC), with greater connectivity between these regions associated with better social communication abilities. However, functional connectivity was not related to rate of change or language outcomes at 36 months of age. These data suggest that decreased connectivity between the left and right pSTC may be an early marker of low communication abilities. Future longitudinal studies should test whether this neurobiological feature is predictive of later social or communication impairments.
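Seed-based functional connectivity of the kind computed here reduces, in its simplest form, to the correlation between the seed region's BOLD time series and that of each other region. A toy example with synthetic signals (the variable names, noise level, and number of volumes are assumptions for illustration):

```python
import numpy as np

rng = np.random.default_rng(3)
n_vols = 150                                         # fMRI time points (assumed)
shared = rng.standard_normal(n_vols)                 # common underlying fluctuation
lpstc = shared + 0.5 * rng.standard_normal(n_vols)   # seed ROI time series
rpstc = shared + 0.5 * rng.standard_normal(n_vols)   # contralateral ROI time series

connectivity = np.corrcoef(lpstc, rpstc)[0, 1]       # seed-based functional connectivity
print(round(connectivity, 2))
```

In the study's terms, toddlers with better CSBS scores would show a larger LpSTC–RpSTC correlation of this kind.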

14.
In this paper, I argue that the intentional structure of typical human conscious experience has “modal breadth”—that the contents of experience typically include alternate possibilities. I support this claim with analyses of conscious mental processes such as the perception of temporally extended events, persistent objects, and causality, and the experience of bodily agency. While modal breadth may not be strictly necessary for consciousness per se, it is essential to many cognitive processes that are pervasive and functionally important to normal human consciousness.

15.
Adults rapidly learn phonotactic constraints from brief production or perception experience. Three experiments asked whether this learning is modality-specific, occurring separately in production and perception, or whether perception transfers to production. Participant pairs took turns repeating syllables in which particular consonants were restricted to particular syllable positions. Speakers’ errors reflected learning of the constraints present in the sequences they produced, regardless of whether their partner produced syllables with the same constraints, or opposing constraints. Although partial transfer could be induced (Experiment 3), simply hearing and encoding syllables produced by others did not affect speech production to the extent that error patterns were altered. Learning of new phonotactic constraints was predominantly restricted to the modality in which those constraints were experienced.

16.
Previous evidence has shown that 11-month-olds represent ordinal relations between purely numerical values, whereas younger infants require a confluence of numerical and non-numerical cues. In this study, we show that when multiple featural cues (i.e., color and shape) are provided, 7-month-olds detect reversals in the ordinal direction of numerical sequences relying solely on number when non-numerical quantitative cues are controlled. These results provide evidence for the influence of featural information and multiple cue integration on infants’ proneness to detect ordinal numerical information.

17.
Attention operates in the space near the hands with unique, action-related priorities. Here, we examined how attention treats objects on the hands themselves. We tested two hypotheses. First, attention may treat stimuli on the hands like stimuli near the hands, as though the surface of the hands were the proximal case of near-hand space. Alternatively, we proposed that the surface of the hands may be attentionally distinct from the surrounding space. Specifically, we predicted that attention should be slow to orient toward the hands in order to remain entrained to near-hand space, where the targets of actions are usually located. In four experiments, we observed delayed orienting of attention on the hands compared to orienting attention near or far from the hands. Similar delayed orienting was also found for tools connected to the body compared to tools disconnected from the body. These results support our second hypothesis: attention operates differently on the functional surfaces of the hand. We suggest this effect serves a functional role in the execution of manual actions.

18.
We examined the relationship between subcomponents of embodiment and multisensory integration using a mirror box illusion. The participants’ left hand was positioned against the mirror, while their right hidden hand was positioned 12″, 6″, or 0″ from the mirror – creating a conflict between visual and proprioceptive estimates of limb position in some conditions. After synchronous tapping, asynchronous tapping, or no movement of both hands, participants gave position estimates for the hidden limb and filled out a brief embodiment questionnaire. We found a relationship between different subcomponents of embodiment and illusory displacement towards the visual estimate. Illusory visual displacement was positively correlated with feelings of deafference in the asynchronous and no movement conditions, whereas it was positively correlated with ratings of visual capture and limb ownership in the synchronous and no movement conditions. These results provide evidence for dissociable contributions of different aspects of embodiment to multisensory integration.

19.
The role of extrastriate cortical areas in selective attention was studied in 12 rhesus monkeys. Animals learned a series of color-form pattern discrimination problems, with either color or form cues relevant. After each problem was mastered, correct behavior required a shift in attention, i.e., that responses be made to the previously irrelevant dimension. On some problems shifting attention required that the animal maintain the same fixation; on other problems the color and form cues were separated in space, and the attention shift presumably required a shift in gaze. Matched groups of animals with inferotemporal, prestriate, or superior temporal sulcus lesions, and normal controls, differed significantly in their ability to shift attention. Analyses of inferred stages in attention shift showed that different processes were disturbed in the three lesion groups. Results are discussed in terms of cortical substrates for "looking" and "seeing".

20.
The prior entry hypothesis contends that attention accelerates sensory processing, shortening the time to perception. Typical observations supporting the hypothesis may be explained equally well by response biases, changes in decision criteria, or sensory facilitation. In a series of experiments conducted to discriminate among the potential mechanisms, observers judged the simultaneity or temporal order of two stimuli, to one of which attention was oriented by exogenous, endogenous, gaze-directed, or multiple exogenous cues. The results suggest that prior entry effects are primarily caused by sensory facilitation and attentional modifications of the decision mechanism, with only a small part possibly due to an attention-dependent sensory acceleration.


Copyright©北京勤云科技发展有限公司  京ICP备09084417号