Similar articles
20 similar documents found (search time: 15 ms)
1.
Time is essential to speech. The duration of speech segments plays a critical role in the perceptual identification of these segments, and therefore in that of spoken words. Here, using a French word identification task, we show that vowels are perceived as shorter when attention is divided between two tasks, as compared to a single-task control condition. This temporal underestimation pattern is consistent with attentional models of timing and hence demonstrates that vowel duration is explicitly estimated using a central general-purpose timer.

2.
The anatomy of auditory word processing: individual variability
This study used functional magnetic resonance imaging (fMRI) to investigate the neural substrate underlying the processing of single words, comparing activation patterns across subjects and within individuals. In a word repetition task, subjects repeated single words aloud with instructions not to move their jaws. In a control condition involving reverse speech, subjects heard a digitally reversed speech token and said aloud the word "crime." The averaged fMRI results showed activation in the left posterior temporal and inferior frontal regions and in the supplementary motor area, similar to previous PET studies. However, the individual subject data revealed variability in the location of the temporal and frontal activation. Although these results support previous imaging studies, demonstrating an averaged localization of auditory word processing in the posterior superior temporal gyrus (STG), they are more consistent with traditional neuropsychological data, which suggest both a typical posterior STG localization and substantial individual variability. By using careful head restraint and movement analysis and correction methods, the present study further demonstrates the feasibility of using overt articulation in fMRI experiments.

3.
The analysis of pure word deafness (PWD) suggests that speech perception, construed as the integration of acoustic information to yield representations that enter into the linguistic computational system, (i) is separable in a modular sense from other aspects of auditory cognition and (ii) is mediated by the posterior superior temporal cortex in both hemispheres. PWD data are consistent with neuropsychological and neuroimaging evidence in a manner that suggests that the speech code is analyzed bilaterally. The typical lateralization associated with language processing is a property of the computational system that acts beyond the analysis of the input signal. The hypothesis of the bilateral mediation of the speech code does not imply that both sides execute the same computation. It is proposed that the speech signal is asymmetrically analyzed in the time domain, with left-hemisphere mechanisms preferentially extracting information over shorter (25–50 ms) temporal integration windows and right-hemisphere mechanisms over longer (150–250 ms) windows.

4.
Skilled movement is mediated by motor commands executed with extremely fine temporal precision. The question of how the brain incorporates temporal information to perform motor actions has remained unanswered. This study investigated the effect of stimulus temporal predictability on response timing of speech and hand movement. Subjects performed a randomized vowel vocalization or button press task in two counterbalanced blocks in response to temporally predictable and unpredictable visual cues. Results indicated that speech and hand reaction time was decreased for predictable compared with unpredictable stimuli. This finding suggests that a temporal predictive code is established to capture the temporal dynamics of sensory cues in order to produce faster movements in response to predictable stimuli. In addition, results revealed a main effect of modality, indicating faster hand movement compared with speech. We suggest that this effect is accounted for by the inherent complexity of speech production compared with hand movement. Lastly, we found that movement inhibition was faster than initiation for both hand and speech, suggesting that movement initiation requires a longer processing time to coordinate activities across multiple regions in the brain. These findings provide new insights into the mechanisms of temporal information processing during initiation and inhibition of speech and hand movement.

5.
In this research the role of the right hemisphere (RH) in the comprehension of speech acts (or illocutionary force) was examined. Two split-screen experiments were conducted in which participants made lexical decisions for lateralized targets after reading a brief conversation remark. On one-half of the trials the target word named the speech act performed with the preceding conversation remark; on the remaining trials the target did not name the speech act that the remark performed. In both experiments, lexical decisions were facilitated for targets representing the speech act performed with the prior utterance, but only when the target was presented to the left visual field (and hence initially processed by the RH) and not when presented to the right visual field. This effect occurred at both short (Experiment 1: 250 ms) and long (Experiment 2: 1000 ms) delays. The results demonstrate the critical role played by the RH in conversation processing.

6.
Discriminating temporal relationships in speech is crucial for speech and language development. However, temporal variation of vowels is difficult to perceive for young infants when it is determined by surrounding speech sounds. Using a familiarization-discrimination paradigm, we show that English-learning 6- to 9-month-olds are capable of discriminating non-native acoustic vowel duration differences that systematically vary with subsequent consonantal durations. Furthermore, temporal regularity of stimulus presentation potentially makes the task easier for infants. These findings show that young infants can process fine-grained temporal aspects of speech sounds, a capacity that lays the foundation for building a phonological system of their ambient language(s).

7.
Functional magnetic resonance imaging (fMRI) distinguished regions of neural activity associated with active maintenance of semantic and phonological information. Subjects saw a single word for 2 sec, and following a 10-sec delay, made a judgment about that word. In the semantic task, subjects focused on the meaning of the word and decided whether a second word was synonymous with it. In the phonological task, subjects repeated the word silently and decided whether it shared a vowel sound with a nonsense word. Analyses allowed for isolation of neural activity during the maintenance delay. Semantic maintenance elicited greater activity in bilateral inferior frontal gyrus and left middle temporal gyrus regions of interest (ROIs). In contrast, there was greater activity for phonological maintenance in the left superior parietal ROI. These results show a frontal-temporal network involved in actively maintaining the meanings of words, and they indicate that semantic and phonological maintenance processes are dissociable within working memory.

8.
This paper describes a novel methodology for the detection of speech patterns. Lagged co-occurrence analysis (LCA) utilizes the likelihood that a target word will be uttered in a certain position after a trigger word. Using this methodology, it is possible to uncover statistically significant repetitive temporal patterns of word use, compared to a random choice of words. To demonstrate this new tool on autobiographical narratives, 200 subjects each related a 5-min story; these stories were transcribed and subjected to LCA, using software written by the author. This study focuses on establishing the usefulness of LCA in psychological research by examining its associations with gender. The application of LCA to the corpus of personal narratives revealed significant differences in the temporal patterns of use of the word "I" between male and female speakers. This finding is particularly demonstrative of the potential for studying temporal speech patterns using LCA, as men and women tend to utter the pronoun "I" at comparable frequencies. Specifically, LCA of the personal narratives showed that, on average, men tended to have shorter intervals between their uses of the pronoun, while women spoke longer between two subsequent utterances of it. The results of this study are discussed in light of psycholinguistic factors governing male and female speech communities.
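The author's own LCA software is not described in detail, but the core idea of the abstract — counting how often a target word appears at each lag after a trigger word and comparing against a shuffled-transcript baseline — can be sketched as follows. All names here (`lagged_cooccurrence`, `shuffled_baseline`, `max_lag`, `n_iter`) are illustrative assumptions, not the author's API, and the significance test itself is omitted.

```python
import random
from collections import Counter

def lagged_cooccurrence(tokens, trigger, target, max_lag=10):
    """Count how often `target` occurs exactly k tokens after `trigger`,
    for each lag k = 1..max_lag."""
    counts = Counter()
    trigger_positions = [i for i, tok in enumerate(tokens) if tok == trigger]
    for i in trigger_positions:
        for k in range(1, max_lag + 1):
            if i + k < len(tokens) and tokens[i + k] == target:
                counts[k] += 1
    return counts

def shuffled_baseline(tokens, trigger, target, max_lag=10, n_iter=200, seed=0):
    """Mean per-lag counts over randomly shuffled transcripts — the
    'random choice of words' reference distribution."""
    rng = random.Random(seed)
    totals = Counter()
    for _ in range(n_iter):
        shuffled = list(tokens)
        rng.shuffle(shuffled)
        totals.update(lagged_cooccurrence(shuffled, trigger, target, max_lag))
    return {k: totals[k] / n_iter for k in range(1, max_lag + 1)}

# Example: temporal pattern of the pronoun "i" following itself.
transcript = "i went home and then i slept because i was tired".split()
observed = lagged_cooccurrence(transcript, "i", "i")
expected = shuffled_baseline(transcript, "i", "i")
```

Comparing `observed` against `expected` lag by lag would then reveal whether a speaker's inter-"I" intervals are more regular than chance, which is the gender contrast the study reports.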

9.
Using fMRI we investigated the neural basis of audio–visual processing of speech and non-speech stimuli using physically similar auditory stimuli (speech and sinusoidal tones) and visual stimuli (animated circles and ellipses). Relative to uni-modal stimuli, the different multi-modal stimuli showed increased activation in largely non-overlapping areas. Ellipse-Speech, which most resembles naturalistic audio–visual speech, showed higher activation in the right inferior frontal gyrus, fusiform gyri, left posterior superior temporal sulcus, and lateral occipital cortex. Circle-Tone, an arbitrary audio–visual pairing with no speech association, activated middle temporal gyri and lateral occipital cortex. Circle-Speech showed activation in lateral occipital cortex, and Ellipse-Tone did not show increased activation relative to uni-modal stimuli. Further analysis revealed that middle temporal regions, although identified as multi-modal only in the Circle-Tone condition, were more strongly active to Ellipse-Speech or Circle-Speech, but regions that were identified as multi-modal for Ellipse-Speech were always strongest for Ellipse-Speech. Our results suggest that combinations of auditory and visual stimuli may together be processed by different cortical networks, depending on the extent to which multi-modal speech or non-speech percepts are evoked.

10.
Reduced neural activation has been consistently observed during the processing of repeated items, a phenomenon termed repetition suppression. The present study used functional magnetic resonance imaging (fMRI) to investigate whether and how emotional valence affects repetition suppression, using Chinese personality-trait words as materials. Seventeen participants were required to read negative and neutral Chinese personality-trait words silently. They were then presented with repeated and novel items during scanning. Results showed significant repetition suppression in the inferior occipital gyrus only for neutral personality-trait words, whereas similar repetition suppression in the left inferior temporal gyrus and left middle temporal gyrus was revealed for both word types. These results indicate common and distinct neural substrates during the processing of repeated negative and neutral Chinese personality-trait words.

11.
Native speakers of a language are often unable to consciously perceive, and have altered neural responses to, phonemic contrasts not present in their language. This study examined whether speakers of dialects of the same language with different phoneme inventories also show measurably different neural responses to contrasts not present in their dialect. Speakers with (n=11) and without (n=11) an American English I/E (pin/pen) vowel merger in speech production were asked to discriminate perceptually between minimal pairs of words that contrasted in the critical vowel merger and minimal pairs of control words while their event-related potentials (ERPs) were recorded. Compared with unmerged dialect speakers, merged dialect speakers were less able to make behavioral discriminations and exhibited a reduced late positive ERP component (LPC) effect to incongruent merger vowel stimuli. These results indicate that between dialects of a single language, behavioral response differences may reflect neural differences related to conscious phonological decision processes.

12.
Two experiments investigated the mechanism by which listeners adjust their interpretation of accented speech that is similar to a regional dialect of American English. Only a subset of the vowels of English (the front vowels) were shifted during adaptation, which consisted of listening to a 20-min segment of the "Wizard of Oz." Compared to a baseline (unadapted) condition, listeners showed significant adaptation to the accented speech, as indexed by increased word judgments on a lexical decision task. Adaptation also generalized to test words that had not been presented in the accented passage but that contained the shifted vowels. A control experiment showed that the adaptation effect was specific to the direction of the shift in the vowel space and not to a general relaxation of the criterion for what constitutes a good exemplar of the accented vowel category. Taken together, these results provide evidence for a context-specific vowel adaptation mechanism that enables a listener to adjust to the dialect of a particular talker.

13.
Interruption of phonological coding in conduction aphasia
A case study of conduction aphasia, investigating single word repetition, phonological coding, and short-term memory, is reported. Evidence from intact adults suggests that repetition can occur through either a lexical route or a direct auditory-articulatory link. For this conduction aphasic, E.A., the direct link was impaired, although the lexical route could be used to produce accurate single word repetition. Several experiments demonstrated a significant impairment in the generation and maintenance of an abstract phonological code. The consequences of a disruption of phonological coding on speech perception and on verbal short-term memory are discussed.

14.
In this paper we examine the evidence for human brain areas dedicated to visual or auditory word form processing by comparing cortical activation for auditory word repetition, reading, picture naming, and environmental sound naming. Both reading and auditory word repetition activated left lateralised regions in the frontal operculum (Broca's area), posterior superior temporal gyrus (Wernicke's area), posterior inferior temporal cortex, and a region in the mid superior temporal sulcus relative to baseline conditions that controlled for sensory input and motor output processing. In addition, auditory word repetition increased activation in a lateral region of the left mid superior temporal gyrus but critically, this area is not specific to auditory word processing; it is also activated in response to environmental sounds. There were no reading-specific activations, even in the areas previously claimed as visual word form areas: activations were either common to reading and auditory word repetition or common to reading and picture naming. We conclude that there is no current evidence for cortical sites dedicated to visual or auditory word form processing.

15.
Thalamic stuttering: a distinct clinical entity?
A 38-year-old right-handed male with no history of speech or language problems presented with neurogenic stuttering following an ischaemic lesion of the left thalamus. He stuttered severely in propositional speech (conversation, monologue, confrontation naming, and word retrieval) but only slightly in non-propositional speech (automatic speech, sound, word and sentence repetition, and reading aloud). It is suggested that thalamic stuttering may constitute a distinct clinical entity.

16.
张晶, 刘昌. 《心理科学进展》 (Advances in Psychological Science), 2013, 21(6): 1034–1040
Rhyme refers to the phenomenon in which a pair of words share an identical phonological structure from the last pronounced vowel to the end of the word. Research on rhyme processing falls into two main areas, rhyme recognition and rhyme generation, whose cognitive processes are similar and include stages of orthographic encoding, orthography-to-phonology conversion, phonological representation, and phonological segmentation. Examining the neural basis of rhyme processing from the perspectives of phonological and orthographic processing shows that the left superior temporal gyrus and left inferior frontal gyrus support phonological representation and phonological segmentation, respectively; the left fusiform gyrus participates in orthographic encoding; and effective orthography-to-phonology conversion depends on a neural network formed by the left inferior parietal lobule and inferior frontal gyrus. Future work should integrate findings obtained under different methods and tasks and examine rhyme generation in greater depth.
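The definition of rhyme used here — identical structure from the last pronounced vowel to the word end — can be illustrated with a deliberately crude orthographic sketch. This is an assumption-laden approximation: it operates on spelling rather than phonology, so silent letters and irregular spellings (e.g., "time"/"climb") will be misclassified; real rhyme detection would require phonological transcriptions. The function names `rime` and `rhymes` are illustrative, not from the paper.

```python
def rime(word, vowels="aeiou"):
    """Return the substring from the last vowel *letter* to the end of the
    word — a crude orthographic stand-in for the phonological rime."""
    word = word.lower()
    for i in range(len(word) - 1, -1, -1):
        if word[i] in vowels:
            return word[i:]
    return word  # no vowel letter found; fall back to the whole string

def rhymes(w1, w2):
    """Two words 'rhyme' under this approximation if their rimes match."""
    return rime(w1) == rime(w2)
```

For example, `rhymes("cat", "hat")` holds because both words end in the rime "at", while `rhymes("cat", "dog")` does not ("at" vs. "og"). The recognition/generation stages described in the abstract (orthographic encoding, orthography-to-phonology conversion, phonological segmentation) correspond to the steps this sketch collapses into simple string operations.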

17.
Thai, a language which exhibits a phonemic opposition in vowel length, allows us to compare temporal patterns in linguistic and nonlinguistic contexts. Functional MRI data were collected from Thai and English subjects in a speeded-response, selective attention paradigm as they performed same/different judgments of vowel duration and consonants (Thai speech) and hum duration (nonspeech). Activation occurred predominantly in left inferior prefrontal cortex in both speech tasks for the Thai group, but only in the consonant task for the English group. The Thai group exhibited activation in the left mid superior temporal gyrus in both speech tasks; the English group in the posterior superior temporal gyrus bilaterally. In the hum duration task, peak activation was observed bilaterally in prefrontal cortex for both groups. These crosslinguistic data demonstrate that encoding of complex auditory signals is influenced by their functional role in a particular language.

18.
Speech sound disorders (SSD) are the largest group of communication disorders observed in children. One explanation for these disorders is that children with SSD fail to form stable phonological representations when acquiring the speech sound system of their language due to poor phonological memory (PM). The goal of this study was to examine PM in individuals with histories of SSD employing functional MR imaging (fMRI). Participants were six right-handed adolescents with a history of early childhood SSD and seven right-handed matched controls with no history of speech and language disorders. We performed an fMRI study using an overt non-word repetition (NWR) task. Right-lateralized hypoactivation in the inferior frontal gyrus and middle temporal gyrus was observed. The former suggests a deficit in the phonological processing loop supporting PM, while the latter may indicate a deficit in speech perception. Both are cognitive processes involved in speech production. Bilateral hyperactivation observed in the premotor and supplementary motor cortex, inferior parietal cortex, supramarginal gyrus, and cerebellum raised the possibility of compensatory increases in cognitive effort or reliance on the other components of the articulatory rehearsal network and phonological store. These findings may be interpreted to support the hypothesis that individuals with SSD have a deficit in PM, and to suggest the involvement of compensatory mechanisms to counteract dysfunction of the normal network.

19.
Purpose: Adults who stutter speak more fluently during choral speech contexts than they do during solo speech contexts. The underlying mechanisms for this effect remain unclear, however. In this study, we examined the extent to which the choral speech effect depended on presentation of intact temporal speech cues. We also examined whether speakers who stutter followed choral signals more closely than typical speakers did.
Method: 8 adults who stuttered and 8 adults who did not stutter read 60 sentences aloud during a solo speaking condition and three choral speaking conditions (240 total sentences), two of which featured either temporally altered or indeterminate word duration patterns. Effects of these manipulations on speech fluency, rate, and temporal entrainment with the choral speech signal were assessed.
Results: Adults who stutter spoke more fluently in all choral speaking conditions than they did when speaking solo. They also spoke more slowly and exhibited closer temporal entrainment with the choral signal during the mid- to late stages of sentence production than the adults who did not stutter. Both groups entrained more closely with unaltered choral signals than with altered choral signals.
Conclusions: Findings suggest that adults who stutter make greater use of speech-related information in choral signals when talking than adults with typical fluency do. The presence of fluency facilitation during temporally altered choral speech and conversation babble, however, suggests that temporal/gestural cueing alone cannot account for fluency facilitation in speakers who stutter. Other potential fluency-enhancing mechanisms are discussed.
Educational Objectives: The reader will be able to (a) summarize competing views on stuttering as a speech timing disorder, (b) describe the extent to which adults who stutter depend on an accurate rendering of temporal information in order to benefit from choral speech, and (c) discuss possible explanations for fluency facilitation in the presence of inaccurate or indeterminate temporal cues.

20.
Two lexical decision task (LDT) experiments examined whether visual word recognition involves the use of a speech-like phonological code that may be generated via covert articulation. In Experiment 1, each visual item was presented with an irrelevant spoken word (ISW) that was either phonologically identical, similar, or dissimilar to it. An ISW delayed classification of a visual word when the two were phonologically similar, and it delayed classification of a pseudoword when it was identical to the base word from which the pseudoword was derived. In Experiment 2, an LDT was performed with and without articulatory suppression, and pseudowords consisted of regular pseudowords and pseudohomophones. Articulatory suppression decreased sound-specific ISW effects for words and regular pseudowords but not for pseudohomophones. These findings indicate that the processing of an orthographically legal letter sequence generally involves the specification of more than one sound code, one of which involves covert articulation.


Copyright©北京勤云科技发展有限公司  京ICP备09084417号