Similar Articles
20 similar articles found.
1.
The McGurk effect, where an incongruent visual syllable influences identification of an auditory syllable, does not always occur, suggesting that perceivers sometimes fail to use relevant visual phonetic information. We tested whether another visual phonetic effect, which involves the influence of visual speaking rate on perceived voicing (Green & Miller, 1985), would occur in instances when the McGurk effect does not. In Experiment 1, we established this visual rate effect using auditory and visual stimuli matching in place of articulation, finding a shift in the voicing boundary along an auditory voice-onset-time continuum with fast versus slow visual speech tokens. In Experiment 2, we used auditory and visual stimuli differing in place of articulation and found a shift in the voicing boundary due to visual rate when the McGurk effect occurred and, more critically, when it did not. The latter finding indicates that phonetically relevant visual information is used in speech perception even when the McGurk effect does not occur, suggesting that the incidence of the McGurk effect underestimates the extent of audio-visual integration.
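Boundary shifts of this kind are usually quantified by fitting a psychometric function to identification data. The sketch below is not taken from the paper; the continuum steps, response proportions, and the SciPy-based fitting routine are all illustrative assumptions. It simply locates the voicing boundary as the 50% crossover of a logistic fit to the proportion of "voiceless" responses along a VOT continuum, once per visual-rate condition.

    import numpy as np
    from scipy.optimize import curve_fit

    def logistic(vot_ms, boundary_ms, slope):
        # Proportion of "voiceless" responses as a function of VOT (ms).
        return 1.0 / (1.0 + np.exp(-slope * (vot_ms - boundary_ms)))

    vot_ms = np.array([10.0, 20.0, 30.0, 40.0, 50.0, 60.0, 70.0])            # hypothetical continuum steps
    p_voiceless_fast = np.array([0.02, 0.05, 0.30, 0.75, 0.95, 0.99, 1.00])  # fast visual rate (hypothetical)
    p_voiceless_slow = np.array([0.01, 0.03, 0.15, 0.45, 0.85, 0.97, 1.00])  # slow visual rate (hypothetical)

    popt_fast, _ = curve_fit(logistic, vot_ms, p_voiceless_fast, p0=[35.0, 0.2])
    popt_slow, _ = curve_fit(logistic, vot_ms, p_voiceless_slow, p0=[35.0, 0.2])
    print("fast-rate boundary ~ %.1f ms VOT" % popt_fast[0])
    print("slow-rate boundary ~ %.1f ms VOT" % popt_slow[0])
    print("rate-induced shift ~ %.1f ms" % (popt_slow[0] - popt_fast[0]))

A difference between the two fitted boundary parameters then corresponds to the rate-induced shift reported in the abstract.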

2.
Right hemisphere EEG sensitivity to speech
Recent speech perception work with normals and aphasics suggests that the right hemisphere may be more adept than the left at making the voicing discrimination, and the reverse for place of articulation. We examined this right hemisphere voicing effect with natural speech stimuli: stop consonants in pre-, mid-, and postvocalic contexts. Using a neuroelectric event-related potential paradigm, we found numerous effects indicating bilateral components reflecting the voicing and place contrast and unique right hemisphere discrimination of both voicing and place of articulation.

3.
The temporal characteristics of speech can be captured by examining the distributions of the durations of measurable speech components, namely speech segment durations and pause durations. However, several barriers prevent the easy analysis of pause durations: The first problem is that natural speech is noisy, and although recording contrived speech minimizes this problem, it also discards diagnostic information about cognitive processes inherent in the longer pauses associated with natural speech. The second issue concerns setting the distribution threshold, and consists of the problem of appropriately classifying pause segments as either short pauses reflecting articulation or long pauses reflecting cognitive processing, while minimizing the overall classification error rate. This article describes a fully automated system for determining the locations of speech–pause transitions and estimating the temporal parameters of both speech and pause distributions in natural speech. We use the properties of Gaussian mixture models at several stages of the analysis, in order to identify theoretical components of the data distributions, to classify speech components, to compute durations, and to calculate the relevant statistics.
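The abstract does not give the authors' exact model, so the sketch below illustrates only the general idea: fit a two-component Gaussian mixture to log pause durations and read a short-versus-long pause threshold off the posterior probabilities. The durations and the scikit-learn implementation are assumptions, not details from the paper.

    import numpy as np
    from sklearn.mixture import GaussianMixture

    # Hypothetical pause durations (in seconds) from a speech/pause segmentation.
    pause_durations = np.array([0.04, 0.06, 0.05, 0.08, 0.35, 0.60, 0.07, 1.20, 0.09, 0.45])

    # Fit a two-component mixture on the log scale, where duration data are closer to Gaussian.
    log_d = np.log(pause_durations).reshape(-1, 1)
    gmm = GaussianMixture(n_components=2, random_state=0).fit(log_d)

    # Treat the component with the smaller mean as "short" (articulatory) pauses,
    # the other as "long" (cognitive) pauses, and label each pause accordingly.
    short_comp = int(np.argmin(gmm.means_.ravel()))
    is_short = gmm.predict(log_d) == short_comp

    # Read off a single classification threshold where the posterior probability of the
    # short component crosses 0.5 (approximated on a grid of log durations).
    grid = np.linspace(log_d.min(), log_d.max(), 1000).reshape(-1, 1)
    post_short = gmm.predict_proba(grid)[:, short_comp]
    threshold_s = float(np.exp(grid[np.argmin(np.abs(post_short - 0.5))]))
    print("estimated short/long pause threshold ~ %.3f s" % threshold_s)

The same mixture machinery can in principle be reused at the other stages mentioned in the abstract (identifying distributional components and computing per-class duration statistics).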

4.
Auditory evoked responses (AER) were recorded from scalp locations over the left and right temporal regions in response to CVC words and nonsense syllables. Various components of the AER were found to vary systematically with changes in stimulus meaning. One such component reflected subcortical involvement in semantic processing. Other components reflected changes in voicing and place of articulation as well as hemisphere differences.

5.
The present acoustic-phonetic study explores whether voicing and devoicing assimilations of French fricatives are equivalent in magnitude and whether they operate similarly (i.e., complete vs. gradient, obligatory vs. optional, regressive vs. progressive). It concurrently assesses the contribution of speakers' articulation rate to the proportion of voicing (i.e., voicing ratios) in /s/ and /z/ embedded in fricative#stop sequences. Data analyses show that voicing and devoicing assimilation are similar in many regards: the absolute amounts of voicing change are equivalent in magnitude (0.77, 0.78) for the two processes, and changes in voicing ratios are accompanied by changes in fricative and preceding vowel durations. These concomitant alterations result in increased acoustic-phonetic similarity between the assimilated and the non-assimilated forms, suggesting that the two processes might be complete. In addition, the two processes operate regressively and across word boundaries. However, the data show that the voicing assimilation of /s/ is not rate dependent, which suggests that it might be obligatory, whereas the devoicing assimilation of /z/ is rate dependent, which suggests that it might be optional.
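As a concrete illustration of the voicing-ratio measure, the sketch below computes the proportion of voiced analysis frames within a fricative interval. The frame-based operationalization, frame step, and example values are assumptions for illustration, not details taken from the study.

    import numpy as np

    def voicing_ratio(frame_voiced, fric_start_s, fric_end_s, frame_step_s=0.010):
        # frame_voiced: boolean array of per-frame voicing decisions for the utterance;
        # fric_start_s / fric_end_s: fricative boundaries in seconds.
        times = np.arange(len(frame_voiced)) * frame_step_s
        in_fric = (times >= fric_start_s) & (times < fric_end_s)
        return float(np.mean(frame_voiced[in_fric]))

    # Hypothetical example: an /s/ that is partially voiced through assimilation.
    voiced_flags = np.array([1] * 8 + [0] * 12, dtype=bool)  # 20 frames, i.e. 200 ms of fricative
    print(voicing_ratio(voiced_flags, fric_start_s=0.0, fric_end_s=0.20))  # -> 0.4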

6.
The letters, numbers, and objects subtests of the Rapid Automatized Naming Tests (RAN) were given to 50 first- and second-grade students. Student performance on the three RAN subtests was audiotaped and subjected to postacquisition processing to distinguish articulation and interarticulation pause times. This study investigated (1) the relations between the articulation and pause durations associated with the 50 stimuli of each RAN subtest and (2) the relations between the pause and articulation latencies of the three RAN subtests and reading. For both first- and second-grade students, pause and articulation times for RAN letters and objects were not found to be reliably related, in contrast to the articulation and pause durations for RAN numbers. RAN subtest pause durations were differentially related to reading; however, articulation was rarely related to reading. The RAN letters pause time was the most robust predictor of decoding and reading comprehension, consistently predicting all first- and second-grade measures. The analysis supported the view that reading is predicted by speed of processing associated with letters, not general processing speed.

7.
We explored the degree to which the duration of acoustic cues contributes to the respective involvement of the two hemispheres in the perception of speech. To this end, we recorded the reaction time needed to identify monaurally presented natural French plosives with varying VOT values. The results show that a right-ear advantage is significant only when the phonetic boundary is close to the release burst, i.e., when the identification of the two successive acoustical events (the onset of voicing and the release from closure) needed to perceive a phoneme as voiced or voiceless requires rapid information processing. These results are consistent with the recent hypothesis that the left hemisphere is superior in the processing of rapidly changing acoustical information.

8.
In three experiments, a series of nonsense syllables ending in consonants was presented to adult subjects who had to discover or learn a rule classifying the syllables into two groups. The rule was based either on the voicing of the final consonants or on an arbitrary division of them. Subjects performed better with the voicing rule than with the arbitrary rule only when there was a straightforward relationship between the voicing rule and the plural-formation rule in English or, more generally, when voicing assimilation with an added consonant was involved and attention was focused on the sound and articulation of the syllables. We conclude that the voicing distinction is not ordinarily accessible and that individuals easily learn and use phonological rules involving voicing assimilation because of articulatory constraints on the production of consonant clusters.

9.
We examine semantic illusions from a dual-process perspective according to which the processes involved in detecting, or failing to detect, such illusions can be decomposed into controlled processes (checking the facts in the sentence against the information in memory) and automatic processes (the impression of truth that comes from the semantic associations between the elements in the sentence). These processes, we argue, make largely independent contributions to truth judgments about semantic-illusory sentences. The Process Dissociation Procedure was used to obtain estimates of these two kinds of processes. In Study 1, participants judged whether sentences were true or false while under high or low cognitive load. Cognitive load increased the rate of semantic illusions by specifically affecting controlled processing but not automatic processing. In Study 2, a prior paired-associate learning task also increased the rate of semantic illusions, but it did so by specifically affecting automatic processing, not controlled processing.
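The abstract does not spell out the estimation equations. The sketch below assumes the classic Jacoby (1991) process-dissociation logic, in which controlled (C) and automatic (A) contributions are derived from response rates in conditions where the two processes work together versus in opposition; the condition names and proportions are illustrative, not the authors' data.

    # Process-dissociation logic (assumed, generic form):
    #   P(together)   = C + A * (1 - C)
    #   P(opposition) = A * (1 - C)
    # so that C = P(together) - P(opposition) and A = P(opposition) / (1 - C).

    def process_dissociation(p_together, p_opposition):
        # Controlled contribution: the difference between the two condition rates.
        c = p_together - p_opposition
        # Automatic contribution: opposition-consistent responses occur only when control fails.
        a = p_opposition / (1.0 - c) if c < 1.0 else float("nan")
        return c, a

    # Hypothetical response proportions under low cognitive load.
    print(process_dissociation(0.80, 0.30))  # -> approximately (0.5, 0.6)

Comparing the C and A estimates across load (Study 1) or learning (Study 2) conditions is what allows the selective effects described in the abstract to be attributed to one process or the other.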

10.
Previous work has demonstrated that the graded internal structure of phonetic categories is sensitive to a variety of contextual factors. One such factor is place of articulation: The best exemplars of voiceless stop consonants along auditory bilabial and velar voice onset time (VOT) continua occur over different ranges of VOTs (Volaitis & Miller, 1992). In the present study, we exploited the McGurk effect to examine whether visual information for place of articulation also shifts the best-exemplar range for voiceless consonants, following Green and Kuhl's (1989) demonstration of effects of visual place of articulation on the location of voicing boundaries. In Experiment 1, we established that /p/ and /t/ have different best-exemplar ranges along auditory bilabial and alveolar VOT continua. We then found, in Experiment 2, a similar shift in the best-exemplar range for /t/ relative to that for /p/ when there was a change in visual place of articulation, with auditory place of articulation held constant. These findings indicate that the perceptual mechanisms that determine internal phonetic category structure are sensitive to visual, as well as to auditory, information.

11.
Several studies have indicated that dyslexics show a deficit in speech perception (SP). The main purpose of this research was to characterize the development of SP in dyslexics and normal readers matched by grade from the 2nd to the 6th grade of primary school, and to determine whether the phonetic contrasts relevant for SP change during development, taking individual differences into account. The two groups' achievement was compared on three phonetic tasks: voicing contrast, place-of-articulation contrast, and manner-of-articulation contrast. The results showed that the dyslexic readers performed more poorly than the normal readers in SP. For the place-of-articulation contrast, the developmental pattern was similar in both groups, but this was not the case for voicing and manner of articulation. Manner of articulation had the greatest influence on SP and showed greater development than the other contrast tasks in both groups.

12.
In two experiments, we explored the degree to which sentence context effects operate at a lexical or conceptual level by examining the processing of mixed-language sentences by fluent Spanish-English bilinguals. In Experiment 1, subjects’ eye movements were monitored while they read English sentences in which sentence constraint, word frequency, and language of target word were manipulated. A frequency × constraint interaction was found when target words appeared in Spanish, but not in English. First fixation durations were longer for high-frequency Spanish words when these were embedded in high-constraint sentences than in low-constraint sentences. This result suggests that the conceptual restrictions produced by the sentence context were met, but that the lexical restrictions were not. The same result did not occur for low-frequency Spanish words, presumably because the slower access of low-frequency words provided more processing time for the resolution of this conflict. Similar results were found in Experiment 2 using rapid serial visual presentation when subjects named the target words aloud. It appears that sentence context effects are influenced by both semantic/conceptual and lexical information.

13.
Research is reviewed concerning the performance of several neurological groups on the perception and production of voicing contrasts in speech. Patients with cerebellar damage, Parkinson's disease, specific language impairment, Broca's aphasia, apraxia, and Wernicke's aphasia have been reported to be impaired in the perception and articulation of voicing. The types of deficits manifested by these neurologically impaired groups in creating and discriminating voicing contrasts are discussed, and the respective contributions of separate neural areas are identified. A model is presented specifying the level of phonemic processing thought to be impaired for each patient group, and critical tests of the model's predictions are identified.

14.
Native users of American Sign Language were asked to manipulate sentences in four different ways: sign them at a slow rate, parse them, make relatedness judgments of pairs of signs taken from each sentence, and recall the sentences. The data obtained from these four tasks (pause durations, parsing values, indices of relatedness, and probe latencies) were used to construct hierarchical performance structures for each of the sentences. The resulting structures were highly similar across tasks; that is, performance structures are not task specific. The four measures at each sign boundary in each sentence were well predicted by a performance model, elaborated by Grosjean, Grosjean, and Lane for speech, that combines a parsing measure with a symmetry measure. Thus, performance structures appear to be founded in the processing of language, be it visual or oral, and not in the properties of any particular communication modality.

15.
Three experiments demonstrated that the pattern of changes in articulatory rate in a precursor phrase can affect the perception of voicing in a syllable-initial prestress velar stop consonant. Fast and slow versions of a 10-word precursor phrase were recorded, and sections from each version were combined to produce several precursors with different patterns of change in articulatory rate. Listeners judged the identity of a target syllable, selected from a 7-member /gi/-/ki/ voice-onset-time (VOT) continuum, that followed each precursor phrase after a variable brief pause. The major results were: (a) articulatory-rate effects were not restricted to the target syllable's immediate context; (b) rate effects depended on the pattern of rate changes in the precursor, not on the amount of fast or slow speech or the proximity of fast or slow speech to the target syllable; and (c) shortening of the pause (or closure) duration led to a shortening of VOT boundaries rather than a lengthening, as previously found in this phonetic context. The results are explained in terms of the role of dynamic temporal expectancies in determining the response to temporal information in speech, and implications for theories of extrinsic vs. intrinsic timing are discussed.

16.
Adult subjects attempted to identify structures (words and constituents) in sentences of a language they did not know. They heard each sentence twice: once with a pause interrupting a structural component and once with a pause separating different structural components. They were asked to choose the version that sounded more natural. An experimental group of subjects who had been previously exposed to a spoken passage in the same language as the test sentences was more successful in identifying structures of the sentences than was the control group with previous exposure to another language. This result was interpreted as demonstrating that language structure may be partially acquired during a brief exposure without reliance on meaning. It was also noted that the experimental group identified constituents more accurately than words. This result suggested that constituents, more than words, function as acquisitional units of language.

17.
常欣  王沛 《心理学报》2013,45(7):773-782
Two groups of Chinese college students, intermediate and highly proficient learners of English as a second language, served as participants (N = 40; 27 women, 13 men; ages 20-29 years, mean age 23.88 years). Using ERP methodology, with literally translatable and freely translatable English passive sentences as materials, the study compared behavioral data and multidimensional ERP measures across four conditions: no violation; "syntactic violation 1" (an incorrect past-participle form of the verb); "syntactic violation 2" (the past participle erroneously replaced by the verb's base form); and "syntactic violation 3" (the past participle erroneously replaced by the present-participle form). On this basis, it examined how L2 proficiency and cross-language similarity of syntactic structure affect the syntactic processing of English passive sentences by Chinese-English bilinguals. The results showed that the highly proficient group's reaction times and accuracy were better overall than the intermediate group's. Syntactic errors of differing difficulty directly affected passive-sentence processing: responses were fastest to overtly erroneous syntactic information and slowest to "locally correct syntactic information"; accuracy was highest for overtly erroneous syntactic information and lowest for errors in the most fundamental syntactic structure. Errors in the most fundamental syntactic structure elicited the largest P600, and the no-violation condition elicited the smallest. The highly proficient group showed the largest P600 effect for errors in the most fundamental syntactic structure, whereas the intermediate group's P600 effect was unaffected by the type of syntactic error. The behavioral measures supported a syntactic-similarity effect: literally translatable sentences were responded to faster and more accurately, and freely translatable sentences more slowly and less accurately, and this effect was more pronounced in the intermediate group. However, the pattern of neural activity showed no effect of syntactic-structure similarity, indicating that L2 proficiency plays the more prominent role in the processing of English passive sentences.

18.
The present study was designed to investigate whether an accelerated reading rate influences the way adult readers process sentence components with different grammatical functions. Participants were 20 male native Hebrew-speaking college students aged 18-27 years. The processing of normal word strings was examined during word-by-word reading of sentences with subject-verb-object (SVO) syntactic structure in self-paced and fast-paced conditions. In both reading conditions, the N100 (early negative) and P300 (late positive) event-related potential (ERP) components were sensitive to such internal processes as recognition of words' syntactic functions. However, an accelerated reading rate influenced the way in which readers processed these sentence elements. In the self-paced condition, a predicate-centered (morphologically based) strategy was used, whereas in the fast-paced condition an approach closer to a word-order strategy was used. This new pattern was correlated with shorter latencies and larger amplitudes of both the N100 and P300 ERP components for most sentence elements. These changes seemed to be related to improved working memory functioning and maximized attention.

19.

Two selective adaptation experiments were conducted in order to investigate certain properties of feature detector systems sensitive to the information underlying the voicing distinction (Experiment I) and the place of articulation distinction (Experiment II). The experimental paradigm combined binaural adaptation with a dichotic testing procedure. The stimuli were nonboundary, good exemplars of the respective phonetic categories. In both experiments, there was a systematic shift in performance following adaptation on those trials on which the stimulus in one ear had the adapted feature value and the stimulus in the other ear had the unadapted feature value. On these trials, the adapted feature value was relatively less effective in competing for processing with the unadapted value in the opposing ear (compared to preadaptation performance). Since these results were obtained using nonboundary stimuli, it was argued that (1) adaptation affects the relevant detector along its range of operation or sensitivity, and not simply at the phonetic boundary, and that (2) information regarding the relative output level of the detector, as well as which detector was more strongly excited, must be available at the site of interaction of the two stimuli.


20.
Sentence comprehension is a complex task that involves both language-specific processing components and general cognitive resources. Comprehension can be made more difficult by increasing the syntactic complexity or the presentation rate of a sentence, but it is unclear whether the same neural mechanism underlies both of these effects. In the current study, we used event-related functional magnetic resonance imaging (fMRI) to monitor neural activity while participants heard sentences containing a subject-relative or object-relative center-embedded clause presented at three different speech rates. Syntactically complex object-relative sentences activated left inferior frontal cortex across presentation rates, whereas sentences presented at a rapid rate recruited frontal brain regions such as anterior cingulate and premotor cortex, regardless of syntactic complexity. These results suggest that dissociable components of a large-scale neural network support the processing of syntactic complexity and speech presented at a rapid rate during auditory sentence processing.
