Similar Literature
20 similar documents found (search time: 312 ms).
1.
For native speakers of English and several other languages, preceding vocalic duration and F1 offset frequency are two of the cues that convey the stop consonant voicing distinction in word-final position. For speakers learning English as a second language, there are indications that use of vocalic duration, but not F1 offset frequency, may be hindered by a lack of experience with phonemic (i.e., lexical) vowel length (the “phonemic vowel length account”: Crowther & Mann, 1992). In this study, native speakers of Arabic, a language that includes a phonemic vowel length distinction, were tested for their use of vocalic duration and F1 offset in production and perception of the English consonant-vowel-consonant forms pod and pot. The phonemic vowel length hypothesis predicts that Arabic speakers should use vocalic duration extensively in production and perception. On the contrary, Experiment 1 revealed that, consistent with Flege and Port’s (1981) findings, they produced only slightly (but significantly) longer vocalic segments in their pod tokens. It further indicated that their productions showed a significant variation in F1 offset as a function of final stop voicing. Perceptual sensitivity to vocalic duration and F1 offset as voicing cues was tested in two experiments. In Experiment 2, we employed a factorial combination of these two cues and a finely spaced vocalic duration continuum. Arabic speakers did not appear to be very sensitive to vocalic duration, but they were about as sensitive as native English speakers to F1 offset frequency. In Experiment 3, we employed a one-dimensional continuum of more widely spaced stimuli that varied only in vocalic duration. Arabic speakers showed native-English-like sensitivity to vocalic duration. An explanation based on the perceptual anchor theory of context coding (Braida et al., 1984; Macmillan, 1987; Macmillan, Braida, & Goldberg, 1987) and phoneme perception theory (Schouten & Van Hessen, 1992) is offered to reconcile the apparently contradictory perceptual findings. The explanation does not attribute native-English-like voicing perception to the Arabic subjects. The findings in this study call for a modification of the phonemic vowel length hypothesis.

2.
The processing of consonants was investigated in a series of experiments using a recognition masking paradigm. Experiment I investigated the effects of target duration, interstimulus interval, forward vs. backward masking, and the phonetic feature composition of the target and mask on accuracy of target identification. Experiment II assessed consonant processing when the target and mask were presented dichotically in order to separate central and peripheral components of consonant masking. Experiment III investigated the effects of mask duration on consonant processing. Substantial masking was found in backward and forward diotic and dichotic conditions. Evidence for target-mask interaction at the level of phonetic features was also found.

3.
In three experiments, we determined how perception of the syllable-initial distinction between the stop consonant [b] and the semivowel [w], when cued by duration of formant transitions, is affected by parts of the sound pattern that occur later in time. For the first experiment, we constructed four series of syllables, similar in that each had initial formant transitions ranging from one short enough for [ba] to one long enough for [wa], but different in overall syllable duration. The consequence in perception was that, as syllable duration increased, the [b-w] boundary moved toward transitions of longer duration. Then, in the second experiment, we increased the duration of the sound by adding a second syllable, [da] (thus creating [bada-wada]), and observed that lengthening the second syllable also shifted the perceived [b-w] boundary in the first syllable toward transitions of longer duration; however, this effect was small by comparison with that produced when the first syllable was lengthened equivalently. In the third experiment, we found that altering the structure of the syllable had an effect that is not to be accounted for by the concomitant change in syllable duration: lengthening the syllable by adding syllable-final transitions appropriate for the stop consonant [d] (thus creating [bad-wad]) caused the perceived [b-w] boundary to shift toward transitions of shorter duration, an effect precisely opposite to that produced when the syllable was lengthened to the same extent by adding steady-state vowel. We suggest that, in all these cases, the later-occurring information specifies rate of articulation and that the effect on the earlier-occurring cue reflects an appropriate perceptual normalization.

4.
The processing of letter-position information in randomly arranged consonant strings was investigated using a masked prime variant of the alphabetic decision (letter/nonletter classification) task. In Experiment 1, primes were uppercase consonant trigrams (e.g., FMH) and targets were two uppercase Xs accompanied by the target letter or a nonletter (e.g., XMX, X%X). Response times were systematically faster when target letters were present in the prime string than when target letters were not present in the prime string. These constituent letter-priming effects were significantly stronger when the target letter appeared in the same position in the prime and target stimuli. This contrast between position-specific and position-independent priming was accentuated when subjects responded only when all the characters in the target string were letters (multiple alphabetic decision) in Experiments 2 and 3. In Experiment 4, when prime exposure duration was varied, it was found that position-specific priming develops earlier than position-independent priming. Finally, Experiment 5 ruled out a perceptual-matching interpretation of these results. An interpretation is offered in terms of position-specific and position-independent letter-detector units in an interactive-activation framework.

5.

Consonants and vowels play different roles in speech perception: listeners rely more heavily on consonant information than on vowel information when distinguishing between words. This reliance on consonants for word identification is the consonant bias (Nespor et al., Ling 2:203–230, 2003). Several factors modulate infants’ development of the consonant bias, including fine-grained temporal processing ability and native language exposure [for review, see Nazzi et al. (Curr Direct Psychol Sci 25:291–296, 2016)]. A rat model demonstrated that mature fine-grained temporal processing alone cannot account for consonant bias emergence; linguistic exposure is also necessary (Bouchon & Toro, An Cog 22:839–850, 2019). This study tested domestic dogs, who have similarly fine-grained temporal processing but more language exposure than rats, to assess whether a minimal lexicon and a small degree of regular linguistic exposure can allow for consonant bias development. Dogs demonstrated a vowel bias rather than a consonant bias, preferring their own name over a vowel-mispronounced version of their name, but not over a consonant-mispronounced version. This is the pattern seen in young infants (Bouchon et al., Dev Sci 18:587–598, 2015) and rats (Bouchon & Toro, An Cog 22:839–850, 2019). In a follow-up study, dogs treated a consonant-mispronounced version of their name similarly to their actual name, further suggesting that dogs do not treat consonant differences as meaningful for word identity. These results support the findings of Bouchon and Toro (An Cog 22:839–850, 2019), suggesting that there may be a default preference for vowel information over consonant information when identifying word forms, and that the consonant bias may be a human-exclusive tool for language learning.


6.
The perception of consonant clusters that are phonotactically illegal word initially in English (e.g., /tl/, /sr/) was investigated to determine whether listeners’ phonological knowledge of the language influences speech processing. Experiment 1 examined whether the phonotactic context effect (Massaro & Cohen, 1983), a bias toward hearing illegal sequences (e.g., /tl/) as legal (e.g., /tr/), is more likely due to knowledge of the legal phoneme combinations in English or to a frequency effect. In Experiment 2, Experiment 1 was repeated with the clusters occurring word medially to assess whether phonotactic rules of syllabification modulate the phonotactic effect. Experiment 3 examined whether vowel epenthesis, another phonological process, might also affect listeners’ perception of illegal sequences as legal by biasing them to hear a vowel between the consonants of the cluster (e.g., /talee/). Results suggest that knowledge of the phonotactically permissible sequences in English can affect phoneme processing in multiple ways.
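The phonotactic context effect described in this abstract can be pictured as a trade-off between acoustic evidence and knowledge of permissible sequences. The sketch below is a toy Bayesian illustration of that idea, not the authors' model; the likelihood and prior values are invented for demonstration.

```python
# Toy Bayesian sketch of a phonotactic bias: ambiguous acoustic evidence
# between /tl/ and /tr/ is resolved by a prior that assigns near-zero
# probability to word-initially illegal clusters. All numbers are
# illustrative assumptions.
def posterior(likelihoods, priors):
    unnorm = {c: likelihoods[c] * priors[c] for c in likelihoods}
    total = sum(unnorm.values())
    return {c: p / total for c, p in unnorm.items()}

likelihoods = {"tl": 0.6, "tr": 0.4}           # acoustics slightly favour /tl/
word_initial_prior = {"tl": 0.02, "tr": 0.98}  # /tl/ is illegal word initially
print(posterior(likelihoods, word_initial_prior))  # /tr/ wins despite the acoustics
```

Raising the prior on /tl/ to reflect a legal word-medial syllabification would let the same acoustics be reported as /tl/, which is one way to think about the word-medial manipulation in Experiment 2.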

7.
When listening to speech, do we recognize syllables or phonemes? Information concerning the organization of the decisions involved in identifying a syllable may be elicited by allowing separate phonetic decisions regarding the vowel and consonant constituents to be controlled by the same acoustic information and by looking for evidence of interaction between these decisions. The duration and first formant frequency of the steady-state vocalic segment in synthesized consonant-vowel-consonant syllables were varied to result in responses of /bεd/, /bæd/, /bεt/, and /bæt/. The fact that the duration of the steady-state segment controls both decisions implies that that segment must be included in its entirety in the signal intervals on which the two decisions are based. For most subjects, no further significant interaction between the vocalic and consonantal decision is found beyond the fact that they are both affected by changes in the duration parameter. A model of two separate and independent phonetic decisions based on overlapping ranges of the signal adequately accounts for these data, and no explicit syllable level recognition needs to be introduced.
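The independence model favoured by these data can be made concrete in a few lines. The sketch below is a minimal illustration under assumed parameter values (the logistic slopes and category boundaries are invented, not fitted to the study's data): each phonetic decision is a separate function of the shared cues, and joint response probabilities are the product of the marginals.

```python
# Toy independence model: the vowel decision (/ε/ vs. /æ/) and the final-stop
# voicing decision (/t/ vs. /d/) are separate logistic functions of the same
# steady-state duration and F1 cues; joint responses multiply.
import numpy as np

def logistic(x):
    return 1.0 / (1.0 + np.exp(-x))

def p_ae(duration_ms, f1_hz):
    # P(/æ/): a longer steady state and a higher F1 both favour /æ/ over /ε/.
    return logistic(0.03 * (duration_ms - 150) + 0.01 * (f1_hz - 650))

def p_voiced(duration_ms):
    # P(final /d/): a longer vocalic segment favours the voiced stop.
    return logistic(0.04 * (duration_ms - 180))

def joint_probs(duration_ms, f1_hz):
    """Independence: P(vowel, consonant) = P(vowel) * P(consonant)."""
    a, d = p_ae(duration_ms, f1_hz), p_voiced(duration_ms)
    return {"bad": a * d, "bat": a * (1 - d),
            "bed": (1 - a) * d, "bet": (1 - a) * (1 - d)}

print(joint_probs(duration_ms=200, f1_hz=700))  # long, high-F1 token: "bad" wins
```

Because duration enters both decisions, the two response patterns covary with the stimulus even though no decision-level interaction exists, which is the pattern such a model needs to capture.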

8.
Auditory evoked responses (AER) were recorded from frontal, temporal, and parietal scalp regions to a series of consonant-vowel syllables which varied in the duration of the consonant transition. Multivariate analyses of the AER waveforms identified one component of the AERs occurring only over right hemisphere regions which discriminated between differences in transition durations. A second component detected over only left hemisphere areas discriminated differences in place of articulation. These data are consistent with previous behavioral and electrophysiological reports that the right hemisphere is sensitive to temporal discriminations.

9.
This study explored a number of temporal (durational) parameters of consonant and vowel production in order to determine whether the speech production impairments of aphasics are the result of the same or different underlying mechanisms and in particular whether they implicate deficits that are primarily phonetic or phonological in nature. Detailed analyses of CT scan lesion data were also conducted to explore whether more specific neuroanatomical correlations could be made with speech production deficits. A series of acoustic analyses were conducted including voice-onset time, intrinsic and contrastive fricative duration, and intrinsic and contrastive vowel duration as produced by Broca's aphasics with anterior lesions (A patients), nonfluent aphasics with anterior and posterior lesions (AP patients), and fluent aphasics with posterior lesions (P patients). The constellation of impairments for the anterior aphasics including both the A and AP patients suggests that their disorder primarily reflects an inability to implement particular types of articulatory gestures or articulatory parameters rather than an inability to implement particular phonetic features. They display impairments in the implementation of laryngeal gestures for both consonant and vowel production. These patterns seem to relate to particular anatomical sites involving Broca's area, the anterior limb of the internal capsule, and the lowest motor cortex areas for larynx and tongue. The posterior patients also show evidence of subtle phonetic impairments suggesting that the neural instantiation of speech may require more extensive involvement, including the perisylvian area, than previously suggested.

10.
The question of whether preference for consonance is rooted in acoustic properties important to the auditory system or is acquired through enculturation has not yet been resolved. Two-month-old infants prefer consonant over dissonant intervals, but it is possible that this preference is rapidly acquired through exposure to music soon after birth or in utero. Controlled-rearing studies with animals can help shed light on this question because such studies allow researchers to distinguish between biological predispositions and learned preferences. In the research reported here, we found that newly hatched domestic chicks show a spontaneous preference for a visual imprinting object associated with consonant sound intervals over an identical object associated with dissonant sound intervals. We propose that preference for harmonic relationships between frequency components may be related to the prominence of harmonic spectra in biological sounds in natural environments.

11.
Despite many attempts to define the major unit of speech perception, none has been generally accepted. In a unique study, Mermelstein (1978) claimed that consonants and vowels are the appropriate units because a single piece of information (duration, in this case) can be used for one distinction without affecting the other. In a replication, this apparent independence was found, instead, to reflect a lack of statistical power: The vowel and consonant judgments did interact. In another experiment, interdependence of two phonetic judgments was found in responses based on the fricative noise and the vocalic formants of a fricative-vowel syllable. These results show that each judgment made on speech signals must take into account other judgments that compete for information in the same signal. An account is proposed that takes segments as the primary units, with syllables imposing constraints on the shape they may take.

12.
This study was designed to investigate whether persons who stutter (PWS) differ from persons who do not stutter (PWNS) in the coproduction of different types of consonant clusters, as measured in the number of dysfluencies and incorrect speech productions, in speech reaction times, and in word durations. Based on the Gestural Phonology Model of Browman and Goldstein, two types of consonant clusters were formed: homorganic and heterorganic clusters, both intra-syllabic (CVCC) and inter-syllabic (CVC#CVC). Overall, the results indicated that homorganic clusters elicited more incorrect speech productions and longer reaction times than the heterorganic clusters, but there was no difference between the homorganic and the heterorganic clusters in the word duration data. Persons who stutter showed a higher percentage of dysfluencies and a higher percentage of incorrect speech productions than PWNS, but there were no main group effects in reaction times or word durations. However, there was a significant three-way interaction effect between group, cluster type, and cluster place: homorganic clusters elicited longer reaction times than heterorganic clusters, but only in the inter-syllabic condition and only for persons who stutter. These results suggest that the production of two consonants with the same place of articulation across a syllable boundary puts higher demands on motor planning and/or initiation than producing the same cluster at the end of a syllable, in particular for PWS. The findings are discussed in light of current theories on speech motor control in stuttering. EDUCATIONAL OBJECTIVES: The reader will be able to: (1) describe the effect of gestural overlap between consonant clusters on the speech reaction time and word duration of people who do and do not stutter, and (2) identify the literature in the field of gestural overlap between consonant clusters.

13.
The mismatch negativity (MMN) component of the auditory event-related potential was used to determine the effect of the native language, Russian, on the processing of speech-sound duration in a second language, Finnish, which uses duration as a cue for phonological distinctions. The native-language effect was compared for Finnish vowels that either can or cannot be categorized using the Russian phonological system. The results showed that the duration-change MMN for the Finnish sounds that could be categorized through Russian was reduced in comparison with that for the Finnish sounds having no Russian equivalent. For the Finnish sounds that can be mapped through the Russian phonological system, facilitation of duration processing may be inhibited by the native Russian language. However, for the sounds that have no Russian equivalent, new vowel categories independent of the native Russian language have apparently been established, enabling native-like duration processing of Finnish.

14.
The speech behaviour of 11 patients with exclusively or almost exclusively phonological disorders (2 anarthric patients and 9 with aphasia of the Broca type) was tested with a one-word repetition procedure. This paper presents only the analysis of the consonant substitutions and of the consonant variables affected by substitution. The results can be summarized as follows: 1. There is a negative correlation between the frequency with which consonant phonemes are substituted and the frequency of their use by normal subjects. 2. The group of phonemes that is acquired latest in the child's speech development is also the most affected by substitutions. 3. In the majority of substitutions we find a differentiation in one or more variables between the substituent and the substitute. 4. The variables that are influenced by substitutions show an internal hierarchy in the system of language.

15.
The present study investigated the articulatory implementation deficits of Broca's and Wernicke's aphasics and their potential neuroanatomical correlates. Five Broca's aphasics, two Wernicke's aphasics, and four age-matched normal speakers produced consonant-vowel-(consonant) real word tokens consisting of [m, n] followed by [i, e, a, o, u]. Three acoustic measures were analyzed, corresponding to different properties of articulatory implementation: murmur duration (a measure of timing), amplitude of the first harmonic at consonantal release (a measure of articulatory coordination), and murmur amplitude over time (a measure of laryngeal control). Results showed that Broca's aphasics displayed impairments in all of these parameters, whereas Wernicke's aphasics only exhibited greater variability in the production of two of the parameters. The lesion extent data showed that damage in either Broca's area or the insular cortex was not predictive of the severity of the speech output impairment. Instead, lesions in the upper and lower motor face areas and the supplementary motor area resulted in the most severe implementation impairments. For the Wernicke's aphasics, the posterior areas (supramarginal gyrus, parietal, and sensory) appear to be involved in the retrieval and encoding of lexical forms for speech production, resulting in increased variability in speech production.
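One of the acoustic measures named above, the amplitude of the first harmonic (H1) at consonantal release, can be approximated from a short-time spectrum. The sketch below shows one plausible way to compute it with numpy; the frame length, window, and f0 value are assumptions for illustration, not the authors' measurement procedure.

```python
# Rough H1 estimate: spectral peak magnitude within +/-20 Hz of the known f0.
import numpy as np

def first_harmonic_amplitude_db(frame, sample_rate, f0_hz):
    """Return the H1 amplitude in dB for one analysis frame."""
    windowed = frame * np.hanning(len(frame))
    spectrum = np.abs(np.fft.rfft(windowed))
    freqs = np.fft.rfftfreq(len(frame), d=1.0 / sample_rate)
    band = (freqs > f0_hz - 20) & (freqs < f0_hz + 20)
    return 20.0 * np.log10(spectrum[band].max() + 1e-12)

# Example: a synthetic 120 Hz nasal murmur sampled at 16 kHz (40 ms frame).
sr, f0 = 16000, 120.0
t = np.arange(int(0.04 * sr)) / sr
frame = np.sin(2 * np.pi * f0 * t) + 0.3 * np.sin(2 * np.pi * 2 * f0 * t)
print(first_harmonic_amplitude_db(frame, sr, f0))
```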

16.
Thai, a language which exhibits a phonemic opposition in vowel length, allows us to compare temporal patterns in linguistic and nonlinguistic contexts. Functional MRI data were collected from Thai and English subjects in a speeded-response, selective attention paradigm as they performed same/different judgments of vowel duration and consonants (Thai speech) and hum duration (nonspeech). Activation occurred predominantly in left inferior prefrontal cortex in both speech tasks for the Thai group, but only in the consonant task for the English group. The Thai group exhibited activation in the left mid superior temporal gyrus in both speech tasks; the English group in the posterior superior temporal gyrus bilaterally. In the hum duration task, peak activation was observed bilaterally in prefrontal cortex for both groups. These crosslinguistic data demonstrate that encoding of complex auditory signals is influenced by their functional role in a particular language.

17.
18.
Although speechreading can be facilitated by auditory or tactile supplements, the process that integrates cues across modalities is not well understood. This paper describes two “optimal processing” models for the types of integration that can be used in speechreading consonant segments and compares their predictions with those of the Fuzzy Logical Model of Perception (FLMP, Massaro, 1987). In “pre-labelling” integration, continuous sensory data is combined across modalities before response labels are assigned. In “post-labelling” integration, the responses that would be made under unimodal conditions are combined, and a joint response is derived from the pair. To describe pre-labelling integration, confusion matrices are characterized by a multidimensional decision model that allows performance to be described by a subject's sensitivity and bias in using continuous-valued cues. The cue space is characterized by the locations of stimulus and response centres. The distance between a pair of stimulus centres determines how well two stimuli can be distinguished in a given experiment. In the multimodal case, the cue space is assumed to be the product space of the cue spaces corresponding to the stimulation modes. Measurements of multimodal accuracy in five modern studies of consonant identification are more consistent with the predictions of the pre-labelling integration model than the FLMP or the post-labelling model.
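To make the contrast between the models concrete, the sketch below compares the FLMP's multiplicative combination of unimodal support values with the pre-labelling model's product-space rule, in which squared unimodal sensitivities add. The numeric support and d' values are invented for illustration and are not drawn from the five studies discussed.

```python
# FLMP vs. pre-labelling integration, two-alternative case.
import math

def flmp(a, v):
    """FLMP: multiplicative combination of unimodal supports in [0, 1]."""
    return (a * v) / (a * v + (1 - a) * (1 - v))

def prelabel_dprime(d_a, d_v):
    """Pre-labelling: stimulus centres live in the product of the unimodal
    cue spaces, so squared distances (d' values) add across modalities."""
    return math.sqrt(d_a ** 2 + d_v ** 2)

print(flmp(0.8, 0.7))             # ~0.90: bimodal support exceeds either alone
print(prelabel_dprime(1.5, 1.0))  # ~1.80: bimodal sensitivity exceeds either alone
```

Post-labelling integration, by contrast, combines the unimodal response labels after the fact and therefore generally predicts lower bimodal accuracy than the pre-labelling rule.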

19.
A series of three experiments was performed in a classroom setting with small groups of young children with severe articulation problems. Variations on a basic token reinforcement procedure were demonstrated in each experiment. A combined multiple baseline/reversal design showed effective experimenter control of rates of correct and incorrect consonant sound articulation in all cases. In addition, the data in each experiment showed the problems of obtaining stimulus generalization of the high rates of correct articulation to non-training settings. The third experiment demonstrated a procedure for producing such generalization by making each child a discriminative stimulus for correct articulation by the other child, thus maintaining high levels of correct articulation for each child when in the presence of the other.

20.
于文勃  梁丹丹 《心理科学进展》2018,26(10):1765-1774
Words are the basic structural units of language, and segmenting the speech stream into words is a key step in language processing. Cues for word segmentation in spoken language come from three sources: phonology, semantics, and syntax. Phonological cues include probabilistic information, phonotactic rules, and prosodic information; prosodic information further comprises lexical stress, duration, and pitch. The use of these cues is gradually mastered in the earliest stages of language exposure and shows some specificity across language backgrounds. Syntactic and semantic cues are higher-level cue mechanisms that operate mainly in the later stages of word segmentation. Future research should examine word segmentation cues in spoken language processing from two perspectives: lifespan language development and language specificity.
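The probabilistic information mentioned in this abstract is commonly operationalized as the transitional probability between adjacent syllables, which dips at word boundaries. The sketch below illustrates that computation; the toy syllable stream and the 0.75 threshold are assumptions for demonstration only.

```python
# Transitional probability (TP) segmentation: TP(x -> y) = count(xy) / count(x).
from collections import Counter

def transitional_probs(syllables):
    pairs = list(zip(syllables, syllables[1:]))
    pair_counts = Counter(pairs)
    first_counts = Counter(p[0] for p in pairs)
    return {p: pair_counts[p] / first_counts[p[0]] for p in pair_counts}

# Two made-up "words" (bi-da-ku and pa-do-ti) repeated in varying order:
stream = "bi da ku pa do ti bi da ku bi da ku pa do ti pa do ti".split()
tps = transitional_probs(stream)
boundaries = [pair for pair, tp in tps.items() if tp < 0.75]
print(boundaries)  # low-TP pairs span word boundaries, e.g. ('ku', 'pa')
```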
