Similar Articles
20 similar articles found (search time: 46 ms)
1.
Debates concerning the types of representations that aid reading acquisition have often been influenced by the relationship between measures of early phonological awareness (the ability to process speech sounds) and later reading ability. Here, a complementary approach is explored, analyzing how the functional utility of different representational units, such as whole words, bodies (letters representing the vowel and final consonants of a syllable), and graphemes (letters representing a phoneme), may change as the number of words that can be read gradually increases. Utility is measured by applying a Simplicity Principle to the problem of mapping from print to sound; that is, by assuming that the "best" representational units for reading are those that allow the mapping from print to sound to be encoded as efficiently as possible. Results indicate that when only a small number of words can be read, whole-word representations are most useful, whereas when many words can be read, graphemic representations have the highest utility.
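The Simplicity Principle comparison can be illustrated with a toy minimum-description-length sketch. Everything below (the lexicon, the one-symbol-per-letter pronunciations, and the crude code-length measure) is invented for illustration and is not the authors' actual model or corpus:

```python
from math import log2

# Toy lexicon: spelling -> pronunciation, one symbol per letter for simplicity.
lexicon = {
    "cat": "kat", "can": "kan", "mat": "mat", "man": "man",
    "sat": "sat", "tan": "tan", "rat": "rat", "ran": "ran",
}

def description_length(rules):
    """Crude code length: bits needed to list every distinct mapping rule."""
    symbols = {ch for rule in rules for ch in "".join(rule)}
    bits_per_symbol = log2(len(symbols))
    return sum(bits_per_symbol * (len(src) + len(snd)) for src, snd in rules)

# Whole-word units: one rule per word in the lexicon.
whole_word_rules = set(lexicon.items())

# Grapheme units: one rule per distinct letter-to-phoneme mapping.
grapheme_rules = {(g, p) for spelling, pron in lexicon.items()
                  for g, p in zip(spelling, pron)}

# Even with 8 words the grapheme code is already shorter, and its advantage
# grows as more words are added while the rule inventory stays small.
print(description_length(whole_word_rules), description_length(grapheme_rules))
```

For a handful of words the whole-word inventory can be the cheaper code; as the vocabulary grows, the fixed, reusable grapheme rules win, which is the crossover the abstract describes.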

2.
The so-called syllable position effect in speech errors has been interpreted as reflecting constraints posed by the frame structure of a given language, which operates separately from linguistic content during speech production. The effect refers to the phenomenon that when a speech error occurs, the replaced and replacing sounds tend to occupy the same position within a syllable or word. Most of the evidence for the effect comes from analyses of naturally occurring speech errors in Indo-European languages; few studies have examined the effect in experimentally elicited speech errors or in other languages. This study examined whether experimentally elicited sound errors in Japanese exhibit the syllable position effect. In Japanese, the sub-syllabic unit known as the "mora" is considered to be a basic sound unit in production. Results showed that the syllable position effect occurred in mora errors, suggesting that the frame constrains the ordering of sounds during speech production.

3.
This study investigated the encoding of syllable boundary information during speech production in Dutch. Based on Levelt's model of phonological encoding, we hypothesized segments and syllable boundaries to be encoded in an incremental way. In a self-monitoring experiment, decisions about the syllable affiliation (first or second syllable) of a pre-specified consonant, which was the third phoneme in a word, were required (e.g., ka.No 'canoe' vs. kaN.sel 'pulpit'; capital letters indicate pivotal consonants, dots mark syllable boundaries). First syllable responses were faster than second syllable responses, indicating the incremental nature of segmental encoding and syllabification during speech production planning. The results of the experiment are discussed in the context of Levelt's model of phonological encoding.

4.
MacNeilage, P. F. (1998). The Behavioral and Brain Sciences, 21(4), 499-511; discussion 511-546.
The species-specific organizational property of speech is a continual mouth open-close alternation, the two phases of which are subject to continual articulatory modulation. The cycle constitutes the syllable, and the open and closed phases are segments: vowels and consonants, respectively. The fact that segmental serial-ordering errors in normal adults obey syllable structure constraints suggests that syllabic "frames" and segmental "content" elements are separately controlled in the speech production process. The frames may derive from cycles of mandibular oscillation, present in humans from babbling onset, which are responsible for the open-close alternation. These communication-related frames perhaps first evolved when the ingestion-related cyclicities of mandibular oscillation (associated with mastication [chewing], sucking, and licking) took on communicative significance as lipsmacks, tonguesmacks, and teeth chatters, displays that are prominent in many nonhuman primates. The new role of Broca's area and its surround in human vocal communication may have derived from its evolutionary history as the main cortical center for the control of ingestive processes. The frame and content components of speech may have subsequently evolved separate realizations within two general-purpose primate motor control systems: (1) a motivation-related medial "intrinsic" system, including anterior cingulate cortex and the supplementary motor area, for self-generated behavior, formerly responsible for ancestral vocalization control and now also responsible for frames, and (2) a lateral "extrinsic" system, including Broca's area and its surround and Wernicke's area, specialized for response to external input (and therefore for the emergent vocal learning capacity) and more responsible for content.

5.
The article reviews the literature from psychology, phonetics, and phonology bearing on the production and perception of syllable timing in speech. A review of the psychological and phonetics literature suggests that the production of vowels and consonants is interleaved in syllable sequences in such a way that vowel production is continuous or nearly so. Based on that literature, a hypothesis is developed concerning the perception of syllable timing, assuming that vowel production is continuous. The hypothesis is that perceived syllable timing corresponds to the timing of the vowels as produced, and not to the timing either of vowel onsets as conventionally measured or of syllable-initial consonants. Three experiments support the hypothesis. One shows that information present during the portion of an acoustic signal in which a syllable-initial consonant predominates is used by listeners to identify the vowel. Compatibly, this information for the vowel contributes to the vowel's perceived duration. Finally, a measure of the perceived timing of a syllable correlates significantly with the time required to identify syllable-medial vowels but not with the time to identify the syllable-initial consonants. Further support for the proposed mode of vowel-consonant production and perception is derived from the literature on phonology. Language-specific phonological conventions can be identified that may reflect exaggerations and conventionalizations of the articulatory tendency for vowels to be produced continuously in speech.

6.
Despite many attempts to define the major unit of speech perception, none has been generally accepted. In a unique study, Mermelstein (1978) claimed that consonants and vowels are the appropriate units because a single piece of information (duration, in this case) can be used for one distinction without affecting the other. In a replication, this apparent independence was found, instead, to reflect a lack of statistical power: The vowel and consonant judgments did interact. In another experiment, interdependence of two phonetic judgments was found in responses based on the fricative noise and the vocalic formants of a fricative-vowel syllable. These results show that each judgment made on speech signals must take into account other judgments that compete for information in the same signal. An account is proposed that takes segments as the primary units, with syllables imposing constraints on the shape they may take.

7.
In five experiments with synthetic and natural speech syllables, a rating task was used to study the effects of differences in vowels, consonants, and segment order on judged syllable similarity. The results of Experiments I-IV support neither a purely phonemic model of speech representation, in which vowel, consonant, and order are represented independently, nor a purely syllabic model, in which the three factors are integrated. Instead, the data indicate that subjects compare representations in which adjacent vowel and consonant are independent of one another but are not independent of their positions in the syllable. Experiment V provided no support for the hypothesis that this position-sensitive coding is due to acoustic differences in formant transitions.

8.
Adults rapidly learn phonotactic constraints from brief production or perception experience. Three experiments asked whether this learning is modality-specific, occurring separately in production and perception, or whether perception transfers to production. Participant pairs took turns repeating syllables in which particular consonants were restricted to particular syllable positions. Speakers' errors reflected learning of the constraints present in the sequences they produced, regardless of whether their partner produced syllables with the same or with opposing constraints. Although partial transfer could be induced (Experiment 3), simply hearing and encoding syllables produced by others did not affect speech production to the extent that error patterns were altered. Learning of new phonotactic constraints was predominantly restricted to the modality in which those constraints were experienced.

9.
The ability to form perceptual equivalence classes from variable input stimuli is common in both animals and humans. Neural circuitry that can disambiguate ambiguous stimuli to arrive at perceptual constancy has been documented in the barn owl's inferior colliculus, where sound-source azimuth is signaled by interaural phase differences spanning the frequency spectrum of the sound wave. Extrapolating from the sound-localization system of the barn owl to human speech, two hypothetical models are offered to conceptualize the neural realization of relative invariance in (a) categorization of stop consonants /b, d, g/ across varying vowel contexts and (b) vowel identity across speakers. Two computational algorithms employing real speech data were used to establish acoustic commonalities to form neural mappings representing phonemic equivalence classes in the form of functional arrays similar to those seen in the barn owl.

10.
Categorical effects are found across speech sound categories, with the degree of these effects ranging from extremely strong categorical perception in consonants to nearly continuous perception in vowels. We show that both strong and weak categorical effects can be captured by a unified model. We treat speech perception as a statistical inference problem, assuming that listeners use their knowledge of categories as well as the acoustics of the signal to infer the intended productions of the speaker. Simulations show that the model provides close fits to empirical data, unifying past findings of categorical effects in consonants and vowels and capturing differences in the degree of categorical effects through a single parameter.
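The core of such an inference model can be sketched in a few lines. Assuming, as a single-category simplification, a Gaussian category and Gaussian acoustic noise (the variance values below are illustrative, not fitted to any data), the listener's best guess at the intended production is the posterior mean, which is pulled toward the category mean:

```python
def posterior_mean(signal, cat_mean, cat_var, noise_var):
    """E[T | S] for target T ~ N(cat_mean, cat_var) and signal S = T + noise,
    noise ~ N(0, noise_var): a precision-weighted average of signal and mean."""
    return (cat_var * signal + noise_var * cat_mean) / (cat_var + noise_var)

# A tight category (consonant-like) pulls the percept strongly toward its
# mean, producing strong categorical effects; a broad category (vowel-like)
# barely shifts it, leaving perception nearly continuous. A single variance
# parameter thus spans both regimes.
signal, mean = 2.0, 0.0
strong_shift = posterior_mean(signal, mean, cat_var=0.1, noise_var=1.0)
weak_shift = posterior_mean(signal, mean, cat_var=10.0, noise_var=1.0)
print(strong_shift, weak_shift)
```

Here the consonant-like setting maps the signal 2.0 to roughly 0.18 (close to the category mean), while the vowel-like setting leaves it near 1.82, mirroring the consonant/vowel contrast the abstract describes.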

11.
Speech is produced mainly in continuous streams containing several words. Listeners can use the transitional probability (TP) between adjacent and non-adjacent syllables to segment "words" from a continuous stream of artificial speech, much as they use TPs to organize a variety of perceptual continua. It is thus possible that a general-purpose statistical device exploits any speech unit to achieve segmentation of speech streams. Alternatively, language may limit what representations are open to statistical investigation according to their specific linguistic role. In this article, we focus on vowels and consonants in continuous speech. We hypothesized that vowels and consonants in words carry different kinds of information, the latter being more tied to word identification and the former to grammar. We thus predicted that in a word identification task involving continuous speech, learners would track TPs among consonants, but not among vowels. Our results show a preferential role for consonants in word identification.
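Adjacent-syllable TPs of the kind such studies assume listeners track can be computed directly. The three trisyllabic "words" and the word order below are invented for illustration:

```python
from collections import Counter

# Build a continuous stream from three invented trisyllabic "words"
# (a toy stand-in for an artificial-language stream).
words = [["tu", "pi", "ro"], ["go", "la", "bu"], ["da", "ko", "ti"]]
order = [0, 1, 2, 0, 2, 1, 1, 0, 2, 0, 1, 2, 2, 0, 1] * 4  # fixed mixed order
stream = [syllable for i in order for syllable in words[i]]

# TP(x -> y) = count(x immediately followed by y) / count(x).
pair_counts = Counter(zip(stream, stream[1:]))
first_counts = Counter(stream[:-1])

def tp(x, y):
    return pair_counts[(x, y)] / first_counts[x]

# Within-word transitions are deterministic (TP = 1.0), while transitions
# across a word boundary are split among the words that can follow, so TP
# dips at boundaries, which is the segmentation cue in question.
print(tp("tu", "pi"), tp("ro", "go"))
```

The same counting scheme extends to consonant-only or vowel-only TPs by first projecting the stream onto the relevant tier, which is the manipulation at issue in the abstract.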

12.
A Cantonese syllable-spotting experiment was conducted to examine whether the Possible-Word Constraint (PWC), proposed by Norris, McQueen, Cutler, and Butterfield (1997), applies in Cantonese speech segmentation. In the experiment, listeners were asked to spot a target Cantonese syllable in a series of nonsense sound strings. Results suggested that listeners found it more difficult to spot the target syllable [kDm1] in nonsense strings to which a single consonant was attached [tkDm1] than in strings to which either a vowel [a:kDm1] or a pseudo-syllable [khow1kDm1] was attached. The results further support the view that the PWC is a language-universal mechanism for segmenting continuous speech.
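The constraint itself is easy to state procedurally: a spotting response is disfavored if it strands a residue containing no vowel. A minimal sketch, using invented romanized strings that mirror the three conditions (the real stimuli are Cantonese, and a full model would also handle syllabic nasals and multiple occurrences of the target):

```python
VOWELS = set("aeiou")

def pwc_ok(context: str, target: str) -> bool:
    """Possible-Word Constraint check: spotting `target` inside `context` is
    acceptable only if every stranded residue is empty or contains a vowel
    (i.e., is at least a possible word)."""
    i = context.find(target)
    if i < 0:
        return False  # target not present at all
    residues = (context[:i], context[i + len(target):])
    return all(r == "" or any(ch in VOWELS for ch in r) for r in residues)

# Hypothetical romanized stand-ins for the three experimental conditions:
print(pwc_ok("tkam", "kam"))    # lone-consonant residue "t": PWC violation
print(pwc_ok("akam", "kam"))    # vowel residue "a": acceptable
print(pwc_ok("kowkam", "kam"))  # syllable residue "kow": acceptable
```

The predicted difficulty ordering follows directly: targets in the consonant-context strings violate the constraint, while the vowel and pseudo-syllable contexts leave viable residues.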

13.
Caramazza et al. describe two aphasic patients with impaired processing of vowels and consonants, respectively. The impairments could not be captured according to the sonority hierarchy or in terms of a feature-level analysis. Caramazza et al. claim that this dissociation demonstrates separate representation of the categories of vowels and consonants in speech processing. We present two connectionist models of the management of phonological representations. The models spontaneously develop separable processing of vowels and consonants. The models have two hidden layers and are given as input vowels and consonants represented in terms of their phonological distinctive features. The first model is presented with feature bundles one at a time, and the hidden layers have to combine their output to reproduce a unified copy of the feature bundle. In the second model, a "fine-coded" layer receives information about feature bundles in isolation, and a "coarse-coded" layer receives information about each feature bundle in the context of the prior and subsequent feature bundles. Coarse coding facilitated the processing of vowels, and fine coding the processing of consonants. These models show that separable processing of vowels and consonants is an emergent effect of modular processors operating on feature-based representations. We argue that it is not necessary to postulate an independent level of representation for the consonant/vowel distinction, separate from phonological distinctive features.

14.
American English liquids /r/ and /l/ have been considered intermediate between stop consonants and vowels acoustically, articulatorily, phonologically, and perceptually. Cutting (1974a) found position-dependent ear advantages for liquids in a dichotic listening task: syllable-initial liquids produced significant right-ear advantages, while syllable-final liquids produced no reliable ear advantages. The present study employed identification and discrimination tasks to determine whether /r/ and /l/ are perceived differently depending on syllable position when perception is tested by a different method. Fifteen subjects listened to two synthetically produced speech series (/li/ to /ri/ and /il/ to /ir/) in which stepwise variations of the third formant cued the difference in consonant identity. The results indicated that: (1) perception did not differ between syllable positions (in contrast to the dichotic listening results), (2) liquids in both syllable positions were perceived categorically, and (3) discrimination of a nonspeech control series did not account for the perception of the speech sounds.

15.
Over the past couple of decades, research has established that infants are sensitive to the predominant stress pattern of their native language. However, the degree to which the stress pattern shapes infants' language development has yet to be fully determined. Whether stress is merely a cue to help organize the patterns of speech or whether it is an important part of the representation of speech sound sequences has still to be explored. Building on research in the areas of infant speech perception and segmentation, we asked how several months of exposure to the target language shapes infants' speech processing biases with respect to lexical stress. We hypothesize that infants represent stressed and unstressed syllables differently, and employed analyses of child-directed speech to show how this change to the representational landscape results in better distribution-based word segmentation as well as an advantage for stress-initial syllable sequences. A series of experiments then tested 9- and 7-month-old infants on their ability to use lexical stress without any other cues present to parse sequences from an artificial language. We found that infants adopted a stress-initial syllable strategy and that they appear to encode stress information as part of their proto-lexical representations. Together, the results of these studies suggest that stress information in the ambient language not only shapes how statistics are calculated over the speech input, but that it is also encoded in the representations of parsed speech sequences.

16.
A detailed analysis of a unique speech disturbance, marked by the frequent appearance in the speech stream of a meaningless intrusive syllable, is presented. Following a lengthy thoracic surgery, an American English-speaking patient began to speak with non-English prosodic patterns, which evolved to a conspicuous intrusion in his speech of the syllable /sis/. This syllable and its variants were attached to words in a manner that conformed to the regular phonological rules in English (for formation of plural, possessive, and third person singular morphemes). The distribution and frequency of the intrusive syllable are described, and possible explanations for the abnormal occurrence of this particular syllable are discussed.

17.
Functional imaging studies have delineated a "minimal network for overt speech production", encompassing mesiofrontal structures (supplementary motor area, anterior cingulate gyrus), bilateral pre- and postcentral convolutions, extending rostrally into posterior parts of the inferior frontal gyrus (IFG) of the language-dominant hemisphere, left anterior insula as well as bilateral components of the basal ganglia, the cerebellum, and the thalamus. In order to further elucidate the specific contribution of these cerebral regions to speech motor planning, subjects were asked to read aloud visually presented bisyllabic pseudowords during functional magnetic resonance imaging (fMRI). The test stimuli systematically varied in onset complexity (CCV versus CV) and frequency of occurrence (high-frequency, HF versus low-frequency, LF) of the initial syllable. A cognitive subtraction approach revealed a significant main effect of syllable onset complexity (CCV versus CV) at the level of left posterior IFG, left anterior insula, and both cerebellar hemispheres. Conceivably, these areas closely cooperate in the sequencing of subsyllabic aspects of the sound structure of verbal utterances. A significant main effect of syllable frequency (LF versus HF), by contrast, did not emerge. However, calculation of the time series of hemodynamic activation within the various cerebral structures engaged in speech motor control revealed this factor to enhance functional connectivity between Broca's area and ipsilateral anterior insula.

18.
A patient with relatively pure word deafness showed extreme suppression of right ear signals under dichotic conditions, suggesting that speech signals were being processed in the right hemisphere. Systematic errors in the identification and discrimination of natural and synthetic stop consonants further indicated that speech sounds were not being processed in the normal manner. Auditory comprehension improved considerably, however, when the range of speech stimuli was limited by contextual constraints. Possible implications for the mechanism of word deafness are discussed.

19.
20.
Eleven-month-olds can recognize a few auditorily presented familiar words in experimental situations where no hints are given by the intonation, the situation, or the presence of possible visual referents. That is, infants of this age (and possibly somewhat younger) can recognize words based on sound patterns alone. The issue addressed in this article is what type of mental representation infants use to code the words they recognize. The results of a series of experiments with French-learning infants indicate that word representations in 11-month-olds are segmentally underspecified, and suggest that they are all the more underspecified when infants engage in recognizing words rather than merely attending to meaningless speech sounds. But underspecification has limits, which were explored here with respect to word-initial consonants. The last two experiments point the way to further investigation of these limits, for word-initial consonants as well as for segments in other word positions. In French, infants' word representations are flexible enough to allow for structural changes in the voicing or even in the manner of articulation of word-initial consonants. Word-initial consonants must be present, however, for words to be recognized. In conclusion, a parallel is proposed between the emerging capacities to ignore variations that are irrelevant for word recognition in a "lexical mode" and to ignore variations that are phonemically irrelevant in a "neutral mode" of listening to native speech.
