Related Articles
20 related articles found
1.
Perceptual discrimination between speech sounds belonging to different phoneme categories is better than that between sounds falling within the same category. This property, known as "categorical perception," is weaker in children affected by dyslexia. Categorical perception develops from the predispositions of newborns for discriminating all potential phoneme categories in the world's languages. Predispositions that are not relevant for phoneme perception in the ambient language are usually deactivated during early childhood. However, the current study shows that dyslexic children maintain a higher sensitivity to phonemic distinctions irrelevant in their linguistic environment. This suggests that dyslexic children use an allophonic mode of speech perception that, although without straightforward consequences for oral communication, has obvious implications for the acquisition of alphabetic writing. Allophonic perception specifically affects the mapping between graphemes and phonemes, contrary to other manifestations of dyslexia, and may be a core deficit.

2.
In two experiments, we investigated the factors that influence the perceived similarity of speech sounds at two developmental levels. Kindergartners and second graders were asked to classify nonsense words, which were related by syllable and phoneme correspondences. The results support the existence of a developmental trend toward increased attention to individual phonemic segments. Moreover, one significant factor in determining the perceived similarity of speech sounds appears to be the position of the component correspondences; attention to the beginning of utterances may have developmental priority. An unexpected finding was that the linguistic status of the unit involved in a correspondence (whether it was a syllable or a phoneme) did not seem particularly important. Apparently, the factors which contribute to the perceived similarity of speech sounds in the classification task are not identical to those which underlie performance in explicit segmentation and manipulation tasks, since in the latter sort of task, syllables are more accessible than phonemes for young children. The present task may tap a level of processing that is closer to the one entailed in word recognition and lexical access.

3.
During much of the past century, it was widely believed that phonemes—the human speech sounds that constitute words—have no inherent semantic meaning, and that the relationship between a combination of phonemes (a word) and its referent is simply arbitrary. Although recent work has challenged this picture by revealing psychological associations between certain phonemes and particular semantic contents, the precise mechanisms underlying these associations have not been fully elucidated. Here we provide novel evidence that certain phonemes have an inherent, non-arbitrary emotional quality. Moreover, we show that the perceived emotional valence of certain phoneme combinations depends on a specific acoustic feature—namely, the dynamic shift within the phonemes' first two frequency components. These data suggest a phoneme-relevant acoustic property influencing the communication of emotion in humans, and provide further evidence against previously held assumptions regarding the structure of human language. This finding has potential applications for a variety of social, educational, clinical, and marketing contexts.

4.
When learning language, young children are faced with many seemingly formidable challenges, including discovering words embedded in a continuous stream of sounds and determining what role these words play in syntactic constructions. We suggest that knowledge of phoneme distributions may play a crucial part in helping children segment words and determine their lexical category, and we propose an integrated model of how children might go from unsegmented speech to lexical categories. We corroborated this theoretical model using a two-stage computational analysis of a large corpus of English child-directed speech. First, we used transition probabilities between phonemes to find words in unsegmented speech. Second, we used distributional information about word edges – the beginning and ending phonemes of words – to predict whether the segmented words from the first stage were nouns, verbs, or something else. The results indicate that discovering lexical units and their associated syntactic category in child-directed speech is possible by attending to the statistics of single phoneme transitions and word-initial and final phonemes. Thus, we suggest that a core computational principle in language acquisition is that the same source of information is used to learn about different aspects of linguistic structure.
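The two-stage analysis described in this abstract can be sketched in a few lines of code. The sketch below is illustrative only, not the authors' implementation: the corpus format, the boundary threshold, and the noun/verb edge-count tables are all assumptions introduced here.

```python
from collections import defaultdict

# Stage 1: estimate phoneme transition probabilities from utterances and
# segment a stream wherever the probability of the next phoneme dips below
# a threshold (a free parameter, chosen here purely for illustration).
def train_transitions(utterances):
    """Estimate P(next phoneme | current phoneme) from phoneme-tuple utterances."""
    counts = defaultdict(lambda: defaultdict(int))
    for utt in utterances:
        for a, b in zip(utt, utt[1:]):
            counts[a][b] += 1
    return {a: {b: n / sum(nxt.values()) for b, n in nxt.items()}
            for a, nxt in counts.items()}

def segment(stream, probs, threshold=0.15):
    """Insert a word boundary at low-probability phoneme transitions."""
    words, word = [], [stream[0]]
    for a, b in zip(stream, stream[1:]):
        if probs.get(a, {}).get(b, 0.0) < threshold:
            words.append(tuple(word))
            word = []
        word.append(b)
    words.append(tuple(word))
    return words

# Stage 2: guess the lexical category of a segmented word from its edge
# phonemes, using hypothetical noun/verb edge counts from a tagged corpus.
def categorize(word, noun_edges, verb_edges):
    """Classify a word by how typical its (initial, final) phonemes are of nouns vs. verbs."""
    edge = (word[0], word[-1])
    n, v = noun_edges.get(edge, 0), verb_edges.get(edge, 0)
    if n == v == 0:
        return "other"
    return "noun" if n >= v else "verb"
```

Note how both stages consume the same raw information, phoneme statistics, which is the abstract's central claim about a shared computational principle.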

5.
Oppenheim, G. M., & Dell, G. S. (2008). Cognition, 106(1), 528-537.
Inner speech, that little voice that people often hear inside their heads while thinking, is a form of mental imagery. The properties of inner speech errors can be used to investigate the nature of inner speech, just as overt slips are informative about overt speech production. Overt slips tend to create words (lexical bias) and to involve exchanges between similar phonemes (the phonemic similarity effect). We examined these effects in inner and overt speech via a tongue-twister recitation task. While lexical bias was present in both inner and overt speech errors, the phonemic similarity effect was evident only for overt errors, producing a significant overtness-by-similarity interaction. We propose that inner speech is impoverished at lower (featural) levels, but robust at higher (phonemic) levels.

6.
This study tested the hypothesis that the sonority of phonemes (a sound's relative loudness compared to other sounds with the same length, stress, and pitch) influences children's segmentation of syllable constituents. Two groups of children, first graders and preschoolers, were assessed for their awareness of phonemes in coda and onset positions, respectively, using different phoneme segmentation tasks. Although the trends for the first graders were more robust than the trends for the preschoolers, phoneme segmentation in the two groups correlated with the sonority levels of phonemes, regardless of phoneme position or task. These results, consistent with prior studies of adults, suggest that perceptual properties, such as sonority levels, greatly influence the development of phoneme awareness.

7.
Lexical context strongly influences listeners’ identification of ambiguous sounds. For example, a sound midway between /f/ and /s/ is reported as /f/ in “sheri_” but as /s/ in “Pari_.” Norris, McQueen, and Cutler (2003) have demonstrated that after hearing such lexically determined phonemes, listeners expand their phonemic categories to include more ambiguous tokens than before. We tested whether listeners adjust their phonemic categories for a specific speaker: Do listeners learn a particular speaker’s “accent”? Similarly, we examined whether perceptual learning is specific to the particular ambiguous phonemes that listeners hear, or whether the adjustments generalize to related sounds. Participants heard ambiguous /d/ or /t/ phonemes during a lexical decision task. They then categorized sounds on /d/-/t/ and /b/-/p/ continua, either in the same voice that they had heard for lexical decision, or in a different voice. Perceptual learning generalized across both speaker and test continua: Changes in perceptual representations are robust and broadly tuned.

8.
A series of studies was undertaken to examine how rate normalization in speech perception would be influenced by the similarity, duration, and phonotactics of phonemes that were adjacent to, or distal from, the initial target phoneme. The duration of the adjacent (following) phoneme always had an effect on perception of the initial target. Neither phonotactics nor acoustic similarity seemed to have any influence on this rate normalization effect. However, effects of the duration of the nonadjacent (distal) phoneme were only found when that phoneme was temporally close to the target. These results suggest that there is a temporal window over which rate normalization occurs. In most cases, only the adjacent phoneme or the adjacent two phonemes will fall within this window and thus influence perception of a phoneme distinction.

9.
We explore the recent finding (Newman & Dell, 1978) that the time needed to detect a target phoneme in a phoneme monitoring task is increased when the preceding word contains a phoneme similar to the target. Normal adult native speakers of English monitored auditorily presented sentences and responded as quickly as possible whenever they detected a specified phoneme. We found that preceding word-initial phonemes, despite being processed more quickly, increased the response latency to the following target phoneme more than did preceding word-medial phonemes. There was also an increase in response latency even when the subject could be highly certain that the similar preceding phoneme was not an instance of the target phoneme. We argue that the interference effects are due to fundamental characteristics of perceptual processing and that more time is needed to categorize the target phoneme. We present a computer simulation using an interactive activation model of speech perception to demonstrate the plausibility of our explanation.
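The interference account can be illustrated with a toy interactive activation dynamic. This is a minimal sketch under stated assumptions, not the authors' actual simulation: all parameters are arbitrary, and the rejection-overshoot mechanism is one plausible reading of "more time is needed to categorize the target phoneme."

```python
def detection_latency(similarity, max_cycles=200):
    """Cycles for a target phoneme detector to cross threshold.

    Phase 1: the preceding phoneme activates the detector in proportion
    to its feature similarity to the target. Phase 2: rejecting the
    preceding phoneme as a non-target inhibits the detector, overshooting
    below its resting level. Phase 3: the true target arrives and the
    detector's activation climbs toward threshold."""
    rest, thresh, gain, overshoot, step = 0.0, 0.8, 0.4, 1.5, 0.15
    act = rest + gain * similarity       # phase 1: partial activation
    act -= overshoot * (act - rest)      # phase 2: rejection overshoots
    for cycle in range(1, max_cycles + 1):
        act += step * (1.0 - act)        # phase 3: approach ceiling of 1.0
        if act >= thresh:
            return cycle
    return max_cycles

print(detection_latency(0.0))  # dissimilar preceding phoneme: detected sooner
print(detection_latency(0.8))  # similar preceding phoneme: detected later
```

The monotone relationship (higher similarity, longer latency) is the qualitative pattern the abstract reports.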

10.
Auditory perception of speech and speech sounds was examined in three groups of patients with cerebral damage in the dominant hemisphere. Two groups consisted of brain-injured war veterans: one group of patients with high-frequency hearing loss and the other a group of patients with a flat hearing loss. The third group consisted of patients with recent cerebral infarcts due to vascular occlusion of the middle cerebral and internal carotid arteries. Word and phoneme discrimination, as well as phoneme confusions in incorrect responses, were analyzed from conventional speech audiometry tests with bisyllabic Finnish words presented close to the speech reception threshold of the patient. The results were compared with those of a control group with no cerebral disorders and normal hearing. The speech discrimination scores of veterans with high-frequency hearing loss and patients with recent cerebral infarcts were some 15–20% lower than those of controls or veterans with flat hearing loss. Speech sound feature discrimination, analyzed in terms of place of articulation and distinctive features, was distorted especially in cases of recent cerebral infarcts, whereas general information transmission of phonemes was more impaired in patients with high-frequency hearing loss.

11.
Earlier experiments have shown that when one or more speech sounds in a sentence are replaced by a noise meeting certain criteria, the listener mislocalizes the extraneous sound and believes he hears the missing phoneme(s) clearly. The present study confirms and extends these earlier reports of phonemic restorations under a variety of novel conditions. All stimuli had some of the context necessary for the appropriate phonemic restoration following the missing sound, and all sentences had the missing phoneme deliberately mispronounced before electronic deletion (so that the neighboring phonemes could not provide acoustic cues to aid phonemic restorations). The results are interpreted in terms of mechanisms normally aiding veridical perception of speech and nonspeech sounds.

12.
Ward, J., & Simner, J. (2003). Cognition, 89(3), 237-261.
This study documents an unusual case of developmental synaesthesia, in which speech sounds induce an involuntary sensation of taste that is subjectively located in the mouth. JIW shows a highly structured, non-random relationship between particular combinations of phonemes (rather than graphemes) and the resultant taste, and this is influenced by a number of fine-grained phonemic properties (e.g. allophony, phoneme ordering). The synaesthesia is not found for environmental sounds. The synaesthesia, in its current form, is likely to have originated during vocabulary acquisition, since it is guided by learned linguistic and conceptual knowledge. The phonemes that trigger a given taste tend to also appear in the name of the corresponding foodstuff (e.g. /I/, /n/ and /s/ can trigger a taste of mince /mIns/) and there is often a semantic association between the triggering word and taste (e.g. the word blue tastes "inky"). The results suggest that synaesthesia does not simply reflect innate connections from one perceptual system to another, but that it can be mediated and/or influenced by a symbolic/conceptual level of representation.

13.
Upon hearing an ambiguous speech sound dubbed onto lipread speech, listeners adjust their phonetic categories in accordance with the lipread information (recalibration) that tells what the phoneme should be. Here we used sine wave speech (SWS) to show that this tuning effect occurs if the SWS sounds are perceived as speech, but not if the sounds are perceived as non-speech. In contrast, selective speech adaptation occurred irrespective of whether listeners were in speech or non-speech mode. These results provide new evidence for the distinction between a speech and non-speech processing mode, and they demonstrate that different mechanisms underlie recalibration and selective speech adaptation.

14.
The necessity of a “levels-of-processing” approach in the study of mental representations is illustrated by the work on the psychological reality of the phoneme. On the basis of both experimental studies of human behavior and functional imaging data, it is argued that there are unconscious representations of phonemes in addition to conscious ones. These two sets of mental representations are functionally distinct: the former intervene in speech perception and (presumably) production; the latter are developed in the context of learning alphabetic literacy for both reading and writing purposes. Moreover, among phonological units and properties, phonemes may be the only ones to present a neural dissociation at the macro-anatomic level. Finally, it is argued that even if the representations used in speech perception and those used in assembling and in conscious operations are distinct, they may entertain dependency relations.

15.
An attempt was made to examine the manner in which consonants and vowels are coded in short-term memory under identical recall conditions. Subjects were presented with sequences of consonant-vowel digrams for serial recall. Sequences were composed of randomly presented consonants paired with /a/ or randomly presented vowels paired with /d/. Halle's distinctive feature system was used to generate predictions concerning the frequency of intrusion errors among phonemes. These predictions were based on the assumption that phonemes are discriminated in memory in terms of their component distinctive features, so that intrusions should most frequently occur between phonemes sharing similar distinctive features. The analysis of intrusion errors revealed that each consonant and vowel phoneme was coded in short-term memory by a particular combination of distinctive features which differed from one phoneme to another. A given phoneme was coded by the same set of distinctive features regardless of the number of syllables in the sequence. However, distinctive feature theories were not able to predict the frequency of intrusion errors for phonemes presented in the middle serial positions of a sequence with 100% accuracy. The results of the experiment support the notion that consonant and vowel phonemes are coded in a similar manner in STM and that this coding involves the retention of a specific set of distinctive features for each phoneme.
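The feature-based prediction of intrusion errors can be made concrete with a toy example. The feature table below is a simplified illustration, not Halle's full system; the phoneme set and feature values are assumptions chosen only to show the ranking logic.

```python
# Toy distinctive-feature table (binary values; a simplified subset).
FEATURES = {
    "b": {"consonantal": 1, "voiced": 1, "nasal": 0, "labial": 1, "continuant": 0},
    "p": {"consonantal": 1, "voiced": 0, "nasal": 0, "labial": 1, "continuant": 0},
    "m": {"consonantal": 1, "voiced": 1, "nasal": 1, "labial": 1, "continuant": 0},
    "s": {"consonantal": 1, "voiced": 0, "nasal": 0, "labial": 0, "continuant": 1},
}

def shared_features(x, y):
    """Count the distinctive features on which two phonemes agree."""
    fx, fy = FEATURES[x], FEATURES[y]
    return sum(fx[f] == fy[f] for f in fx)

def predicted_intrusions(target):
    """Rank candidate intrusions for a target phoneme: on the account
    above, the more features shared with the target, the more likely
    that phoneme is to intrude in recall."""
    others = [p for p in FEATURES if p != target]
    return sorted(others, key=lambda p: shared_features(target, p), reverse=True)

# /p/ and /m/ each differ from /b/ by one feature, /s/ by three, so /p/
# and /m/ are predicted to intrude on /b/ far more often than /s/ does.
print(predicted_intrusions("b"))
```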

16.
Phonemes play a central role in traditional theories as units of speech perception and access codes to lexical representations. Phonemes have two essential properties: they are ‘segment-sized’ (the size of a consonant or vowel) and abstract (a single phoneme may have different acoustic realisations). Nevertheless, there is a long history of challenging the phoneme hypothesis, with some theorists arguing for differently sized phonological units (e.g. features or syllables) and others rejecting abstract codes in favour of representations that encode detailed acoustic properties of the stimulus. The phoneme hypothesis is the minority view today. We defend the phoneme hypothesis in two complementary ways. First, we show that rejection of phonemes is based on a flawed interpretation of empirical findings. For example, it is commonly argued that the failure to find acoustic invariances for phonemes rules out phonemes. However, the lack of invariance is only a problem on the assumption that speech perception is a bottom-up process. If learned sublexical codes are modified by top-down constraints (which they are), then this argument loses all force. Second, we provide strong positive evidence for phonemes on the basis of linguistic data. Almost all findings that are taken (incorrectly) as evidence against phonemes are based on psycholinguistic studies of single words. However, phonemes were first introduced in linguistics, and the best evidence for phonemes comes from linguistic analyses of complex word forms and sentences. In short, the rejection of phonemes is based on a false analysis and a too-narrow consideration of the relevant data.

17.
Phonemic restoration is a powerful auditory illusion that arises when a phoneme is removed from a word and replaced with noise, resulting in a percept that sounds like the intact word with a spurious bit of noise. It is hypothesized that the configurational properties of the word impair attention to the individual phonemes and thereby induce perceptual restoration of the missing phoneme. If so, this impairment might be unlearned if listeners can process individual phonemes within a word selectively. Subjects received training with the potentially restorable stimuli (972 trials with feedback); in addition, the presence or absence of an attentional cue, contained in a visual prime preceding each trial, was varied between groups of subjects. Cuing the identity and location of the critical phoneme of each test word allowed subjects to attend to the critical phoneme, thereby inhibiting the illusion, but only when the prime also identified the test word itself. When the prime provided only the identity or location of the critical phoneme, or only the identity of the word, subjects performed identically to those subjects for whom the prime contained no information at all about the test word. Furthermore, training did not produce any generalized learning about the types of stimuli used. A limited interactive model of auditory word perception is discussed in which attention operates through the lexical level.

18.
Teinonen, T., Aslin, R. N., Alku, P., & Csibra, G. (2008). Cognition, 108(3), 850-855.
Previous research has shown that infants match vowel sounds to facial displays of vowel articulation [Kuhl, P. K., & Meltzoff, A. N. (1982). The bimodal perception of speech in infancy. Science, 218, 1138–1141; Patterson, M. L., & Werker, J. F. (1999). Matching phonetic information in lips and voice is robust in 4.5-month-old infants. Infant Behaviour & Development, 22, 237–247], and integrate seen and heard speech sounds [Rosenblum, L. D., Schmuckler, M. A., & Johnson, J. A. (1997). The McGurk effect in infants. Perception & Psychophysics, 59, 347–357; Burnham, D., & Dodd, B. (2004). Auditory-visual speech integration by prelinguistic infants: Perception of an emergent consonant in the McGurk effect. Developmental Psychobiology, 45, 204–220]. However, the role of visual speech in language development remains unknown. Our aim was to determine whether seen articulations enhance phoneme discrimination, thereby playing a role in phonetic category learning. We exposed 6-month-old infants to speech sounds from a restricted range of a continuum between /ba/ and /da/, following a unimodal frequency distribution. Synchronously with these speech sounds, one group of infants (the two-category group) saw a visual articulation of a canonical /ba/ or /da/, with the two alternative visual articulations, /ba/ and /da/, being presented according to whether the auditory token was on the /ba/ or /da/ side of the midpoint of the continuum. Infants in a second (one-category) group were presented with the same unimodal distribution of speech sounds, but every token for any particular infant was always paired with the same syllable, either a visual /ba/ or a visual /da/. A stimulus-alternation preference procedure following the exposure revealed that infants in the former, and not in the latter, group discriminated the /ba/–/da/ contrast. These results not only show that visual information about speech articulation enhances phoneme discrimination, but also that it may contribute to the learning of phoneme boundaries in infancy.

19.
The phonological processing units of speech production are language-specific. In Indo-European languages, the phoneme is an important functional unit of phonological processing. A phoneme is the smallest sound unit that distinguishes meaning in a given language; for example, "big" contains the three phonemes /b/, /i/, /g/. To date, phonemes have received little attention in research on Chinese speech production. This project will use event-related potential (ERP) techniques to investigate phoneme processing in Chinese speech production, addressing two questions: 1) Is phoneme processing psychologically real in Chinese speech production, and is phoneme representation influenced by a second language, by the acquisition of Hanyu Pinyin, and by experience using Pinyin? 2) What is the processing mechanism of phonemes; specifically, what are the specificity, positional encoding, combinatorial structure, and time course of phoneme processing? Answering these questions will deepen our understanding of Chinese speech production, lay a foundation for building computational models of Chinese speech production and for comparing the mechanisms of Indo-European languages and Chinese, and provide a psychological basis for methods of teaching Chinese phonetics.

20.
In previous research a discriminative relationship has been established between patterns of covert speech behavior and the phonemic system when processing continuous linguistic material. The goal of the present research was to be more analytic and pinpoint covert neuromuscular speech patterns when one processes specific instances of phonemes. Electromyographic (EMG) recording indicated that the lips are significantly active when visually processing the letter “P” (an instance of bilabial material), but not when processing the letter “T” or a nonlinguistic control (C) stimulus. Similarly, the tongue is significantly active when processing the letter “T” (an instance of lingual-alveolar material), but not when processing the letters “P” or “C.” It is concluded that the speech musculature covertly responds systematically as a function of the class of phoneme being processed. These results accord with our model that semantic processing (“understanding”) occurs when the speech (and other) musculature interacts with linguistic regions of the brain. In these interactions, phonetic coding is generated and transmitted through neuromuscular circuits that have cybernetic characteristics.
