Similar Documents (20 results)
1.
During much of the past century, it was widely believed that phonemes—the human speech sounds that constitute words—have no inherent semantic meaning, and that the relationship between a combination of phonemes (a word) and its referent is simply arbitrary. Although recent work has challenged this picture by revealing psychological associations between certain phonemes and particular semantic contents, the precise mechanisms underlying these associations have not been fully elucidated. Here we provide novel evidence that certain phonemes have an inherent, non-arbitrary emotional quality. Moreover, we show that the perceived emotional valence of certain phoneme combinations depends on a specific acoustic feature—namely, the dynamic shift within the phonemes' first two frequency components. These data suggest a phoneme-relevant acoustic property influencing the communication of emotion in humans, and provide further evidence against previously held assumptions regarding the structure of human language. This finding has potential applications for a variety of social, educational, clinical, and marketing contexts.

2.
When learning language, young children are faced with many seemingly formidable challenges, including discovering words embedded in a continuous stream of sounds and determining what role these words play in syntactic constructions. We suggest that knowledge of phoneme distributions may play a crucial part in helping children segment words and determine their lexical category, and we propose an integrated model of how children might go from unsegmented speech to lexical categories. We corroborated this theoretical model using a two-stage computational analysis of a large corpus of English child-directed speech. First, we used transition probabilities between phonemes to find words in unsegmented speech. Second, we used distributional information about word edges (the beginning and ending phonemes of words) to predict whether the segmented words from the first stage were nouns, verbs, or something else. The results indicate that discovering lexical units and their associated syntactic category in child-directed speech is possible by attending to the statistics of single phoneme transitions and word-initial and final phonemes. Thus, we suggest that a core computational principle in language acquisition is that the same source of information is used to learn about different aspects of linguistic structure.
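
The two-stage analysis can be sketched compactly. Below is a minimal illustration, not the authors' model: stage 1 posits word boundaries at local minima of phoneme-to-phoneme transition probability, and stage 2 extracts the word-edge cues (initial and final phonemes) that would feed a lexical-category classifier. The single-character phoneme coding and the toy corpus are invented for the example.

```python
from collections import defaultdict

def train_transitions(utterances):
    """Estimate P(next phoneme | current phoneme) from space-free
    phoneme strings, one utterance per list element."""
    counts = defaultdict(lambda: defaultdict(int))
    for utt in utterances:
        for a, b in zip(utt, utt[1:]):
            counts[a][b] += 1
    return {a: {b: n / sum(nxt.values()) for b, n in nxt.items()}
            for a, nxt in counts.items()}

def segment(utt, trans):
    """Stage 1: posit a word boundary wherever the transition
    probability dips below both of its neighbours (a local minimum)."""
    probs = [trans.get(a, {}).get(b, 0.0) for a, b in zip(utt, utt[1:])]
    words, start = [], 0
    for i in range(1, len(probs) - 1):
        if probs[i] < probs[i - 1] and probs[i] < probs[i + 1]:
            words.append(utt[start:i + 1])
            start = i + 1
    words.append(utt[start:])
    return words

def edge_features(word):
    """Stage 2: the cues used to predict lexical category are the
    word-initial and word-final phonemes."""
    return (word[0], word[-1])

# Invented toy corpus in a made-up one-character-per-phoneme coding.
corpus = ["DAdOgi", "DAkAtIz", "DAdOgran"]
trans = train_transitions(corpus)
for w in segment("DAdOgi", trans):
    print(w, edge_features(w))
```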

3.
Deaf readers can use visual phonemic awareness to decode the phonology of written words. Consistent with the phoneme-position effect found in hearing readers, deaf readers are more sensitive to the initial consonants (onsets) of Chinese characters, but it was unclear whether a similar onset advantage also holds for two-character words. Using a phoneme-recognition task, we examined the onset advantage in deaf readers' recognition of two-character words and its sources. Study 1 manipulated the encoding format to probe the role of fingerspelling: an onset-recognition advantage emerged, indicating that deaf readers can decode the phonology of two-character words through both pinyin letters and fingerspelling, and that encoding format does not modulate the onset advantage. Study 2 further manipulated character position to test how sequential phoneme processing contributes to the onset advantage, and found an onset advantage, a first-character advantage, and a first-character-onset advantage, indicating that deaf readers process the phonemes of two-character words from left to right at both the character level and the within-syllable level, as hearing readers do, while also being specially influenced by fingerspelled onsets. Overall, the findings show that deaf readers' recognition of Chinese phonology rests on visual phonemic awareness, and that the onset-recognition advantage in two-character words arises jointly from fingerspelling's reinforcement of onsets, character-position effects within the word, and phoneme-position effects within the syllable.

4.
Research employing three large lists of words rated along emotional dimensions (total N = 15,761 words) supported a prior claim that most phonemes have a distinct emotional character. Different phonemes tended to occur more often in different types of emotional words. When phonemes were grouped along eight radii in a two-dimensional emotional space defined by Pleasantness and Activation (Pleasantness, Cheeriness, Activation, Nastiness, Unpleasantness, Sadness, Passivity, and Softness), it became possible to draw profiles of texts in terms of their preferential use of different classes of phonemes. Four experiments illustrated the manner in which phonemes in nonsense words are related to emotion, and the validity of the character assignments received support in three further analyses. The emotionality of phonemes was related both to place and manner of articulation and to properties of the auditory signal itself. Phonoemotional profiles were drawn for several types of material and provided supporting evidence for the validity of assigning emotional character to phonemes.
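
The profiling step lends itself to a short sketch: count how often each of the eight emotional classes occurs in a phonemically transcribed text and normalize. The phoneme-to-class assignments below are hypothetical placeholders; the study derived its actual assignments empirically from the rated word lists.

```python
from collections import Counter

CLASSES = ["Pleasantness", "Cheeriness", "Activation", "Nastiness",
           "Unpleasantness", "Sadness", "Passivity", "Softness"]

# Hypothetical assignments for illustration only; the study's were
# derived from ~15,761 emotionally rated words.
PHONEME_CLASS = {"l": "Softness", "m": "Softness", "i": "Cheeriness",
                 "k": "Activation", "g": "Nastiness", "u": "Sadness"}

def phonoemotional_profile(phonemes):
    """Return the proportion of classified phonemes falling in each of
    the eight emotional classes (unclassified phonemes are skipped)."""
    hits = Counter(PHONEME_CLASS[p] for p in phonemes if p in PHONEME_CLASS)
    total = sum(hits.values()) or 1
    return {c: hits[c] / total for c in CLASSES}

print(phonoemotional_profile(list("limikug")))
```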

5.
In this paper, we propose a new version of the phoneme monitoring task that is well-suited for the study of lexical processing. The generalized phoneme monitoring (GPM) task, in which subjects detect target phonemes appearing anywhere in the test words, was shown to be sensitive to associative context effects. In Experiment 1, using the standard phoneme monitoring procedure in which subjects detect only word-initial targets, no effect of associative context was obtained. In contrast, clear context effects were observed in Experiment 2, which used the GPM task. Subjects responded faster to word-initial and word-medial targets when the target-bearing words were preceded by an associatively related word than when preceded by an unrelated one. The differential effect of context in the two versions of the phoneme monitoring task was interpreted with reference to task demands and their role in directing selective attention. Experiment 3 showed that the size of the context effect was unaffected by the proportion of related words in the experiment, suggesting that the observed effects were not due to subject strategies.

6.
In three experiments, the processing of words that had the same overall number of neighbors but varied in the spread of the neighborhood (i.e., the number of individual phonemes that could be changed to form real words) was examined. In an auditory lexical decision task, a naming task, and a same-different task, words in which changes at only two phoneme positions formed neighbors were responded to more quickly than words in which changes at all three phoneme positions formed neighbors. Additional analyses ruled out an account based on the computationally derived uniqueness points of the words. Although previous studies (e.g., Luce & Pisoni, 1998) have shown that the number of phonological neighbors influences spoken word recognition, the present results show that the nature of the relationship of the neighbors to the target word, as measured by the spread of the neighborhood, also influences spoken word recognition. The implications of this result for models of spoken word recognition are discussed.
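
Neighborhood spread is directly computable. A minimal sketch, assuming the common one-phoneme-substitution definition of a neighbor (neighbor counts in this literature may also include additions and deletions) and using a toy inventory and lexicon:

```python
INVENTORY = list("abdegiklmnoprstu")          # toy phoneme inventory
LEXICON = {"kat", "bat", "kit", "kap", "rat"}  # toy phonemic lexicon

def neighbors_by_position(word):
    """For each phoneme position, collect the real words formed by
    substituting a single phoneme at that position."""
    out = []
    for i in range(len(word)):
        subs = {word[:i] + p + word[i + 1:] for p in INVENTORY}
        out.append((subs - {word}) & LEXICON)
    return out

def spread(word):
    """Spread = number of positions at which at least one substitution
    yields a real word (the quantity manipulated in the study)."""
    return sum(1 for s in neighbors_by_position(word) if s)

print(neighbors_by_position("kat"))  # per-position neighbors
print(spread("kat"))                 # 3: every position yields a neighbor
```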

7.
The purpose of the current study was to examine blending and segmenting of phonemes as an instance of small, textual response classes that students learn to combine to produce whole word reading. Using an A/B/A/B design, a phoneme segmenting and blending condition that included differential reinforcement for response classes at the level of phonemes was compared to a control condition that was equated for differential reinforcement of reading words and opportunities to respond. The critical difference between conditions was the size of the responses that were brought under stimulus control (phonemes versus whole words). Findings clearly supported the superiority of the phoneme blending treatment condition over the control condition in producing generalized increases in word reading. The results are discussed in terms of the behavioral mechanisms that govern early literacy and the essential role that targeting measured increases in academic responses plays in furthering our understanding of how to improve the analysis and instruction of students who need to learn these important skills.

8.
Languages differ in the constitution of their phonemic repertoire and in the relative distinctiveness of phonemes within the repertoire. In the present study, we asked whether such differences constrain spoken-word recognition, via two word reconstruction experiments, in which listeners turned non-words into real words by changing single sounds. The experiments were carried out in Dutch (which has a relatively balanced vowel-consonant ratio and many similar vowels) and in Spanish (which has many more consonants than vowels and high distinctiveness among the vowels). Both Dutch and Spanish listeners responded significantly faster and more accurately when required to change vowels as opposed to consonants; when allowed to change any phoneme, they more often altered vowels than consonants. Vowel information thus appears to constrain lexical selection less tightly (allow more potential candidates) than does consonant information, independent of language-specific phoneme repertoire and of relative distinctiveness of vowels.

9.
We explore the recent finding (Newman & Dell, 1978) that the time needed to detect a target phoneme in a phoneme monitoring task is increased when the preceding word contains a phoneme similar to the target. Normal adult native speakers of English monitored auditorily presented sentences and responded as quickly as possible whenever they detected a specified phoneme. We found that preceding word-initial phonemes, despite being processed more quickly, increased the response latency to the following target phoneme more than did preceding word-medial phonemes. There was also an increase in response latency even when the subject could be highly certain that the similar preceding phoneme was not an instance of the target phoneme. We argue that the interference effects are due to fundamental characteristics of perceptual processing and that more time is needed to categorize the target phoneme. We present a computer simulation using an interactive activation model of speech perception to demonstrate the plausibility of our explanation.
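
The interference account can be caricatured in a few lines. This toy sketch is not the authors' simulation: invented feature vectors determine phoneme similarity, and residual activation left by a similar preceding phoneme inhibits the target unit, so the target needs more processing cycles to reach the categorization threshold. All parameters are illustrative.

```python
FEATURES = {  # hypothetical binary feature vectors
    "b": (1, 0, 1), "p": (1, 0, 0), "s": (0, 1, 0),
}

def similarity(x, y):
    """Proportion of matching features between two phonemes."""
    return sum(a == b for a, b in zip(FEATURES[x], FEATURES[y])) / 3

def cycles_to_categorize(target, preceding, threshold=1.0,
                         gain=0.1, inhibition=0.05):
    """Cycles for the target unit to cross threshold; a similar
    preceding phoneme leaves residual competitor activation that
    slows the target's rise."""
    residual = similarity(target, preceding) if preceding else 0.0
    act, cycles = 0.0, 0
    while act < threshold:
        act += gain - inhibition * residual
        cycles += 1
    return cycles

print(cycles_to_categorize("b", None))  # baseline: 10 cycles
print(cycles_to_categorize("b", "p"))   # similar prime: slower (15)
print(cycles_to_categorize("b", "s"))   # dissimilar prime: baseline (10)
```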

10.
Rhyme, rime, and the onset of reading
There is recent evidence that children naturally divide syllables into the opening consonant or consonant cluster (the onset) and the rest of the syllable (the rime). This suggests an explanation for the fact that preschool children are sensitive to rhyme, but often find tasks in which they have to isolate single phonemes extremely difficult. Words which rhyme share a common rime and thus can be categorized on that speech unit. Single phonemes on the other hand may only be part of one of these speech units. This analysis leads to some clear predictions. Young children, even children not yet able to read, should manage to categorize words on the basis of a single phoneme when the phoneme coincides with the word's onset ("cat," "cup") but not when it is only part of the rime ("cat," "pit"). They should find it easier to work out that two monosyllabic words have a common vowel which is not shared by another word when all three words end with the same consonant ("lip," "hop," "tip") than when the three words all start with the same consonant ("cap," "can," "cot") and thus all share the same onset. The hypothesis also suggests that children should be aware of single phonemes when these coincide with the onset before they learn to read. We tested these predictions in two studies of children aged 5, 6, and 7 years. The results clearly support these predictions.
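
The onset/rime decomposition driving these predictions is mechanically simple. A minimal sketch, using orthography as a stand-in for phonology (a real analysis would operate on phoneme transcriptions):

```python
VOWELS = set("aeiou")

def split_syllable(word):
    """Split a (simplified, orthographic) monosyllable into onset
    (initial consonant cluster, possibly empty) and rime (the rest)."""
    for i, ch in enumerate(word):
        if ch in VOWELS:
            return word[:i], word[i:]
    return word, ""

for w in ["cat", "cup", "pit", "string"]:
    print(w, split_syllable(w))

# Words rhyme when their rimes match: "cat" and "pit" share a phoneme
# (/t/) but not a rime, while "cat" and "hat" share the rime "at" and
# so can be categorized together on that unit.
print(split_syllable("cat")[1] == split_syllable("hat")[1])  # True
```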

11.
Preschool children's ability to segment and blend real words and nonsense words, with and without consonant clusters, was investigated in two experiments. In the first experiment, preschool children's ability to segment real words into phonemes was examined. Readers performed better than nonreaders on a phoneme counting task, and words containing consonant clusters were harder to segment than words without consonant clusters. In the second experiment, the ability to segment and blend nonsense words was investigated. Nonreaders had significantly more difficulty with nonsense words than readers in both a phoneme synthesis and a phoneme analysis task. A two-way interaction between reading level and word type showed that nonsense words containing consonant clusters were particularly difficult for nonreaders. The results are discussed in relation to theories suggesting that syllables consist of an onset and a rime.

12.
Speech perception without hearing
In this study of visual phonetic speech perception without accompanying auditory speech stimuli, adults with normal hearing (NH; n = 96) and with severely to profoundly impaired hearing (IH; n = 72) identified consonant-vowel (CV) nonsense syllables and words in isolation and in sentences. The measures of phonetic perception were the proportion of phonemes correct and the proportion of transmitted feature information for CVs, the proportion of phonemes correct for words, and the proportion of phonemes correct and the amount of phoneme substitution entropy for sentences. The results demonstrated greater sensitivity to phonetic information in the IH group. Transmitted feature information was related to isolated word scores for the IH group, but not for the NH group. Phoneme errors in sentences were more systematic in the IH than in the NH group. Individual differences in phonetic perception for CVs were more highly associated with word and sentence performance for the IH than for the NH group. The results suggest that the necessity to perceive speech without hearing can be associated with enhanced visual phonetic perception in some individuals.
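
Transmitted feature information is conventionally the mutual information between stimulus and response feature categories, estimated from a confusion matrix (as in the Miller and Nicely tradition), and the reported proportion divides this by the stimulus entropy. A minimal sketch with an invented voicing-feature confusion matrix:

```python
import math

def mutual_information(conf):
    """Mutual information (bits) between stimulus and response,
    estimated from a confusion-count matrix {stim: {resp: count}}."""
    total = sum(sum(row.values()) for row in conf.values())
    p_s = {s: sum(row.values()) / total for s, row in conf.items()}
    p_r = {}
    for row in conf.values():
        for resp, n in row.items():
            p_r[resp] = p_r.get(resp, 0.0) + n / total
    mi = 0.0
    for s, row in conf.items():
        for resp, n in row.items():
            if n:
                p_sr = n / total
                mi += p_sr * math.log2(p_sr / (p_s[s] * p_r[resp]))
    return mi

def proportion_transmitted(conf):
    """Normalize by stimulus entropy to obtain a 0-1 proportion."""
    total = sum(sum(row.values()) for row in conf.values())
    h_s = -sum((sum(row.values()) / total)
               * math.log2(sum(row.values()) / total)
               for row in conf.values())
    return mutual_information(conf) / h_s

# Invented voicing-feature confusion matrix (stimulus -> response counts).
conf = {"voiced": {"voiced": 40, "voiceless": 10},
        "voiceless": {"voiced": 8, "voiceless": 42}}
print(round(proportion_transmitted(conf), 3))  # ~0.32 for this matrix
```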

13.
In three experiments, we examined priming effects in which primes were formed by transposing the first and last phonemes of tri-phonemic target words (e.g., /byt/ as a prime for /tyb/). Auditory lexical decisions were not sensitive to this transposed-phoneme priming manipulation in long-term priming (Experiment 1), with primes and targets presented in two separate blocks of stimuli and with unrelated primes (/mul/-/tyb/) as the control condition, although a long-term repetition priming effect was observed (/tyb/-/tyb/). However, a clear transposed-phoneme priming effect was found in two short-term priming experiments (Experiments 2 and 3), with primes and targets presented in close temporal succession. The transposed-phoneme priming effect was found when unrelated prime-target pairs (/mul/-/tyb/) were used as the control and, more importantly, when prime-target pairs sharing the medial vowel (/pys/-/tyb/) served as the control condition, indicating that the effect is not due to vocalic overlap. Finally, in Experiment 3, a transposed-phoneme priming effect was found when primes sharing the medial vowel plus one consonant in an incorrect position with the targets (/byl/-/tyb/) served as the control condition, and this condition did not differ significantly from the vowel-only condition. Altogether, these results provide further evidence for a role for position-independent phonemes in spoken word recognition, such that a phoneme at a given position in a word also provides evidence for the presence of words that contain that phoneme at a different position.
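
The prime types are easy to derive mechanically from a tri-phonemic target. A minimal sketch; the "?" slots mark phonemes that must be filled by selecting real control words from the lexicon (e.g., /pys/, /byl/, and /mul/ for /tyb/), which this code does not attempt:

```python
def prime_conditions(target):
    """Prime types for a tri-phonemic target (e.g. /tyb/), following
    the manipulation described above."""
    c1, v, c2 = target  # onset consonant, medial vowel, coda consonant
    return {
        "repetition": target,                      # /tyb/ -> /tyb/
        "transposed": c2 + v + c1,                 # /tyb/ -> /byt/
        "vowel_only_control": "?" + v + "?",       # e.g. /pys/
        "vowel_plus_misplaced_consonant": c2 + v + "?",  # e.g. /byl/
        "unrelated_control": "???",                # e.g. /mul/
    }

print(prime_conditions("tyb"))
# {'repetition': 'tyb', 'transposed': 'byt', 'vowel_only_control': '?y?', ...}
```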

14.
Cognition, 1986, 22(3), 259–282
Predictions derived from the Cohort Model of spoken word recognition were tested in four experiments using an auditory lexical decision task. The first experiment produced results compatible with the model, in that the point at which a word could be uniquely identified appeared to influence reaction times. The second and third experiments demonstrated that the processing of a nonword phoneme string continues after the point at which no possible continuation would make a word. Both the number of phonemes following the point of deviation from a word and the similarity of the nonword to a word were shown to affect reaction times. The final experiment demonstrated a frequency effect when high- and low-frequency words were matched on their point of unique identity. These last three results are not consistent with the Cohort Model, so an alternative account is put forward. According to this account, the first few phonemes are used to activate all words beginning with those phonemes, and these candidates are then checked back against the original stimulus. This model provides greater flexibility than the Cohort Model and allows mispronounced and misperceived words to be correctly recognized.
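
The "point of unique identity" is computable directly from a phonemic lexicon: it is the first position at which a word's prefix is shared with no other word. A minimal sketch with a toy lexicon in an invented phoneme coding:

```python
def uniqueness_point(word, lexicon):
    """Return the 1-based position of the phoneme at which `word`
    diverges from every other lexicon entry, or None if the word is
    never unique (e.g. it is a prefix of another word)."""
    others = [w for w in lexicon if w != word]
    for i in range(1, len(word) + 1):
        prefix = word[:i]
        if not any(w.startswith(prefix) for w in others):
            return i
    return None

lexicon = ["kaptIn", "kaptIv", "kandl", "trespas"]
for w in lexicon:
    print(w, uniqueness_point(w, lexicon))
# "trespas" is unique at its first phoneme; "kaptIn" only at its sixth.
```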

15.
Through computational modeling, here we examine whether visual and task characteristics of writing systems alone can account for lateralization differences in visual word recognition between different languages without assuming influence from left hemisphere (LH) lateralized language processes. We apply a hemispheric processing model of face recognition to visual word recognition; the model implements a theory of hemispheric asymmetry in perception that posits low spatial frequency biases in the right hemisphere and high spatial frequency (HSF) biases in the LH. We show two factors that can influence lateralization: (a) Visual similarity among words: The more similar the words in the lexicon look visually, the more HSF/LH processing is required to distinguish them, and (b) Requirement to decompose words into graphemes for grapheme-phoneme mapping: Alphabetic reading (involving grapheme-phoneme conversion) requires more HSF/LH processing than logographic reading (no grapheme-phoneme mapping). These factors may explain the difference in lateralization between English and Chinese orthographic processing.

16.
In the current study, late Chinese–English bilinguals performed a facial expression identification task with emotion words in the task-irrelevant dimension, in either their first language (L1) or second language (L2). The investigation examined the automatic access of the emotional content of words across the two languages. Significant congruency effects were present for both L1 and L2 emotion word processing. Furthermore, the magnitude of the emotional face-word Stroop effect was greater in the L1 task than in the L2 task, indicating that participants could access the emotional information in L1 words more reliably. In summary, these findings provide further support for the automatic access of emotional information in words in both of a bilingual's languages, as well as for the attenuated emotionality of L2 processing.

17.
Phonemes play a central role in traditional theories as units of speech perception and access codes to lexical representations. Phonemes have two essential properties: they are ‘segment-sized’ (the size of a consonant or vowel) and abstract (a single phoneme may have different acoustic realisations). Nevertheless, there is a long history of challenging the phoneme hypothesis, with some theorists arguing for differently sized phonological units (e.g. features or syllables) and others rejecting abstract codes in favour of representations that encode detailed acoustic properties of the stimulus. The phoneme hypothesis is the minority view today. We defend the phoneme hypothesis in two complementary ways. First, we show that rejection of phonemes is based on a flawed interpretation of empirical findings. For example, it is commonly argued that the failure to find acoustic invariances for phonemes rules out phonemes. However, the lack of invariance is only a problem on the assumption that speech perception is a bottom-up process. If learned sublexical codes are modified by top-down constraints (which they are), then this argument loses all force. Second, we provide strong positive evidence for phonemes on the basis of linguistic data. Almost all findings that are taken (incorrectly) as evidence against phonemes are based on psycholinguistic studies of single words. However, phonemes were first introduced in linguistics, and the best evidence for phonemes comes from linguistic analyses of complex word forms and sentences. In short, the rejection of phonemes is based on a false analysis and a too-narrow consideration of the relevant data.

18.
Children (4 to 6 years of age) were taught to associate printed 3- or 4-letter abbreviations, or cues, with spoken words (e.g., bfr for beaver). All but one of the letters in each cue corresponded to phonemes in the spoken target word. Two types of cues were constructed: phonetic cues, in which the medial letter was phonetically similar to the target word, and control cues, in which the central phoneme was phonetically dissimilar. In Experiment 1, children learned the phonetic cues better than the control cues, and learning correlated with measures of phonological skill and knowledge of the meanings of the words taught. In Experiment 2, the target words differed on a semantic variable, imageability, and learning was influenced by both the phonetic properties of the cue and the imageability of the words used.

19.
Reading a word may involve the spoken language in two ways: in the conversion of letters to phonemes according to the conventions of the language’s writing system and the assimilation of phonemes according to the language’s constraints on speaking. If so, then words that require assimilation when uttered would require a change in the phonemes produced by grapheme-phoneme conversion when read. In two experiments, each involving 40 fluent readers, we compared visual lexical decision on Korean orthographic forms that would require such a change (C stimuli) or not (NC stimuli). We found that NC words were accepted faster than C words, and C nonwords were rejected faster than NC nonwords. The results suggest that phoneme-to-phoneme transformations involved in uttering a word may also be involved in visually identifying the word.

20.
An essential function of language processing is serial order control. Computational models of serial ordering and empirical data suggest that plan representations for ordered output of sound are governed by principles related to similarity. Among these principles, the temporal distance and edge principles at a within-word level have not been empirically demonstrated separately from other principles. Specifically, the temporal distance principle assumes that phonemes that are in the same word and thus temporally close are represented similarly. This principle would manifest as phoneme movement errors within the same word. However, such errors are rarely observed in English, likely reflecting stronger effects of syllabic constraints (i.e., phonemes in different positions within the syllable are distinctly represented). The edge principle assumes that the edges of a sequence are represented distinctly from other elements/positions. This principle has been repeatedly observed as a serial position effect in the context of phonological short-term memory. However, it has not been demonstrated in single-word production. This study provides direct evidence for the two abovementioned principles by using a speech-error induction technique to show the exchange of adjacent morae and serial position effects in Japanese four-mora words. Participants repeatedly produced a target word or nonword, immediately after hearing an aurally presented distractor word. The phonologically similar distractor words, which were created by exchanging adjacent morae in the target, induced adjacent-mora-exchange errors, demonstrating the within-word temporal distance principle. There was also a serial position effect in error rates, such that errors were mostly induced at the middle positions within a word. The results provide empirical evidence for the temporal distance and edge principles in within-word serial order control.
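
The distractor construction is straightforward to reproduce. A minimal sketch that treats a word as a list of morae and exchanges each adjacent pair, as in the error-induction technique described above (the target here, ka-mi-so-ri, is simply an example four-mora word):

```python
def adjacent_mora_exchanges(morae):
    """All distractors obtained by exchanging one adjacent pair of
    morae in the target, as in the error-induction technique."""
    out = []
    for i in range(len(morae) - 1):
        d = list(morae)
        d[i], d[i + 1] = d[i + 1], d[i]
        out.append("".join(d))
    return out

# Example four-mora target: ka-mi-so-ri
print(adjacent_mora_exchanges(["ka", "mi", "so", "ri"]))
# ['mikasori', 'kasomiri', 'kamiriso']
```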
