Similar Articles
20 similar articles found.
1.
Phonological priming effects were examined in an auditory single-word shadowing task. In 6 experiments, target items were preceded by auditorily or visually presented, phonologically similar, word or nonword primes. Results revealed facilitation in response time when a target was preceded by a word or nonword prime having the same initial phoneme when the prime was auditorily presented but not when it was visually presented. In addition, modality-independent interference was observed when the phonological overlap between the prime and target increased from 1 to 3 phonemes for word primes but not for nonword primes. Taken together, these studies suggest that phonological information facilitates word recognition as a result of excitation at a prelexical level and increases response time as a result of competition at a lexical level. These processes are best characterized by connectionist models of word recognition.

2.
The locus of semantic priming effects was examined by measuring onset and rime durations as well as response latencies of words with consistent and inconsistent pronunciations, using the postvocalic naming task. We found that the effect of a semantic prime on naming duration was localized rather than spread across the entire word; onset durations were shorter in the related condition than in the unrelated condition, but rime durations were equal in the two prime conditions. Moreover, the priming effect on onset durations was larger for words with inconsistent than for those with consistent pronunciations. These duration results cannot be accounted for by previous proposals, but they can be accounted for by models in which phonemes are activated in parallel rather than serially from left to right and in which motor programs are based on phonemes rather than syllables. Contrary to previous reports of an interaction of prime and regularity (a factor closely related to consistency) on naming latency, we found no interaction of prime and consistency on response latency. We argue that this conflict is only apparent and arises because naming latency conflates response latency and initial phoneme duration for some targets.

3.
A series of studies was undertaken to examine how rate normalization in speech perception would be influenced by the similarity, duration, and phonotactics of phonemes that were adjacent to, or distal from, the initial target phoneme. The duration of the adjacent (following) phoneme always had an effect on perception of the initial target. Neither phonotactics nor acoustic similarity seemed to have any influence on this rate normalization effect. However, effects of the duration of the nonadjacent (distal) phoneme were found only when that phoneme was temporally close to the target. These results suggest that there is a temporal window over which rate normalization occurs. In most cases, only the adjacent phoneme or adjacent two phonemes will fall within this window and thus influence perception of a phoneme distinction.

4.
In five experiments, we examined lexical competition effects using the phonological priming paradigm in a shadowing task. Experiments 1A and 1B replicate and extend Slowiaczek and Hamburger's (1992) observation that inhibitory effects occur when the prime and the target share the first three phonemes (e.g., /bRiz/-/bRik/) but not when they share the first two phonemes (e.g., /bRɛz/-/bRik/). This observation suggests that lexical competition depends on the length of the phonological match between the prime and the target. However, Experiment 2 revealed that an overlap of two phonemes is sufficient to cause an inhibitory effect provided that the primes mismatched the targets only on the last phoneme (e.g., /bɔl/-/bɔt/). Conversely, with a three-phoneme overlap, no inhibition was observed in Experiment 3 when the primes mismatched the targets on the last two phonemes (e.g., /bagɛt/-/bagaʒ/). In Experiment 4, an inhibitory effect was again observed when the primes mismatched the targets on the last phoneme but not when they mismatched the targets on the last two phonemes when the time between the offset of overlapping segments in the primes and the onset of overlapping segments in the targets was controlled for. The data thus indicate that what essentially determines prime-target competition effects in word-form priming is the number of mismatching phonemes.
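The measure this abstract settles on, the number of mismatching phonemes between prime and target, is easy to make concrete. The following is a minimal sketch, assuming a toy transcription in which each character stands for one phoneme (the strings below are simplified stand-ins for the French items, not real IPA):

```python
def mismatching_phonemes(prime, target):
    """Count positions at which prime and target phonemes differ.

    Strings are compared position by position; any difference in
    length also counts as mismatches.
    """
    mismatches = sum(1 for p, t in zip(prime, target) if p != t)
    return mismatches + abs(len(prime) - len(target))

# One mismatching (final) phoneme: inhibition was observed.
print(mismatching_phonemes("bOl", "bOt"))      # -> 1
# Two mismatching (final) phonemes: no inhibition was observed.
print(mismatching_phonemes("bagEt", "bagaZ"))  # -> 2
```

On this view the prime-target relation is summarized by a single integer, which is what the authors argue drives the competition effect.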

5.
Phonemic restoration is a powerful auditory illusion that arises when a phoneme is removed from a word and replaced with noise, resulting in a percept that sounds like the intact word with a spurious bit of noise. It is hypothesized that the configurational properties of the word impair attention to the individual phonemes and thereby induce perceptual restoration of the missing phoneme. If so, this impairment might be unlearned if listeners can process individual phonemes within a word selectively. Subjects received training with the potentially restorable stimuli (972 trials with feedback); in addition, the presence or absence of an attentional cue, contained in a visual prime preceding each trial, was varied between groups of subjects. Cuing the identity and location of the critical phoneme of each test word allowed subjects to attend to the critical phoneme, thereby inhibiting the illusion, but only when the prime also identified the test word itself. When the prime provided only the identity or location of the critical phoneme, or only the identity of the word, subjects performed identically to those subjects for whom the prime contained no information at all about the test word. Furthermore, training did not produce any generalized learning about the types of stimuli used. A limited interactive model of auditory word perception is discussed in which attention operates through the lexical level.

6.
In three experiments, we examined priming effects where primes were formed by transposing the first and last phoneme of tri-phonemic target words (e.g., /byt/ as a prime for /tyb/). Auditory lexical decisions were found not to be sensitive to this transposed-phoneme priming manipulation in long-term priming (Experiment 1), with primes and targets presented in two separated blocks of stimuli and with unrelated primes used as the control condition (/mul/-/tyb/), while a long-term repetition priming effect was observed (/tyb/-/tyb/). However, a clear transposed-phoneme priming effect was found in two short-term priming experiments (Experiments 2 and 3), with primes and targets presented in close temporal succession. The transposed-phoneme priming effect was found when unrelated prime-target pairs (/mul/-/tyb/) were used as the control and, more importantly, when prime-target pairs sharing the medial vowel (/pys/-/tyb/) served as the control condition, thus indicating that the effect is not due to vocalic overlap. Finally, in Experiment 3, a transposed-phoneme priming effect was found when primes sharing the medial vowel plus one consonant in an incorrect position with the targets (/byl/-/tyb/) served as the control condition, and this condition did not differ significantly from the vowel-only condition. Altogether, these results provide further evidence for a role for position-independent phonemes in spoken word recognition, such that a phoneme at a given position in a word also provides evidence for the presence of words that contain that phoneme at a different position.

7.
Resyllabification is a phonological process in which consonants are attached to syllables other than those from which they originally came. In four experiments, we investigated whether resyllabified words, such as "my bike is" pronounced as "mai.bai.kis," are more difficult to recognize than nonresyllabified words. Using a phoneme-monitoring task, we found that phonemes in resyllabified words were detected more slowly than those in nonresyllabified words. This difference increased when recognition of the carrier word was made more difficult. Acoustic differences between the target words themselves could not account for the results, because cross-splicing the resyllabified and nonresyllabified carrier words did not change the pattern. However, when nonwords were used as carriers, the effect disappeared. It is concluded that resyllabification increases the lexical-processing demands, which then interfere with phoneme monitoring.

8.
The processing of a target chord depends on the previous musical context in which it has appeared. This harmonic priming effect occurs for fine syntactic-like changes in context and is observed irrespective of the extent of participants' musical expertise (Bigand & Pineau, Perception and Psychophysics, 59 (1997) 1098). The present study investigates how the harmonic context influences the processing of phonemes in vocal music. Eight-chord sequences were presented to participants. The four notes of each chord were played with synthetic phonemes and participants were required to quickly decide whether the last chord (the target) was sung on a syllable containing the phoneme /i/ or /u/. The musical relationship of the target chord to the previous context was manipulated so that the target chord acted as a referential tonic chord or as a congruent but less structurally important subdominant chord. Phoneme monitoring was faster for the tonic chord than for the subdominant chord. This finding has several implications for music cognition and speech perception. It also suggests that musical and phonemic processing interact at some stage of processing.

9.
In three experiments, the processing of words that had the same overall number of neighbors but varied in the spread of the neighborhood (i.e., the number of individual phonemes that could be changed to form real words) was examined. In an auditory lexical decision task, a naming task, and a same-different task, words in which changes at only two phoneme positions formed neighbors were responded to more quickly than words in which changes at all three phoneme positions formed neighbors. Additional analyses ruled out an account based on the computationally derived uniqueness points of the words. Although previous studies (e.g., Luce & Pisoni, 1998) have shown that the number of phonological neighbors influences spoken word recognition, the present results show that the nature of the relationship of the neighbors to the target word (as measured by the spread of the neighborhood) also influences spoken word recognition. The implications of this result for models of spoken word recognition are discussed.
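The spread measure defined in this abstract, the number of phoneme positions at which a single substitution yields a real word, can be sketched directly. This is a toy illustration with an invented mini-lexicon and one-character-per-phoneme spellings, not the CELEX-style lexicon the studies would actually use:

```python
def neighbors_by_position(word, lexicon, inventory):
    """Map each position in `word` to the real words formed by
    substituting a single phoneme at that position."""
    hits = {}
    for i in range(len(word)):
        alts = [word[:i] + p + word[i + 1:]
                for p in inventory
                if p != word[i] and word[:i] + p + word[i + 1:] in lexicon]
        if alts:
            hits[i] = alts
    return hits

def spread(word, lexicon, inventory):
    """Neighborhood spread: how many positions yield at least one neighbor."""
    return len(neighbors_by_position(word, lexicon, inventory))

# Hypothetical mini-lexicon of tri-phonemic "words":
lexicon = {"kat", "bat", "kit", "kad", "dog"}
inventory = "abcdefghijklmnopqrstuvwxyz"
print(spread("kat", lexicon, inventory))  # substitutions at all 3 positions form words -> 3
```

Two words can tie on total neighbor count while differing on this measure, which is exactly the contrast the experiments exploit.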

10.
Subjects monitored for the syllable-initial phonemes /b/ and /s/, as well as for the syllables containing those phonemes, in lists of nonsense syllables. Time to detect /b/ was a function of the amount of uncertainty as to the identity of the vowel following the target consonant; when uncertainty was low, no difference existed between phoneme and syllable monitoring latencies, but when uncertainty was high, syllables were detected faster than phonemes. Time to detect /s/ was independent of uncertainty concerning the accompanying vowel and was always slower than syllable detection. The role of knowledge of contexts in a phoneme-monitoring task as well as the relative availability of phonemic information to the listener in this task are discussed.

11.
An attempt was made to examine the manner in which consonants and vowels are coded in short-term memory under identical recall conditions. Ss were presented with sequences of consonant-vowel digrams for serial recall. Sequences were composed of randomly presented consonants paired with /a/ or randomly presented vowels paired with /d/. Halle's distinctive feature system was used to generate predictions concerning the frequency of intrusion errors among phonemes. These predictions were based on the assumption that phonemes are discriminated in memory in terms of their component distinctive features, so that intrusions should most frequently occur between phonemes sharing similar distinctive features. The analysis of intrusion errors revealed that each consonant and vowel phoneme was coded in short-term memory by a particular combination of distinctive features which differed from one phoneme to another. A given phoneme was coded by the same set of distinctive features regardless of the number of syllables in the sequence. However, distinctive feature theories were not able to predict the frequency of intrusion errors for phonemes presented in the middle serial positions of a sequence with 100% accuracy. The results of the experiment support the notion that consonant and vowel phonemes are coded in a similar manner in STM and that this coding involves the retention of a specific set of distinctive features for each phoneme.
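The prediction logic described above, that intrusions should occur most often between phonemes sharing the most distinctive features, can be sketched as a feature-overlap score. The feature table below is a deliberately tiny, hypothetical illustration, not Halle's actual system:

```python
# Hypothetical binary feature values for four consonants
# (illustrative only; a real analysis would use a full feature system).
FEATURES = {
    "p": {"voiced": 0, "nasal": 0, "labial": 1},
    "b": {"voiced": 1, "nasal": 0, "labial": 1},
    "m": {"voiced": 1, "nasal": 1, "labial": 1},
    "t": {"voiced": 0, "nasal": 0, "labial": 0},
}

def shared_features(a, b):
    """Number of distinctive features on which two phonemes agree."""
    fa, fb = FEATURES[a], FEATURES[b]
    return sum(1 for f in fa if fa[f] == fb[f])

# The assumption tested in the study: higher overlap -> more intrusions.
# Here /p/ overlaps more with /b/ (2 features) than with /m/ (1 feature),
# so /b/ should intrude on /p/ more often than /m/ does.
print(shared_features("p", "b"))  # -> 2
print(shared_features("p", "m"))  # -> 1
```

Ranking candidate intruders by this score is, in miniature, how the feature-based predictions for the intrusion-error analysis would be generated.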

12.
The purpose of the current study was to examine blending and segmenting of phonemes as an instance of small, textual response classes that students learn to combine to produce whole word reading. Using an A/B/A/B design, a phoneme segmenting and blending condition that included differential reinforcement for response classes at the level of phonemes was compared with a control condition that was equated for differential reinforcement of reading words and opportunities to respond. The critical difference between conditions was the size of the responses that were brought under stimulus control (phonemes versus whole words). Findings clearly supported the superiority of the phoneme blending treatment condition over the control condition in producing generalized increases in word reading. The results are discussed in terms of the behavioral mechanisms that govern early literacy behaviors and the essential role that targeting measured increases in academic responses plays in furthering our understanding of how to improve the analysis and instruction of students who need to learn these important skills.

13.
To write a language, one must first abstract the unit to be used from the acoustic stream of speech. Writing systems based on the meaningless units, syllables and phonemes, were late developments in the history of written language. The alphabetic system, which requires abstraction of the phonemic unit of speech, was the last to appear, evolved from a syllabary and, unlike the other systems, was apparently invented only once. It might therefore be supposed that phoneme segmentation is particularly difficult and more difficult, indeed, than syllable segmentation. Speech research suggests reasons why this may be so. The present study provides direct evidence of a similar developmental ordering of syllable and phoneme segmentation abilities in the young child. By means of a task which required preschool, kindergarten, and first-grade children to tap out the number of segments in spoken utterances, it was found that, though ability in both syllable and phoneme segmentation increased with grade level, analysis into phonemes was significantly harder and perfected later than analysis into syllables. The relative difficulties of the different units of segmentation are discussed in relation to reading acquisition.

14.
In this paper, we propose a new version of the phoneme monitoring task that is well-suited for the study of lexical processing. The generalized phoneme monitoring (GPM) task, in which subjects detect target phonemes appearing anywhere in the test words, was shown to be sensitive to associative context effects. In Experiment 1, using the standard phoneme monitoring procedure in which subjects detect only word-initial targets, no effect of associative context was obtained. In contrast, clear context effects were observed in Experiment 2, which used the GPM task. Subjects responded faster to word-initial and word-medial targets when the target-bearing words were preceded by an associatively related word than when preceded by an unrelated one. The differential effect of context in the two versions of the phoneme monitoring task was interpreted with reference to task demands and their role in directing selective attention. Experiment 3 showed that the size of the context effect was unaffected by the proportion of related words in the experiment, suggesting that the observed effects were not due to subject strategies.

15.
Phonemes play a central role in traditional theories as units of speech perception and access codes to lexical representations. Phonemes have two essential properties: they are 'segment-sized' (the size of a consonant or vowel) and abstract (a single phoneme may have different acoustic realisations). Nevertheless, there is a long history of challenging the phoneme hypothesis, with some theorists arguing for differently sized phonological units (e.g. features or syllables) and others rejecting abstract codes in favour of representations that encode detailed acoustic properties of the stimulus. The phoneme hypothesis is the minority view today. We defend the phoneme hypothesis in two complementary ways. First, we show that rejection of phonemes is based on a flawed interpretation of empirical findings. For example, it is commonly argued that the failure to find acoustic invariances for phonemes rules out phonemes. However, the lack of invariance is only a problem on the assumption that speech perception is a bottom-up process. If learned sublexical codes are modified by top-down constraints (which they are), then this argument loses all force. Second, we provide strong positive evidence for phonemes on the basis of linguistic data. Almost all findings that are taken (incorrectly) as evidence against phonemes are based on psycholinguistic studies of single words. However, phonemes were first introduced in linguistics, and the best evidence for phonemes comes from linguistic analyses of complex word forms and sentences. In short, the rejection of phonemes is based on a false analysis and a too-narrow consideration of the relevant data.

16.
This study tested the hypothesis that the sonority of phonemes (a sound's relative loudness compared to other sounds with the same length, stress, and pitch) influences children's segmentation of syllable constituents. Two groups of children, first graders and preschoolers, were assessed for their awareness of phonemes in coda and onset positions, respectively, using different phoneme segmentation tasks. Although the trends for the first graders were more robust than the trends for the preschoolers, phoneme segmentation in the two groups correlated with the sonority levels of phonemes, regardless of phoneme position or task. These results, consistent with prior studies of adults, suggest that perceptual properties, such as sonority levels, greatly influence the development of phoneme awareness.

17.
Ss classified as quickly as possible stimuli back-projected one at a time on a small screen by pressing one of two levers in response to each stimulus, according to the levels of a single specified binary stimulus dimension. Stimuli were rectangles varying in height alone, in width alone, or in both dimensions, in either a correlated or an orthogonal fashion. Stimuli followed responses by a fixed interval of 82, 580, or 1,080 msec. Response time was longer when both dimensions varied orthogonally than when only one dimension varied, indicating that Ss were unable to avoid perceiving the rectangle figures as wholes. Repeated stimuli were responded to more quickly than stimuli which were different from the immediately preceding stimulus in all conditions. With orthogonally combined dimensions, response time to stimulus repetitions was lowest, increased when the stimulus changed while the response was repeated, and increased still further when both the stimulus and the response changed. Increasing the time interval between stimuli decreased response time for nonrepetitions, while response time for repetitions was relatively unaffected. The results were discussed in terms of two models of serial choice reaction time.

18.
Ten left hemisphere and 10 right hemisphere CVA patients were required to monitor for the target phoneme /b/ in a series of monosyllabic words, and a target timbre in a series of eight different timbres. Left hemisphere damaged aphasic patients were more accurate for target timbres over phonemes; the reverse pattern was found in the nonaphasic right hemisphere patients.

19.
The place of articulation of intervocalic stop consonants is conveyed by temporally distributed spectral information, viz., the formant transitions preceding and following the silent closure interval (VC and CV transitions). Experiment 1 shows that more than 200 msec of silent closure is needed to hear VC and CV formant transitions as separate phonemic events (geminate stops). As closure duration is reduced, these cues are integrated into a single phonemic percept, and the VC transitions become increasingly redundant (Experiments 2 and 3). VC and CV transitions conveying different places of articulation, on the other hand, are heard as separate phonemes at closure durations as short as 100 msec. If closure duration is further reduced, a single stop is heard whose place of articulation corresponds to the CV transitions (Experiment 3). Even in the absence of CV transitions, VC transitions carry little perceptual weight at very short closure durations (Experiment 4). Despite their apparent redundancy, however, the VC transitions exert a positive bias on the perception of CV transitions at very short closure durations. At closure durations beyond 100 msec, on the other hand, VC and CV transitions interact contrastively in perception and tend to be heard as different phonemes (Experiments 5 and 6). The results of these experiments suggest two different processes of temporal integration in phonetic perception, one taking place at a precategorical level, the other combining identical phoneme categories within a certain time span.

20.
We investigated whether musical competence was associated with the perception of foreign-language phonemes. The sample comprised adult native-speakers of English who varied in music training. The measures included tests of general cognitive abilities, melody and rhythm perception, and the perception of consonantal contrasts that were phonemic in Zulu but not in English. Music training was associated positively with performance on the tests of melody and rhythm perception, but not with performance on the phoneme-perception task. In other words, we found no evidence for transfer of music training to foreign-language speech perception. Rhythm perception was not associated with the perception of Zulu clicks, but such an association was evident when the phonemes sounded more similar to English consonants. Moreover, it persisted after controlling for general cognitive abilities and music training. By contrast, there was no association between melody perception and phoneme perception. The findings are consistent with proposals that music- and speech-perception rely on similar mechanisms of auditory temporal processing, and that this overlap is independent of general cognitive functioning. They provide no support, however, for the idea that music training improves speech perception.


Copyright © Beijing Qinyun Technology Development Co., Ltd.  京ICP备09084417号