Similar articles
1.
Previous experimental psycholinguistic studies have suggested that probabilistic phonotactic information may cue the locations of word boundaries in continuous speech, offering one answer to the empirical question of how we recognize and segment individual spoken words. The present study investigated this issue using Cantonese as a test case. In a word-spotting task, listeners were instructed to spot any Cantonese word embedded in a series of nonsense sound sequences. Native Cantonese listeners found it easier to spot target words in nonsense sequences with high-transitional-probability phoneme combinations than in sequences with low-transitional-probability combinations. These results indicate that native Cantonese listeners do make use of transitional probability information to recognize spoken words in speech.
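For readers unfamiliar with the measure, the sketch below shows how forward transitional probabilities between adjacent phonemes can be computed from a corpus. The toy transcriptions, counts, and the boundary heuristic are invented purely for illustration; they are not the stimuli or analysis of the study above.

```python
from collections import Counter

# Toy corpus of phoneme-level transcriptions (one utterance per list).
# These sequences are made up purely for illustration.
corpus = [
    ["k", "a", "m", "j", "a", "t"],
    ["s", "i", "k", "f", "a", "n"],
    ["k", "a", "m", "s", "i", "k", "f", "a", "n"],
]

unigram = Counter()
bigram = Counter()
for utterance in corpus:
    unigram.update(utterance[:-1])                  # positions that have a successor
    bigram.update(zip(utterance, utterance[1:]))    # adjacent phoneme pairs

def transitional_probability(a, b):
    """Forward TP: probability that phoneme `a` is followed by phoneme `b`."""
    return bigram[(a, b)] / unigram[a] if unigram[a] else 0.0

# A dip in TP between adjacent phonemes is a plausible cue to a word boundary.
test = ["k", "a", "m", "s", "i", "k"]
for a, b in zip(test, test[1:]):
    print(f"P({b} | {a}) = {transitional_probability(a, b):.2f}")
```

Low-probability junctures in the printout mark the points where a boundary-detection heuristic of this kind would posit a word break.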

2.
The question of whether Dutch listeners rely on the rhythmic characteristics of their native language to segment speech was investigated in three experiments. In Experiment 1, listeners were induced to make missegmentations of continuous speech. The results showed that word boundaries were inserted before strong syllables and deleted before weak syllables. In Experiment 2, listeners were required to spot real CVC or CVCC words (C = consonant, V = vowel) embedded in bisyllabic nonsense strings. For CVCC words, fewer errors were made when the second syllable of the nonsense string was weak rather than strong, whereas for CVC words the effect was reversed. Experiment 3 ruled out an acoustic explanation for this effect. It is argued that these results are in line with an account in which both metrical segmentation and lexical competition play a role.

3.
The study is based on an on-line investigation of spoken language comprehension processes in 25 French-speaking aphasics using a syllable-monitoring task. Nonsense syllables were presented in three different conditions: context-free (embedded in strings of nonsense syllables), lexical context (where the target syllable is the initial, medial, or final syllable of real three-syllable words), and sentence context. This study builds on an earlier one that explored the relationship between the acoustic-phonetic, lexical, and sentential levels of spoken language processing in normal French speakers and gave evidence of top-down lexical and sentential influence on syllable recognition. In the present study, aphasic patients from various diagnostic categories were classified as high (N = 13) or low (N = 12) comprehenders. The results show that low-comprehending aphasics make no use of sentence information in the syllable-recognition task. As for the top-down effect at the single-word level that is observed in normal listeners, a subgroup analysis shows that Broca's aphasics are the only high-comprehending aphasics who perform in the same way as normal listeners; this sets them apart from the anomic and conduction aphasics.

4.
Five word-spotting experiments explored the role of consonantal and vocalic phonotactic cues in the segmentation of spoken Italian. The first set of experiments tested listeners' sensitivity to phonotactic constraints cueing syllable boundaries. Participants were slower in spotting words in nonsense strings when target onsets were misaligned (e.g., lago in ri.blago) than when they were aligned (e.g., lago in rin.lago) with phonotactically determined syllabic boundaries. This effect also held for sequences that occur only word-medially (e.g., /tl/ in ri.tlago), and competition effects could not account for the disadvantage in the misaligned condition. Similarly, target detections were slower when their offsets were misaligned (e.g., città in cittàu.ba) than when they were aligned (e.g., città in città.oba) with a phonotactic syllabic boundary. The second set of experiments tested listeners' sensitivity to phonotactic cues that specifically signal lexical (and not just syllabic) boundaries. The results corroborate the role of syllabic information in speech segmentation and suggest that Italian listeners make little use of additional phonotactic information that specifically cues word boundaries.

5.
The possible-word constraint (PWC; Norris, McQueen, Cutler, & Butterfield, 1997) has been proposed as a language-universal segmentation principle: Lexical candidates are disfavoured if the resulting segmentation of continuous speech leaves vowelless residues in the input, for example single consonants. Three word-spotting experiments investigated segmentation in Slovak, a language with single-consonant words and fixed stress. In Experiment 1, Slovak listeners detected real words such as ruka "hand" embedded in prepositional-consonant contexts (e.g., /gruka/) faster than in nonprepositional-consonant contexts (e.g., /truka/), and slowest of all in syllable contexts (e.g., /dugruka/). The second experiment controlled for effects of stress. Responses were still fastest in prepositional-consonant contexts, but were now slowest in nonprepositional-consonant contexts. In Experiment 3, the lexical and syllabic status of the contexts was manipulated. Responses were again slowest in nonprepositional-consonant contexts but equally fast in prepositional-consonant, prepositional-vowel, and nonprepositional-vowel contexts. These results suggest that Slovak listeners use fixed stress and the PWC to segment speech, but that single consonants that can be words have a special status in Slovak segmentation. Knowledge about what constitutes a phonologically acceptable word in a given language therefore determines whether vowelless stretches of speech are or are not treated as acceptable parts of the lexical parse.

6.
This study demonstrates that listeners use lexical knowledge in perceptual learning of speech sounds. Dutch listeners first made lexical decisions on Dutch words and nonwords. The final fricative of 20 critical words had been replaced by an ambiguous sound, between [f] and [s]. One group of listeners heard ambiguous [f]-final words (e.g., [WItlo?], from witlof, chicory) and unambiguous [s]-final words (e.g., naaldbos, pine forest). Another group heard the reverse (e.g., ambiguous [na:ldbo?], unambiguous witlof). Listeners who had heard [?] in [f]-final words were subsequently more likely to categorize ambiguous sounds on an [f]-[s] continuum as [f] than those who heard [?] in [s]-final words. Control conditions ruled out alternative explanations based on selective adaptation and contrast. Lexical information can thus be used to train categorization of speech. This use of lexical information differs from the on-line lexical feedback embodied in interactive models of speech perception. In contrast to on-line feedback, lexical feedback for learning is of benefit to spoken word recognition (e.g., in adapting to a newly encountered dialect).

7.
We propose that word recognition in continuous speech is subject to constraints on what may constitute a viable word of the language. This Possible-Word Constraint (PWC) reduces activation of candidate words if their recognition would imply word status for adjacent input which could not be a word, for instance a single consonant. In two word-spotting experiments, listeners found it much harder to detect apple, for example, in fapple (where [f] alone would be an impossible word) than in vuffapple (where vuff could be a word of English). We demonstrate that the PWC can readily be implemented in a competition-based model of continuous speech recognition, as a constraint on the process of competition between candidate words; where a stretch of speech between a candidate word and a (known or likely) word boundary is not a possible word, activation of the candidate word is reduced. This implementation accurately simulates both the present results and data from a range of earlier studies of speech segmentation.
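As a purely illustrative aid, the sketch below implements the gist of such a constraint in a toy best-path segmenter: any stretch of input that is neither a known word nor a possible word (here, a vowelless chunk) is heavily penalised. The lexicon, scores, and penalty values are invented assumptions, and this is not the authors' competition model, only a minimal demonstration of the idea.

```python
# Toy illustration of the Possible-Word Constraint (PWC) idea: when scoring
# segmentations of continuous input, penalise any chunk that could not itself
# be a word because it contains no vowel (e.g. a stranded [f]).
VOWELS = set("aeiou")
LEXICON = {"apple": 5.0, "vuff": 3.0}   # hypothetical word scores

PWC_PENALTY = -4.0      # heavy cost for a vowelless (impossible-word) residue
RESIDUE_COST = -1.0     # mild cost for any out-of-lexicon residue

def chunk_score(chunk):
    if chunk in LEXICON:
        return LEXICON[chunk]
    # Residue: tolerated if it contains a vowel, heavily penalised otherwise.
    return RESIDUE_COST if any(c in VOWELS for c in chunk) else PWC_PENALTY

def best_parse(s):
    """Best-scoring segmentation of `s` into lexicon words and residues."""
    best = {0: (0.0, [])}
    for end in range(1, len(s) + 1):
        candidates = []
        for start in range(end):
            score, parse = best[start]
            chunk = s[start:end]
            candidates.append((score + chunk_score(chunk), parse + [chunk]))
        best[end] = max(candidates)
    return best[len(s)]

print(best_parse("fapple"))     # 'apple' next to a vowelless residue 'f'
print(best_parse("vuffapple"))  # 'apple' next to a possible-word residue 'vuff'
```

With these made-up scores, the parse containing apple receives far weaker support in fapple than in vuffapple, which is the qualitative pattern the PWC is meant to capture.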

8.
Two experiments investigated speech motor planning in aphasia by contrasting the degree of labial and lingual anticipatory coarticulation evident in normal subjects' speech with that found in the speech of aphasic subjects. In the first experiment, Linear Predictive Coding (LPC) analyses were conducted for the initial consonants of CV [si su ti tu ki ku] and CCV [sti stu ski sku] productions by 6 normal and 10 aphasic (5 anterior, 5 posterior) subjects. For normal subjects' productions, reliable coarticulatory shift was found for almost all measurements, indicating that acoustic correlates for anticipatory coarticulation obtain for [s], [t], and [k] in a prevocalic environment, as well as when [s] is the initial consonant of a CCV syllable. The data for the aphasic subjects were statistically indistinguishable from those of the normal subject group, and there were no differences noted as a function of aphasia type. In the second experiment, a subset of the consonantal stimuli produced by the normal and aphasic subjects was presented to a group of 10 naive listeners for a vowel identification task. Listeners were able to identify the productions of all subjects at a level well above chance. In addition, small but statistically significant group differences were observed, with the [sV], [skV], and [tV] productions by anterior aphasics showing significantly lower perceptual scores than those of normal subjects.
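For orientation, the sketch below shows one standard way such spectral measurements are made: autocorrelation LPC (Levinson-Durbin recursion) over a windowed frame, with formant-like resonances read off the angles of the LPC poles. The analysis order, window, sampling rate, and the synthetic test frame are illustrative assumptions, not the authors' exact procedure.

```python
import numpy as np

def lpc_coefficients(frame, order):
    """LPC via the autocorrelation method (Levinson-Durbin recursion)."""
    r = np.correlate(frame, frame, mode="full")[len(frame) - 1:len(frame) + order]
    a = np.zeros(order + 1)
    a[0] = 1.0
    err = r[0]
    for i in range(1, order + 1):
        acc = r[i] + np.dot(a[1:i], r[i - 1:0:-1])
        k = -acc / err
        a_new = a.copy()
        for j in range(1, i):
            a_new[j] = a[j] + k * a[i - j]
        a_new[i] = k
        a = a_new
        err *= (1.0 - k * k)
    return a

def formant_estimates(frame, sr, order=12):
    """Rough formant frequencies from the angles of the LPC poles."""
    windowed = frame * np.hamming(len(frame))
    a = lpc_coefficients(windowed, order)
    roots = np.roots(a)
    roots = roots[np.imag(roots) > 0]           # keep one root per conjugate pair
    freqs = np.angle(roots) * sr / (2 * np.pi)  # pole angle -> frequency in Hz
    return np.sort(freqs)

# Illustrative use on a synthetic frame; a real analysis would use recorded speech.
rng = np.random.default_rng(0)
sr = 10000
t = np.arange(int(0.03 * sr)) / sr
frame = (np.sin(2 * np.pi * 500 * t)
         + 0.5 * np.sin(2 * np.pi * 1500 * t)
         + 0.01 * rng.standard_normal(t.size))
print(formant_estimates(frame, sr))
```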

9.
In three experiments, we determined how perception of the syllable-initial distinction between the stop consonant [b] and the semivowel [w], when cued by duration of formant transitions, is affected by parts of the sound pattern that occur later in time. For the first experiment, we constructed four series of syllables, similar in that each had initial formant transitions ranging from one short enough for [ba] to one long enough for [wa], but different in overall syllable duration. The consequence in perception was that, as syllable duration increased, the [b-w] boundary moved toward transitions of longer duration. Then, in the second experiment, we increased the duration of the sound by adding a second syllable, [da] (thus creating [bada-wada]), and observed that lengthening the second syllable also shifted the perceived [b-w] boundary in the first syllable toward transitions of longer duration; however, this effect was small by comparison with that produced when the first syllable was lengthened equivalently. In the third experiment, we found that altering the structure of the syllable had an effect that is not to be accounted for by the concomitant change in syllable duration: lengthening the syllable by adding syllable-final transitions appropriate for the stop consonant [d] (thus creating [bad-wad]) caused the perceived [b-w] boundary to shift toward transitions of shorter duration, an effect precisely opposite to that produced when the syllable was lengthened to the same extent by adding steady-state vowel. We suggest that, in all these cases, the later-occurring information specifies rate of articulation and that the effect on the earlier-occurring cue reflects an appropriate perceptual normalization.
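A category boundary of this kind is typically quantified as the 50% crossover of a psychometric function fitted to the identification data. The sketch below (assuming NumPy and SciPy are available) fits a logistic function to invented [b]-[w] identification proportions for a shorter and a longer syllable; the numbers are made up solely to illustrate the computation and are not data from the study.

```python
import numpy as np
from scipy.optimize import curve_fit

def logistic(x, x0, k):
    """Proportion of [w] responses as a function of transition duration (ms)."""
    return 1.0 / (1.0 + np.exp(-k * (x - x0)))

# Invented identification data: transition duration (ms) vs. proportion of
# "w" responses, for a short and a long overall syllable duration.
durations = np.array([16, 24, 32, 40, 48, 56, 64], dtype=float)
p_w_short_syllable = np.array([0.02, 0.10, 0.45, 0.80, 0.95, 0.98, 1.00])
p_w_long_syllable = np.array([0.00, 0.03, 0.15, 0.45, 0.78, 0.95, 0.99])

for label, p in [("short syllable", p_w_short_syllable),
                 ("long syllable", p_w_long_syllable)]:
    (x0, k), _ = curve_fit(logistic, durations, p, p0=[40.0, 0.2])
    print(f"{label}: [b]-[w] boundary at about {x0:.1f} ms of transition")
```

With these made-up data, the estimated boundary falls at a longer transition duration for the longer syllable, mirroring the direction of the effect described above.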

10.
The ability of English speakers to monitor internally and externally generated words for syllables was investigated in this paper. An internal speech monitoring task required participants to silently generate a carrier word on hearing a semantically related prompt word (e.g., reveal–divulge). These productions were monitored for prespecified target strings that were either a syllable match (e.g., /dai/), a syllable mismatch (e.g., /daiv/), or unrelated (e.g., /hju:/) to the initial syllable of the word. In all three experiments the longer target sequence was detected faster; however, this tendency reached significance only when the longer string also matched a syllable in the carrier word. External speech versions of each experiment yielded a similar influence of syllabicity, but only when the syllable-match string also had a closed structure. It was concluded that any influence of syllabicity found with either task reflects the properties of a shared perception-based monitoring system.

11.
The speech of ten stutterers was recorded and assessed under three conditions: normal reading, reading while listening to a concurrent metronome click, and reading with a click triggered so that it occurred at the beginning of every syllable. Speech rate and number of disfluencies were measured in each speaking condition. The results indicated that fewer disfluencies occurred (relative to normal reading performance) when a metronome or syllable-initial click was presented. In a perceptual experiment, a group of normal listeners was presented with samples of each stutterer's speech from all three speaking conditions and asked to choose the sample that sounded most natural. Speech recorded in the condition where a click occurred at syllable onset was judged more natural than normal speech or speech recorded while the stutterers heard a metronome click.

12.
We investigated the conditions under which the [b]-[w] contrast is processed in a context-dependent manner, specifically in relation to syllable duration. In an earlier paper, Miller and Liberman (1979) demonstrated that when listeners use transition duration to differentiate [b] from [w], they treat it in relation to the duration of the syllable: As syllables from a [ba]-[wa] series varying in transition duration become longer, so, too, does the transition duration at the [b]-[w] perceptual boundary. In a subsequent paper, Shinn, Blumstein, and Jongman (1985) questioned the generality of this finding by showing that the effect of syllable duration is eliminated for [ba]-[wa] stimuli that are less schematic than those used by Miller and Liberman. In the present investigation, we demonstrated that when these "more natural" stimuli are presented in multitalker babble noise instead of in quiet (as was done by Shinn et al.), the syllable-duration effect emerges. Our findings suggest that the syllable-duration effect in particular, and context effects in general, may play a more important role in speech perception than Shinn et al. suggested.

13.
The present investigation addresses the possible utility of sequential probabilities in the segmentation of spoken language. In a series of five word-spotting and two control lexical decision experiments, high- versus low-probability consonant-vowel (Experiments 1, 2, 5, and 7) and vowel-consonant (Experiments 1, 3, 4, and 6) strings were presented either in the nonsense contexts of target words (Experiments 1–3) or within the target words themselves (Experiments 4–7). The results suggest that listeners, at least for sequences in the onset position, indeed use sequential probabilities as cues for segmentation. The probability of a sound sequence influenced segmentation more when the sequence occurred within the target words (Experiments 4–7 vs. Experiments 1–3). Furthermore, the effects were reliable only when the sequences occurred in the onset position (Experiments 1, 2, 5, and 7 vs. Experiments 1, 3, 4, and 6).

14.
刘文理, 祁志强. 《心理科学》 (Psychological Science), 2016, 39(2): 291-298
Using a priming paradigm, two experiments examined priming effects in the perception of consonant and vowel categories. The primes were pure tones and the target categories themselves; the targets were consonant-category and vowel-category continua. The results showed that the percentage of categorical responses in perceiving the consonant continuum was affected by both pure-tone and speech primes, whereas reaction times for consonant-category perception were affected only by speech primes. For the vowel continuum, the percentage of categorical responses was unaffected by either type of prime, but reaction times for vowel-category perception were affected by speech primes. These results indicate that priming effects differ between consonant and vowel category perception, providing new evidence for a difference in the underlying processing mechanisms of consonants and vowels.

15.
We conducted four experiments to investigate the specificity of perceptual adjustments made to unusual speech sounds. Dutch listeners heard a female talker produce an ambiguous fricative [?] (between [f] and [s]) in [f]- or [s]-biased lexical contexts. Listeners with [f]-biased exposure (e.g., [witlo?]; from witlof, "chicory"; witlos is meaningless) subsequently categorized more sounds on an [ɛf]-[ɛs] continuum as [f] than did listeners with [s]-biased exposure. This occurred when the continuum was based on the exposure talker's speech (Experiment 1), and when the same test fricatives appeared after vowels spoken by novel female and male talkers (Experiments 1 and 2). When the continuum was made entirely from a novel talker's speech, there was no exposure effect (Experiment 3) unless fricatives from that talker had been spliced into the exposure talker's speech during exposure (Experiment 4). We conclude that perceptual learning about idiosyncratic speech is applied at a segmental level and is, under these exposure conditions, talker specific.

16.
Two experiments are reported which attempt to assess the effects of variations in target word, context items and instructions on performance in a visual search task. In Experiment 1, subjects were required to search through context lists of three-letter nonsense syllables (of either high or low association value) for three-letter meaningful target words (of either high or low frequency). They were given either "positive" or "negative" instructions, i.e. were told either to pick out the meaningful word or to pick out the word which was not a nonsense syllable. The results showed that visual search times were significantly influenced by both frequency of target word and association value of context items. A significant interaction was observed between type of instructions and target word frequency. The design of Experiment 2 followed that of Experiment 1, with the exceptions that nonsense syllables now became target items, and meaningful words formed the contexts. Again, nonsense syllable association value and word frequency were found to be critical in determining visual search times.

17.
English exhibits compensatory shortening, whereby a stressed syllable followed by an unstressed syllable is measured to be shorter than the same stressed syllable alone. This anticipatory shortening is much greater than backward shortening, whereby an unstressed syllable is measured to shorten a following stressed syllable. We speculated that measured shortening reflects not true shortening, but coarticulatory hiding. Hence, we asked whether listeners are sensitive to parts of stressed syllables hidden by following or preceding unstressed syllables. In two experiments (Experiments 1A and 1B), we found the point of subjective equality—that is, the durational difference between a stressed syllable in isolation and one followed by an unstressed syllable—at which listeners cannot tell which is longer. In a third experiment (Experiment 2), we found the point of subjective equality for stressed monosyllables and disyllables with a weak-strong stress pattern. In all of the experiments, the points of subjective equality occurred when stressed syllables in disyllables were measured to be shorter than those in monosyllables, as if the listeners heard the coarticulatory onset or the continuation of a stressed syllable within unstressed syllables.

18.
Duplex perception occurs when the phonetically distinguishing transitions of a syllable are presented to one ear and the rest of the syllable (the "base") is simultaneously presented to the other ear. Subjects report hearing both a nonspeech "chirp" and a speech syllable correctly cued by the transitions. In two experiments, we compared phonetic identification of intact syllables, duplex percepts, isolated transitions, and bases. In both experiments, subjects were able to identify the phonetic information encoded into isolated transitions in the absence of an appropriate syllabic context. Also, there was no significant difference in phonetic identification of isolated transitions and duplex percepts. Finally, in the second experiment, the category boundaries from identification of isolated transitions and duplex percepts were not significantly different from each other. However, both boundaries were statistically different from the category boundary for intact syllables. Taken together, these results suggest that listeners do not need to perceptually integrate F2 transitions or F2 and F3 transition pairs with the base in duplex perception. Rather, it appears that listeners identify the chirps as speech without reference to the base.
