Similar Documents
20 similar documents found.
3.
Mintz (2003) found that in English child-directed speech, frequently occurring frames formed by linking the preceding (A) and succeeding (B) word (A_x_B) could accurately predict the syntactic category of the intervening word (x). This has been successfully extended to French (Chemla, Mintz, Bernal, & Christophe, 2009). In this paper, we show that, as for Dutch (Erkelens, 2009), frequent frames in German do not enable such accurate lexical categorization. This can be explained by the characteristics of German including a less restricted word order compared to English or French and the frequent use of some forms as both determiner and pronoun in colloquial German. Finally, we explore the relationship between the accuracy of frames and their potential utility and find that even some of those frames showing high token-based accuracy are of limited value because they are in fact set phrases with little or no variability in the slot position.
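The frequent-frame idea described above is easy to make concrete: collect every word trigram, treat the outer pair (A, B) as a frame, and record which words fill the slot. A minimal sketch follows; the toy corpus, the `top_n` cutoff, and ranking by token frequency are illustrative assumptions, not details taken from the abstract.

```python
from collections import defaultdict, Counter

def frequent_frames(corpus, top_n=3):
    """Collect A_x_B frames: for each word trigram (a, x, b),
    the frame is the outer pair (a, b) and x fills the slot."""
    frames = defaultdict(list)
    for sentence in corpus:
        for a, x, b in zip(sentence, sentence[1:], sentence[2:]):
            frames[(a, b)].append(x)
    # Rank frames by how often they occur (token frequency).
    ranked = sorted(frames.items(), key=lambda kv: len(kv[1]), reverse=True)
    return ranked[:top_n]

# Hypothetical child-directed utterances for illustration.
corpus = [
    "you want to go".split(),
    "you need to eat".split(),
    "you want to sleep".split(),
]
for (a, b), fillers in frequent_frames(corpus):
    print(f"{a}_x_{b}: {Counter(fillers)}")
```

Here the frame `you_x_to` collects only verbs (want, need), which is the sense in which a frequent frame can predict the category of its slot fillers; the German result above says such frames are far less category-pure there.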

4.
Several recent studies by Lehiste have reported that changes in fundamental frequency (F0) can serve as a cue to perceived vowel length and, furthermore, that the perceived lengthening of the vowel can influence perception of the voicing feature of stop consonants in syllable-final position. In Experiment 1, we replicated Lehiste’s basic results for stop consonants in final position. Experiment 2 extended these results to postvocalic fricatives. The final consonant in syllables of intermediate vowel duration was more often perceived as voiced when F0 was falling than when F0 was monotone. In Experiment 3, we examined the F0 contours produced by eight talkers before postvocalic stop consonants and fricatives in natural speech for minimal pairs of words differing in voicing. The amount of change of F0 over the vowel was no greater before voiced than voiceless consonants, suggesting that the earlier perceptual effects cannot be explained by appealing to regularities observed in the production of F0 contours in vowels preceding postvocalic consonants.

5.
Stewart ME, Ota M. Cognition, 2008, 109(1): 157-162.
It has been claimed that Autism Spectrum Disorder (ASD) is characterized by a limited ability to process perceptual stimuli in reference to the contextual information of the percept. Such a connection between a nonholistic processing style and behavioral traits associated with ASD is thought to exist also within the neurotypical population, albeit in a more subtle way. We examined this hypothesis with respect to auditory speech perception, by testing whether the extent to which phonetic categorization shifts to make the percept a known word (i.e., the 'Ganong effect') is weakened as a function of autistic traits in neurotypicals. Fifty-five university students were given the Autism-Spectrum Quotient (AQ) and a segment identification test using two word-to-nonword Voice Onset Time (VOT) continua (kiss-giss and gift-kift). A significant negative correlation was found between the total AQ score and the identification shift that occurred between the continua. The AQ score did not correlate with scores on separately administered VOT discrimination, auditory lexical decision, or verbal IQ, thus ruling out enhanced auditory sensitivity, slower lexical access, or verbal intelligence as explanations of the AQ-related shift in phonetic categorization.

6.
The analysis of syllable and pause durations in speech production can provide information about the properties of a speaker's grammatical code. The present study was conducted to reveal aspects of this code by analyzing syllable and pause durations in structurally ambiguous sentences. In Experiments 1–6, acoustical measurements were made for a key syllabic segment and a following pause for 10 or more speakers. Each of six structural ambiguities, previously unrelated, involved a grammatical relation between the constituent following the pause and one of two possible constituents preceding the pause. The results showed lengthening of the syllabic segments and pauses for the reading in which the constituent following the pause was hierarchically dominated by the higher of the two possible preceding constituents in a syntactic representation. The effects were also observed, to a lesser extent, when the structurally ambiguous sentences were embedded in disambiguating paragraph contexts (Experiment 7). The results show that a single hierarchical principle can provide a unified account of speech timing effects for a number of otherwise unrelated ambiguities. This principle is superior to a linear alternative and provides specific inferences about hierarchical relations among syntactic constituents in speech coding.

7.
Speech perception can be viewed in terms of the listener’s integration of two sources of information: the acoustic features transduced by the auditory receptor system and the context of the linguistic message. The present research asked how these sources were evaluated and integrated in the identification of synthetic speech. A speech continuum between the glide-vowel syllables /ri/ and /li/ was generated by varying the onset frequency of the third formant. Each sound along the continuum was placed in a consonant-cluster vowel syllable after an initial consonant /p/, /t/, /s/, and /v/. In English, both /r/ and /l/ are phonologically admissible following /p/ but are not admissible following /v/. Only /l/ is admissible following /s/ and only /r/ is admissible following /t/. A third experiment used synthetic consonant-cluster vowel syllables in which the first consonant varied between /b/ and /d/ and the second consonant varied between /l/ and /r/. Identification of synthetic speech varying in both acoustic featural information and phonological context allowed quantitative tests of various models of how these two sources of information are evaluated and integrated in speech perception.
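The abstract tests "various models" of cue integration without naming them here. One well-known candidate of this kind is a multiplicative rule in which independent acoustic and contextual supports are combined and then normalized against the competing alternative. The sketch below is an illustrative assumption, not the model the paper necessarily adopts; the numeric support values are invented.

```python
def integrate(acoustic_r, context_r):
    """Multiplicative integration of two independent information sources:
    returns the relative support for responding /r/ over the competitor /l/."""
    support_r = acoustic_r * context_r
    support_l = (1 - acoustic_r) * (1 - context_r)
    return support_r / (support_r + support_l)

# Hypothetical supports: acoustic evidence for /r/ from the F3 onset,
# and phonological context (e.g., /t/ strongly favors /r/, /p/ admits both).
print(integrate(0.5, 0.9))  # ambiguous acoustics, /r/-biasing context
print(integrate(0.5, 0.5))  # neutral context leaves the acoustics in charge
```

The characteristic prediction of such a rule is that context matters most exactly where the acoustics are ambiguous (the middle of the /ri/–/li/ continuum) and least at the endpoints, which is the kind of quantitative pattern the identification data can test.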

9.
The assumption that listeners are unaware of the highly encoded acoustic properties which lead to phoneme identification is questioned in the present study. It was found that some subjects can make use of small differences in voice onset time when making within-category discriminations. Subjects who can use these auditory features do so both implicitly (in a phonetic match task) and deliberately (in a physical match task). Results also indicate that some type of parallel process model is needed to account for the processing of auditory and phonetic information.

10.
The effects of perceptual learning of talker identity on the recognition of spoken words and sentences were investigated in three experiments. In each experiment, listeners were trained to learn a set of 10 talkers’ voices and were then given an intelligibility test to assess the influence of learning the voices on the processing of the linguistic content of speech. In the first experiment, listeners learned voices from isolated words and were then tested with novel isolated words mixed in noise. The results showed that listeners who were given words produced by familiar talkers at test showed better identification performance than did listeners who were given words produced by unfamiliar talkers. In the second experiment, listeners learned novel voices from sentence-length utterances and were then presented with isolated words. The results showed that learning a talker’s voice from sentences did not generalize well to identification of novel isolated words. In the third experiment, listeners learned voices from sentence-length utterances and were then given sentence-length utterances produced by familiar and unfamiliar talkers at test. We found that perceptual learning of novel voices from sentence-length utterances improved speech intelligibility for words in sentences. Generalization and transfer from voice learning to linguistic processing was found to be sensitive to the talker-specific information available during learning and test. These findings demonstrate that increased sensitivity to talker-specific information affects the perception of the linguistic properties of speech in isolated words and sentences.

11.
Giroux I, Rey A. Cognitive Science, 2009, 33(2): 260-272.
Saffran, Newport, and Aslin (1996a) found that human infants are sensitive to statistical regularities corresponding to lexical units when hearing an artificial spoken language. Two sorts of segmentation strategies have been proposed to account for this early word-segmentation ability: bracketing strategies, in which infants are assumed to insert boundaries into continuous speech, and clustering strategies, in which infants are assumed to group certain speech sequences together into units (Swingley, 2005). In the present study, we test the predictions of two computational models instantiating each of these strategies (i.e., Serial Recurrent Networks: Elman, 1990; and Parser: Perruchet & Vinter, 1998) in an experiment comparing the lexical and sublexical recognition performance of adults after hearing 2 or 10 min of an artificial spoken language. The results are consistent with Parser's predictions and the clustering approach, showing that performance on words is better than performance on part-words only after 10 min. This result suggests that word segmentation abilities are not merely due to stronger associations between sublexical units but to the emergence of stronger lexical representations during the development of speech perception processes.
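The bracketing strategy mentioned above can be sketched with forward transitional probabilities over a toy syllable stream: boundaries are inserted wherever the probability of the next syllable dips. The vocabulary, word order, and threshold below are invented for illustration (Saffran-style streams use several trisyllabic words in varying order); PARSER's clustering mechanism is not shown.

```python
from collections import Counter

def transitional_probabilities(syllables):
    """Forward TP(y|x) = count(xy) / count(x) over a syllable stream."""
    pair_counts = Counter(zip(syllables, syllables[1:]))
    unigram_counts = Counter(syllables[:-1])
    return {(x, y): c / unigram_counts[x] for (x, y), c in pair_counts.items()}

def bracket(syllables, tps, threshold=0.9):
    """Bracketing strategy: insert a word boundary wherever the
    forward TP between adjacent syllables falls below threshold."""
    words, current = [], [syllables[0]]
    for x, y in zip(syllables, syllables[1:]):
        if tps[(x, y)] < threshold:
            words.append("".join(current))
            current = []
        current.append(y)
    words.append("".join(current))
    return words

# Three hypothetical trisyllabic words in varied order: within-word TPs
# are 1.0, across-word TPs are lower, so boundaries fall at word edges.
lexicon = {"tupiro": ["tu", "pi", "ro"],
           "golabu": ["go", "la", "bu"],
           "bidaku": ["bi", "da", "ku"]}
sequence = ["tupiro", "golabu", "bidaku", "golabu", "tupiro",
            "bidaku", "tupiro", "golabu", "bidaku"]
stream = [s for w in sequence for s in lexicon[w]]
print(bracket(stream, transitional_probabilities(stream)))
```

Note that this pure associative mechanism recovers the words here, which is exactly why the 2-min vs. 10-min contrast in the study is informative: if segmentation were only a matter of such sublexical statistics, word and part-word performance should not diverge the way it does.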

12.
Neuropsychology of timing and time perception
Interval timing in the range of milliseconds to minutes is affected in a variety of neurological and psychiatric populations involving disruption of the frontal cortex, hippocampus, basal ganglia, and cerebellum. Our understanding of these distortions in timing and time perception is aided by the analysis of the sources of variance attributable to clock, memory, decision, and motor-control processes. The conclusion is that the representation of time depends on the integration of multiple neural systems that can be fruitfully studied in selected patient populations.

13.
The role of rhythm in the speech intelligibility of 18 hearing-impaired children, aged 15 years with hearing losses from 40 to 108 dB, was investigated. Their perceptual judgement of visual rhythm sequences was superior to that of the hearing controls, but their scores were not correlated with their speech intelligibility.

14.
Dupoux E, de Gardelle V, Kouider S. Cognition, 2008, 109(2): 267-273.
Current theories of consciousness assume a qualitative dissociation between conscious and unconscious processing: while subliminal stimuli only elicit a transient activity, supraliminal stimuli have long-lasting influences. Nevertheless, the existence of this qualitative distinction remains controversial, as past studies confounded awareness and stimulus strength (energy, duration). Here, we used a masked speech priming method in conjunction with a submillisecond interaural delay manipulation to contrast subliminal and supraliminal processing at constant prime, mask and target strength. This delay induced a perceptual streaming effect, with the prime popping out in the supraliminal condition. By manipulating the prime-target interval (ISI), we show a qualitatively distinct profile of priming longevity as a function of prime awareness. While subliminal priming disappeared after half a second, supraliminal priming was independent of ISI. This shows that the distinction between conscious and unconscious processing depends on high-level perceptual streaming factors rather than low-level features (energy, duration).

15.
Phoneme identification with audiovisually discrepant stimuli is influenced by information in the visual signal (the McGurk effect). Additionally, lexical status affects identification of auditorily presented phonemes. The present study tested for lexical influences on the McGurk effect. Participants identified phonemes in audiovisually discrepant stimuli in which lexical status of the auditory component and of a visually influenced percept was independently varied. Visually influenced (McGurk) responses were more frequent when they formed a word and when the auditory signal was a nonword (Experiment 1). Lexical effects were larger for slow than for fast responses (Experiment 2), as with auditory speech, and were replicated with stimuli matched on physical properties (Experiment 3). These results are consistent with models in which lexical processing of speech is modality independent.

16.
Right-ear advantages of different magnitudes occur systematically in dichotic listening for different phoneme classes and for certain phonemes according to their syllabic position. Such differences cannot be accounted for in terms of a single mechanism unique to the left hemisphere. Instead, at least two mechanisms are needed. One such device appears to be involved in the auditory analysis of transitions and other aspects of the speech signal. This device appears to be engaged for speech and nonspeech sounds alike. The other mechanism, the more accustomed “speech processor”, appears to make all phonetic decisions in identifying the stimulus.

17.
The research investigates how listeners segment the acoustic speech signal into phonetic segments and explores implications that the segmentation strategy may have for their perception of the (apparently) context-sensitive allophones of a phoneme. Two manners of segmentation are contrasted. In one, listeners segment the signal into temporally discrete, context-sensitive segments. In the other, which may be consistent with the talker’s production of the segments, they partition the signal into separate, but overlapping, segments freed of their contextual influences. Two complementary predictions of the second hypothesis are tested. First, listeners will use anticipatory coarticulatory information for a segment as information for the forthcoming segment. Second, subjects will not hear anticipatory coarticulatory information as part of the phonetic segment with which it co-occurs in time. The first hypothesis is supported by findings on a choice reaction time procedure; the second is supported by findings on a 4IAX discrimination test. Implications of the findings for theories of speech production, perception, and of the relation between the two are considered.

18.
A contingent adaptation effect is reported for speech perception. Experiments were conducted to test the effects of an alternating sequence of two adapting syllables, [da] and [tʰi], on the perception of two series of synthetic speech syllables, [ba]-[pʰa] and [bi]-[pʰi]. Each of the test series consisted of 11 stimuli varying in voice onset time, a cue which distinguishes voiced from voiceless stop consonants in word-initial position. The [da]-[tʰi] adapting sequence produced opposite shifts in the loci of the phonetic boundaries for the two test series. For the [ba]-[pʰa] series, listeners made fewer identification responses to the [b] category after adaptation, while for the [bi]-[pʰi] series, listeners made more responses to the [b] category. The opposing shifts indicate that the perceptual analysis of voicing in stop consonants is carried out with respect to vowel environment.

19.
Four experiments are reported investigating previous findings that speech perception interferes with concurrent verbal memory but difficult nonverbal perceptual tasks do not, to any great degree. The forgetting produced by processing noisy speech could not be attributed to task difficulty, since equally difficult nonspeech tasks did not produce forgetting, and the extent of forgetting produced by speech could be manipulated independently of task difficulty. The forgetting could not be attributed to similarity between memory material and speech stimuli, since clear speech, analyzed in a simple and probably acoustically mediated discrimination task, produced little forgetting. The forgetting could not be attributed to a combination of similarity and difficulty since a very easy speech task involving clear speech produced as much forgetting as noisy speech tasks, as long as overt reproduction of the stimuli was required. By assuming that noisy speech and overtly reproduced speech are processed at a phonetic level but that clear, repetitive speech can be processed at a purely acoustic level, the forgetting produced by speech perception could be entirely attributed to the level at which the speech was processed. In a final experiment, results were obtained which suggest that if prior set induces processing of noisy and clear speech at comparable levels, the difference between the effects of noisy speech processing and clear speech processing on concurrent memory is completely eliminated.
