Similar Articles
20 similar articles found.
1.
In the McGurk effect, perception of audiovisually discrepant syllables can depend on auditory, visual, or a combination of audiovisual information. Under some conditions, visual information can override auditory information to the extent that identification judgments of a visually influenced syllable can be as consistent as for an analogous audiovisually compatible syllable. This might indicate that visually influenced and analogous audiovisually compatible syllables are phonetically equivalent. Experiments were designed to test this issue using a compelling visually influenced syllable in an AXB matching paradigm. Subjects were asked to match an audio syllable /va/ either to an audiovisually consistent syllable (audio /va/-video /fa/) or an audiovisually discrepant syllable (audio /ba/-video /fa/). It was hypothesized that if the two audiovisual syllables were phonetically equivalent, then subjects should choose them equally often in the matching task. Results show, however, that subjects are more likely to match the audio /va/ to the audiovisually consistent /va/, suggesting differences in phonetic convincingness. Additional experiments further suggest that this preference is not based on a phonetically extraneous dimension or on noticeable relative audiovisual discrepancies.

2.
3.
English, French, and bilingual English-French 17-month-old infants were compared for their performance on a word learning task using the Switch task. Object names presented a /b/ vs. /g/ contrast that is phonemic in both English and French, and auditory strings comprised English and French pronunciations by an adult bilingual. Infants were habituated to two novel objects labeled 'bowce' or 'gowce' and were then presented with a switch trial where a familiar word and familiar object were paired in a novel combination, and a same trial with a familiar word–object pairing. Bilingual infants looked significantly longer to switch vs. same trials, but English and French monolinguals did not, suggesting that bilingual infants can learn word–object associations when the phonetic conditions favor their input. Monolingual infants likely failed because the bilingual mode of presentation increased phonetic variability and did not match their real-world input. Experiment 2 tested this hypothesis by presenting monolingual infants with nonce word tokens restricted to native language pronunciations. Monolinguals succeeded in this case. Experiment 3 revealed that the presence of unfamiliar pronunciations in Experiment 2, rather than a reduction in overall phonetic variability, was the key factor in success, as French infants failed when tested with English pronunciations of the nonce words. Thus phonetic variability impacts how infants perform in the switch task in ways that contribute to differences in monolingual and bilingual performance. Moreover, both monolinguals and bilinguals are developing adaptive speech processing skills that are specific to the language(s) they are learning.

4.
Auditory phoneme categories are less well-defined in developmental dyslexic readers than in fluent readers. Here, we examined whether poor recalibration of phonetic boundaries might be associated with this deficit. Twenty-two adult dyslexic readers were compared with 22 fluent readers on a phoneme identification task and a task that measured phonetic recalibration by lipread speech (Bertelson, Vroomen, & De Gelder, 2003). In line with previous reports, we found that dyslexics were less categorical in the labeling of the speech sounds. The size of their phonetic recalibration effect, though, was comparable to that of normal readers. This result indicates that phonetic recalibration is unaffected in dyslexic readers, and that it is unlikely to lie at the foundation of their auditory phoneme categorization impairments. For normal readers, however, it appeared that a well-calibrated system is related to auditory precision, as the steepness of the auditory identification curve positively correlated with recalibration.

5.
Subcategorical phonetic mismatches and lexical access.
The place of phonetic analysis in the perception of words is unclear. While some theories assume fully specified phonemic strings as input, other theories assume that little analysis occurs. An earlier experiment by Streeter and Nigro (1979) produced evidence, based on auditorily presented words with misleading acoustic cues, that lexical decisions were based on mostly unanalyzed patterns, since word judgments were delayed by misleading information whereas nonword judgments were not. The present studies expand that work to a different set of cues, and to cases in which the overriding cue came first. An additional task, auditory naming, was used to examine the effects when the decision stage is less demanding. For the lexical decision task, misleading information slowed the responses, for both words and nonwords. In the auditory naming task, only the slower responses were affected. These results suggest that phonetic conflicts are resolved prior to lexical access.

6.
The place of phonetic analysis in the perception of words is unclear. While some theories assume fully specified phonemic strings as input, other theories assume that little analysis occurs. An earlier experiment by Streeter and Nigro (1979) produced evidence, based on auditorily presented words with misleading acoustic cues, that lexical decisions were based on mostly unanalyzed patterns, since word judgments were delayed by misleading information whereas nonword judgments were not. The present studies expand that work to a different set of cues, and to cases in which the overriding cue came first. An additional task, auditory naming, was used to examine the effects when the decision stage is less demanding. For the lexical decision task, misleading information slowed the responses, for both words and nonwords. In the auditory naming task, only the slower responses were affected. These results suggest that phonetic conflicts are resolved prior to lexical access.

7.
Recent experiments using a variety of techniques have suggested that speech perception involves separate auditory and phonetic levels of processing. Two models of auditory and phonetic processing appear to be consistent with existing data: (a) a strict serial model in which auditory information would be processed at one level, followed by the processing of phonetic information at a subsequent level; and (b) a parallel model in which auditory and phonetic processing could proceed simultaneously. The present experiment attempted to distinguish empirically between these two models. Subjects identified either an auditory dimension (fundamental frequency) or a phonetic dimension (place of articulation of the consonant) of synthetic consonant-vowel syllables. When the two dimensions varied in a completely correlated manner, reaction times were significantly shorter than when either dimension varied alone. This “redundancy gain” could not be attributed to speed-accuracy trade-offs, selective serial processing, or differential transfer between conditions. These results allow rejection of a completely serial model, suggesting instead that at least some portion of auditory and phonetic processing can occur in parallel.

8.
In the McGurk effect, perception of audiovisually discrepant syllables can depend on auditory, visual, or a combination of audiovisual information. Under some conditions, visual information can override auditory information to the extent that identification judgments of a visually influenced syllable can be as consistent as for an analogous audiovisually compatible syllable. This might indicate that visually influenced and analogous audiovisually compatible syllables are phonetically equivalent. Experiments were designed to test this issue using a compelling visually influenced syllable in an AXB matching paradigm. Subjects were asked to match an audio syllable /va/ either to an audiovisually consistent syllable (audio /va/-video /fa/) or an audiovisually discrepant syllable (audio /ba/-video /fa/). It was hypothesized that if the two audiovisual syllables were phonetically equivalent, then subjects should choose them equally often in the matching task. Results show, however, that subjects are more likely to match the audio /va/ to the audiovisually consistent /va/, suggesting differences in phonetic convincingness. Additional experiments further suggest that this preference is not based on a phonetically extraneous dimension or on noticeable relative audiovisual discrepancies.

9.
Phonetic categorization in auditory word perception
To investigate the interaction in speech perception of auditory information and lexical knowledge (in particular, knowledge of which phonetic sequences are words), acoustic continua varying in voice onset time were constructed so that for each acoustic continuum, one of the two possible phonetic categorizations made a word and the other did not. For example, one continuum ranged between the word dash and the nonword tash; another used the nonword dask and the word task. In two experiments, subjects showed a significant lexical effect--that is, a tendency to make phonetic categorizations that make words. This lexical effect was greater at the phoneme boundary (where auditory information is ambiguous) than at the ends of the continua. Hence the lexical effect must arise at a stage of processing sensitive to both lexical knowledge and auditory information.

10.
One of the central themes in the study of language acquisition is the gap between the linguistic knowledge that learners demonstrate, and the apparent inadequacy of linguistic input to support induction of this knowledge. One of the first linguistic abilities in the course of development to exemplify this problem is in speech perception: specifically, learning the sound system of one’s native language. Native-language sound systems are defined by meaningful contrasts among words in a language, yet infants learn these sound patterns before any significant numbers of words are acquired. Previous approaches to this learning problem have suggested that infants can learn phonetic categories from statistical analysis of auditory input, without regard to word referents. Experimental evidence presented here suggests instead that young infants can use visual cues present in word-labeling situations to categorize phonetic information. In Experiment 1, 9-month-old English-learning infants failed to discriminate two non-native phonetic categories, establishing baseline performance in a perceptual discrimination task. In Experiment 2, these infants succeeded at discrimination after watching contrasting visual cues (i.e., videos of two novel objects) paired consistently with the two non-native phonetic categories. In Experiment 3, these infants failed at discrimination after watching the same visual cues, but paired inconsistently with the two phonetic categories. At an age before which memory of word labels is demonstrated in the laboratory, 9-month-old infants use contrastive pairings between objects and sounds to influence their phonetic sensitivity. Phonetic learning may have a more functional basis than previous statistical learning mechanisms assume: infants may use cross-modal associations inherent in social contexts to learn native-language phonetic categories.

11.
Infants and adults are well able to match auditory and visual speech, but the cues on which they rely (viz. temporal, phonetic and energetic correspondence in the auditory and visual speech streams) may differ. Here we assessed the relative contribution of the different cues using sine-wave speech (SWS). Adults (N = 52) and infants (N = 34, ages ranging between 5 and 15 months) matched 2 trisyllabic speech sounds (‘kalisu’ and ‘mufapi’), either natural or SWS, with visual speech information. On each trial, adults saw two articulating faces and matched a sound to one of these, while infants were presented the same stimuli in a preferential looking paradigm. Adults’ performance was almost flawless with natural speech, but was significantly less accurate with SWS. In contrast, infants matched the sound to the articulating face equally well for natural speech and SWS. These results suggest that infants rely to a lesser extent on phonetic cues than adults do to match audio to visual speech. This is in line with the notion that the ability to extract phonetic information from the visual signal increases during development, and suggests that phonetic knowledge might not be the basis for early audiovisual correspondence detection in speech.

12.
Research on bilingualism has shown that bilinguals enjoy a marked cognitive advantage over monolinguals in executive control functions such as inhibition, switching, and sustained attention. Whether this advantage arises from bilinguals' coordinated control over many linguistic subcomponents or from their control over two different phonological systems remains a matter of debate. The present study addressed this question by testing monolingual monodialectal children (who speak only Mandarin Chinese) and monolingual bidialectal children (who speak both Mandarin Chinese and the Taizhou dialect) on a cue-switching task and a phonological Stroop task. The results showed that (1) the two groups did not differ significantly on the cue-switching task, and (2) the monolingual bidialectal children showed a significant cognitive advantage on the phonological Stroop task. Accordingly, the study concludes that the bilingual cognitive advantage results from coordinated control over multiple linguistic subcomponents, such as syntax, semantics, orthography, phonology, and morphology.

13.
The development of phonetic codes in memory of 141 pairs of normal and disabled readers from 7.8 to 16.8 years of age was tested with a task adapted from L. S. Mark, D. Shankweiler, I. Y. Liberman, and C. A. Fowler (Memory & Cognition, 1977, 5, 623–629) that measured false-positive errors in recognition memory for foil words which rhymed with words in the memory list versus foil words that did not rhyme. Our younger subjects replicated Mark et al., showing a larger difference between rhyming and nonrhyming false-positive errors for the normal readers. The older disabled readers' phonetic effect was comparable to that of the younger normal readers, suggesting a developmental lag in their use of phonetic coding in memory. Surprisingly, the normal readers' phonetic effect declined with age in the recognition task, but they maintained a significant advantage across age in the auditory WISC-R digit span recall test, and a test of phonological nonword decoding. The normals' decline with age in rhyming confusion may be due to an increase in the precision of their phonetic codes.

14.
Recognition memory for consonants and vowels selected from within and between phonetic categories was examined in a delayed comparison discrimination task. Accuracy of discrimination for synthetic vowels selected from both within and between categories was inversely related to the magnitude of the comparison interval. In contrast, discrimination of synthetic stop consonants remained relatively stable both within and between categories. The results indicate that differences in discrimination between consonants and vowels are primarily due to the differential availability of auditory short-term memory for the acoustic cues distinguishing these two classes of speech sounds. The findings provide evidence for distinct auditory and phonetic memory codes in speech perception.

15.
Pomplun M, Reingold EM, Shen J. Cognition, 2001, 81(2): B57-B67
In three experiments, participants' visual span was measured in a comparative visual search task in which they had to detect a local match or mismatch between two displays presented side by side. Experiment 1 manipulated the difficulty of the comparative visual search task by contrasting a mismatch detection task with a substantially more difficult match detection task. In Experiment 2, participants were tested in a single-task condition involving only the visual task and a dual-task condition in which they concurrently performed an auditory task. Finally, in Experiment 3, participants performed two dual-task conditions, which differed in the difficulty of the concurrent auditory task. Both the comparative search task difficulty (Experiment 1) and the divided attention manipulation (Experiments 2 and 3) produced strong effects on visual span size.

16.
Dichotic CV syllables (identical and nonidentical pairs) were presented at nine temporal offsets between 0 and 500 msec. One task consisted in judging quickly whether the syllables in a pair were phonetically the same or different; the other task was to identify both syllables. The fundamental frequency (pitch) of the synthetic stimuli was either the same or different, and either predictable or unpredictable. The pitch variable had surprisingly little effect on the latencies of "same"-"different" judgments, and the expected "preparation" effect of pitch predictability was barely present. Instead, there were strong effects on the frequencies of errors at short temporal delays, which suggests shifts or biases in the phonetic "same"-"different" criterion with context. A comparison with analogous errors in the identification task revealed identical patterns. Further analysis of identification errors showed no overall "feature sharing advantage": The direction of this effect depends on the kind of error committed. Also, a lag effect was found only in nonidentical pairs that received two identical responses. The results are discussed in the framework of a two-stage information-processing model. Effects of pitch are tentatively explained as biases from implicit (pitch) decisions at the auditory level on phonetic decisions in the presence of uncertainty. Four sources of errors are identified: fusion at the auditory level; "integration," confusions, and transpositions at the phonetic level.

17.
Although infants have the ability to discriminate a variety of speech contrasts, young children cannot always use this ability in the service of spoken-word recognition. The research reported here asked whether the reason young children sometimes fail to discriminate minimal word pairs is that they are less efficient at word recognition than adults, or whether it is that they employ different lexical representations. In particular, the research evaluated the proposal that young children’s lexical representations are more “holistic” than those of adults, and are based on overall acoustic-phonetic properties, as opposed to phonetic segments. Three- and four-year-olds were exposed initially to an invariant target word and were subsequently asked to determine whether a series of auditory stimuli matched or did not match the target. The critical test stimuli were nonwords that varied in their degree of phonetic featural overlap with the target, as well as in terms of the position(s) within the stimuli at which they differed from the target, and whether they differed from the target on one or two segments. Data from four experiments demonstrated that the frequency with which children mistook a nonword stimulus for the target was influenced by extent of featural overlap, but not by word position. The data also showed that, contrary to the predictions of the holistic hypothesis, stimuli differing from the target by two features on a single segment were confused with the target more often than were stimuli differing by a single feature on each of two segments. This finding suggests that children use both phonetic features and segments in accessing their mental lexicons, and that they are therefore much more similar to adults than is suggested by the holistic hypothesis.

18.
Categories and context in the perception of isolated steady-state vowels
The noncategorical perception of isolated vowels has been attributed to the availability of auditory memory in discrimination. In our first experiment, using vowels from an /i/-/I/-/ε/ continuum in a same-different (AX) task and comparing the results with predictions derived from a separate identification test, we demonstrated that vowels are perceived more nearly categorically if auditory memory is degraded by extending the interstimulus interval and/or filling it with irrelevant vowel sounds. In a second experiment, we used a similar paradigm, but in addition to presenting a separate identification test, we elicited labeling responses to the AX pairs used in the discrimination task. We found that AX labeling responses predicted discrimination performance quite well, regardless of whether auditory memory was available, whereas the predictions from the separate identification test were more poorly matched by the obtained data. The AX labeling responses showed large contrast effects (both proactive and retroactive) that were greatly reduced when auditory memory was interfered with. We conclude from the presence of these contrast effects that vowels are not perceived categorically (that is, absolutely). However, it seems that by taking the effects of context into account properly, discrimination performance can be quite accurately predicted from labeling data, suggesting that vowel discrimination, like consonant discrimination, may be mediated by phonetic labels.
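The "predictions derived from a separate identification test" mentioned above are conventionally computed with a Haskins-style labeling model, which assumes that the listener covertly labels each member of an AX pair independently and responds "different" only when the two labels differ, guessing otherwise. The equations below give that standard textbook prediction; they are offered only as a sketch of the general approach and are not necessarily the exact computation used in this study.

\[
P_{\text{diff}} = p_A\,(1 - p_X) + p_X\,(1 - p_A),
\qquad
P(\text{correct}) = P_{\text{diff}} + \tfrac{1}{2}\bigl(1 - P_{\text{diff}}\bigr),
\]

where \(p_A\) and \(p_X\) are the probabilities, estimated from the identification test, of assigning stimuli A and X to a given phonetic category. Under such a model, stimuli that receive the same label can be discriminated only at chance, which is why discrimination that exceeds this prediction (as when auditory memory is intact) is taken as evidence of noncategorical perception.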

19.
Infants aged 4.5 months are able to match phonetic information in the face and voice (Kuhl & Meltzoff, 1982; Patterson & Werker, 1999); however, the ontogeny of this remarkable ability is not understood. In the present study, we address this question by testing substantially younger infants at 2 months of age. Like the 4.5-month-olds in past studies, the 2-month-old infants tested in the current study showed evidence of matching vowel information in face and voice. The effect was observed in overall looking time, number of infants who looked longer at the match, and longest look to the match versus mismatch. Furthermore, there were no differences based on male or female stimuli and no preferences for the match when it was on the right or left side. These results show that there is robust evidence for phonetic matching at a much younger age than previously known and support arguments for either some kind of privileged processing or particularly rapid learning of phonetic information.

20.
The role of partial phonetic-radical information in children's learning and memory of Chinese characters
Three experiments used a classroom-style study-test task to examine the role of the partial pronunciation information provided by phonetic radicals in children's learning and memory of Chinese characters. The participants were 260 fourth-grade students from two primary schools in Beijing. Students were asked to learn and remember three kinds of novel characters: (1) characters whose phonetic radical provides complete information about the character's pronunciation, such as regular consistent characters; (2) characters whose phonetic radical provides partial pronunciation information, such as characters differing from the radical in tone or in initial consonant; and (3) characters whose phonetic radical provides no pronunciation information, such as characters with an unfamiliar phonetic radical. Students studied all of the characters and then recalled their pronunciations, for a total of three study-test cycles. The experiments found that children's accuracy in learning and remembering the characters varied with how much whole-character pronunciation information the phonetic radical provided: accuracy was highest when the radical provided complete information, intermediate when it provided partial information, and lowest when it provided none; moreover, the more partial information the radical provided, the higher the accuracy. The results indicate that children are sensitive to the partial pronunciation information provided by phonetic radicals, and that developing phonetic-radical awareness plays a positive role in learning and remembering Chinese characters.
