Similar Articles
20 similar articles found.
1.
Nonstrategic subjective threshold effects in phonemic masking
Three backward-masking experiments demonstrated that the magnitude of the phonemic mask reduction effect (MRE) is a function of subjective threshold and that the magnitude is also independent of stimulus-based response strategies. In all three experiments, a target word (e.g., bake) was backward masked by a graphemically similar nonword (e.g., BAWK), a phonemically similar nonword (e.g., BAIK), or an unrelated control (e.g., CRUG). Experiments 1 and 2 had a low percentage (9%) of trials with phonemic masks and differed only in baseline identification rate. Experiment 3 controlled baseline identification rate at below and above subjective threshold levels, with 9% phonemic trials. The results were that identification rates were higher with phonemic masks than with graphemic masks, irrespective of the low percentage of phonemic trials. However, the magnitude of the phonemic MRE became large only when the baseline identification rate was below subjective threshold. The pattern of the phonemic MRE was interpreted as a result of rapid automatic phonological activation, independent of stimulus-based processing strategies.
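As a rough illustration of the key dependent measure, the sketch below computes the phonemic MRE as the difference in identification rates between phonemic-mask and graphemic-mask trials, once below and once above subjective threshold. The identification rates are fabricated placeholders, not the paper's data.

```python
# Minimal sketch of the phonemic mask reduction effect (MRE) computation.
# All rates below are made-up illustrations, not values from the study.

def mask_reduction_effect(id_rate_phonemic: float, id_rate_graphemic: float) -> float:
    """MRE: how much a phonemically similar mask (e.g., BAIK) raises target
    identification relative to a graphemically similar mask (e.g., BAWK)."""
    return id_rate_phonemic - id_rate_graphemic

# Hypothetical identification rates for a target like "bake"
below_threshold = mask_reduction_effect(id_rate_phonemic=0.42, id_rate_graphemic=0.25)
above_threshold = mask_reduction_effect(id_rate_phonemic=0.78, id_rate_graphemic=0.71)

# Per the abstract, the effect should be large only below subjective threshold.
print(f"MRE below threshold: {below_threshold:.2f}")  # larger
print(f"MRE above threshold: {above_threshold:.2f}")  # smaller
```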

2.
Three experiments investigated the nature of the information required for the lexical access of visual words. A four-field masking procedure was used, in which the presentation of consecutive prime and target letter strings was preceded and followed by presentations of a pattern mask. This procedure prevented subjects from identifying, and thus intentionally using, prime information. Experiment I established the existence of a semantic priming effect on target identification, demonstrating the lexical access of primes under these conditions. It also showed a word repetition effect independent of letter case. Experiment II tested whether this repetition effect was due to the activation of graphemic or phonemic information. The graphemic and phonemic similarity of primes and targets was varied. No evidence for phonemic priming was found, although a graphemic priming effect, independent of the physical similarity of the stimuli, was obtained. Finally, Experiment III demonstrated that, irrespective of whether the prime was a word or a nonword, graphemic priming was equally effective. In both Experiments II and III, however, the word repetition effect was stronger than the graphemic priming effect. It is argued that facilitation from graphemic priming was due to the prime activating a target representation coded for abstract (non-visual) graphemic features, such as letter identities. The extra facilitation from same-identity priming was attributed to semantic as well as graphemic activation. The implications of these results for models of word recognition are discussed.

3.
We investigated the effects of visual speech information (articulatory gestures) on the perception of second language (L2) sounds. Previous studies have demonstrated that listeners often fail to hear the difference between certain non-native phonemic contrasts, as in the case of Spanish native speakers and the Catalan sounds /ɛ/ and /e/. Here, we tested whether adding visual information about the articulatory gestures (i.e., lip movements) could enhance this perceptual ability. We found that, for auditory-only presentations, Spanish-dominant bilinguals failed to show sensitivity to the /ɛ/–/e/ contrast, whereas Catalan-dominant bilinguals did. Yet, when the same speech events were presented audiovisually, Spanish-dominant bilinguals (as well as Catalan-dominant bilinguals) were sensitive to the phonemic contrast. Finally, when the stimuli were presented only visually (in the absence of sound), neither group showed clear signs of discrimination. Our results suggest that visual speech gestures enhance second language perception at the level of phonological processing, particularly by way of multisensory integration.

4.
Perceptual discrimination between speech sounds belonging to different phoneme categories is better than that between sounds falling within the same category. This property, known as "categorical perception," is weaker in children affected by dyslexia. Categorical perception develops from the predispositions of newborns for discriminating all potential phoneme categories in the world's languages. Predispositions that are not relevant for phoneme perception in the ambient language are usually deactivated during early childhood. However, the current study shows that dyslexic children maintain a higher sensitivity to phonemic distinctions irrelevant in their linguistic environment. This suggests that dyslexic children use an allophonic mode of speech perception that, although without straightforward consequences for oral communication, has obvious implications for the acquisition of alphabetic writing. Allophonic perception specifically affects the mapping between graphemes and phonemes, contrary to other manifestations of dyslexia, and may be a core deficit.

5.
Event-related potentials (ERPs) were recorded from Spanish-English bilinguals (N = 10) to test pre-attentive speech discrimination in two language contexts. ERPs were recorded while participants silently read magazines in English or Spanish. Two speech contrast conditions were recorded in each language context. In the phonemic-in-English condition, the speech sounds represented two different phonemic categories in English but the same phonemic category in Spanish. In the phonemic-in-Spanish condition, the speech sounds represented two different phonemic categories in Spanish but the same phonemic category in English. Results showed pre-attentive discrimination when the acoustics/phonetics of the speech sounds matched the language context (e.g., the phonemic-in-English condition during the English language context). The results suggest that language contexts can affect pre-attentive auditory change detection. Specifically, bilinguals' mental processing of stop consonants relies on contextual linguistic information.

6.
In this article we evaluate current models of language processing by testing speeded classification of stimuli comprising one linguistic and one nonlinguistic dimension. Garner interference obtains if subjects are slower to classify attributes on one dimension when an irrelevant dimension is varied orthogonally than when the irrelevant dimension is held constant. With certain linguistic-nonlinguistic pairings (e.g., Experiment 1: the words high and low spoken either loudly or softly), significant Garner interference obtained when either dimension was classified; this indicated two-directional crosstalk. With other pairings (e.g., Experiment 3: spoken vowels and loudness), only the nonlinguistic dimension (e.g., loudness) displayed interference, suggesting unidirectional crosstalk downstream from a phonemic/graphemic level of analysis. Collectively, these results indicate the interaction can occur either within or across levels of information processing, being directed toward either more advanced or more primitive processes. Although poorly explained by all current models of language processing, our results are strikingly inconsistent with models that posit autonomy among levels of processing.
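For readers unfamiliar with the measure, the following sketch shows how Garner interference is conventionally quantified: mean classification RT when the irrelevant dimension varies orthogonally minus mean RT when it is held constant. The RTs are fabricated placeholders, not the authors' data.

```python
# Minimal sketch of the Garner interference score. Positive values mean the
# irrelevant dimension slowed classification. RTs below are hypothetical.

from statistics import mean

def garner_interference(rt_orthogonal: list[float], rt_constant: list[float]) -> float:
    """Interference = mean RT (irrelevant dimension varies) - mean RT (held constant)."""
    return mean(rt_orthogonal) - mean(rt_constant)

# Hypothetical RTs (ms) for classifying the word HIGH/LOW while loudness
# either varies unpredictably (orthogonal block) or stays fixed (constant block).
word_classification = garner_interference(
    rt_orthogonal=[612, 598, 640, 625],
    rt_constant=[570, 585, 561, 590],
)
print(f"Interference on the linguistic dimension: {word_classification:.0f} ms")
```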

7.
For both adults and children, acoustic context plays an important role in speech perception. For adults, both speech and nonspeech acoustic contexts influence perception of subsequent speech items, consistent with the argument that effects of context are due to domain-general auditory processes. However, prior research examining the effects of context on children's speech perception has focused on speech contexts; nonspeech contexts have not been explored previously. To better understand the developmental progression of children's use of contexts in speech perception and the mechanisms underlying that development, we created a novel experimental paradigm testing 5-year-old children's speech perception in several acoustic contexts. The results demonstrated that nonspeech context influences children's speech perception, consistent with claims that context effects arise from general auditory system properties rather than speech-specific mechanisms. This supports theoretical accounts of language development suggesting that domain-general processes play a role across the lifespan.

8.
Previous cross-language research has indicated that some speech contrasts present greater perceptual difficulty for adult non-native listeners than others do. It has been hypothesized that phonemic, phonetic, and acoustic factors contribute to this variability. Two experiments were conducted to evaluate systematically the role of phonemic status and phonetic familiarity in the perception of non-native speech contrasts and to test predictions derived from a model proposed by Best, McRoberts, and Sithole (1988). Experiment 1 showed that perception of an unfamiliar phonetic contrast was not less difficult for subjects who had experience with an analogous phonemic distinction in their native language than for subjects without such analogous experience. These results suggest that substantive phonetic experience influences the perception of non-native contrasts, and thus should contribute to a conceptualization of native language-processing skills. In Experiment 2, English listeners' perception of two related nonphonemic place contrasts was not consistently different as had been expected on the basis of phonetic familiarity. A clear order effect in the perceptual data suggests that interactions between different perceptual assimilation patterns or acoustic properties of the two contrasts, or interactions involving both of these factors, underlie the perception of the two contrasts in this experiment. It was concluded that both phonetic familiarity and acoustic factors are potentially important to the explanation of variability in perception of nonphonemic contrasts. The explanation of how linguistic experience shapes speech perception will require characterizing the relative contribution of these factors, as well as other factors, including individual differences and variables that influence a listener's orientation to speech stimuli.

10.
As a parallel to the dual decoding concept for the processing of written language, we proposed that phonological encoding does not necessarily occur in writing and that the phonemic and graphemic subsystems can be independent at the one-word level. This hypothesis was tested by comparing oral and written performance in a picture-naming task in Broca's and Wernicke's aphasics. In addition, the residual tacit knowledge of the orthographic properties of the names of the pictures was examined with a multiple-choice recognition task. The principal finding is that Broca's aphasics who were better in written than in oral naming showed more graphemically and semantically motivated errors than aphasics who were better in oral than in written naming, the latter producing more phonemically motivated errors. This result supports the dual encoding concept for writing at the single-word level, implying a direct route from the mental lexicon to the graphemic system in parallel with a route mediated by the phonemic system. Multiple-choice recognition was found to be superior to both oral and written performance in both Broca's and Wernicke's aphasics.

11.
Inner speech is typically characterized as either the activation of abstract linguistic representations or a detailed articulatory simulation that lacks only the production of sound. We present a study of the speech errors that occur during the inner recitation of tongue-twister-like phrases. Two forms of inner speech were tested: inner speech without articulatory movements and articulated (mouthed) inner speech. Although mouthing one's inner speech could reasonably be assumed to require more articulatory planning, prominent theories assume that such planning should not affect the experience of inner speech and, consequently, the errors that are "heard" during its production. The errors occurring in articulated inner speech exhibited the phonemic similarity effect and the lexical bias effect—two speech-error phenomena that, in overt speech, have been localized to an articulatory-feature-processing level and a lexical-phonological level, respectively. In contrast, errors in unarticulated inner speech did not exhibit the phonemic similarity effect—just the lexical bias effect. The results are interpreted as support for a flexible abstraction account of inner speech. This conclusion has ramifications for the embodiment of language and speech and for theories of speech production.
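The two diagnostic error patterns can be made concrete with a toy coding sketch. The mini-lexicon, the l/r similarity proxy, and the example slip below are my own illustrations, not the study's materials or coding scheme.

```python
# Toy tally of the two error diagnostics the abstract names: lexical bias
# (do slips form real words?) and phonemic similarity (do similar phonemes
# swap?). Everything here is an invented illustration.

LEXICON = {"lean", "reef", "leaf", "rain"}  # hypothetical mini-lexicon

def is_lexical(error: str) -> bool:
    """Lexical bias diagnostic: did the slip produce a real word?"""
    return error in LEXICON

def shares_manner(p1: str, p2: str, liquids=("l", "r")) -> bool:
    """Crude similarity proxy: /l/ and /r/ share manner of articulation."""
    return p1 in liquids and p2 in liquids

# e.g., intended "reef lean" slipping to "leaf rean": the l/r exchange
# involves similar phonemes, and "leaf" (but not "rean") is lexical.
print(is_lexical("leaf"), is_lexical("rean"))  # True False
print(shares_manner("l", "r"))                 # True
```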

12.
This paper presents evidence for a new model of the functional anatomy of speech/language (Hickok & Poeppel, 2000) which has, at its core, three central claims: (1) neural systems supporting the perception of sublexical aspects of speech are essentially bilaterally organized in posterior superior temporal lobe regions; (2) neural systems supporting the production of phonemic aspects of speech comprise a network of predominantly left-hemisphere systems which includes not only frontal regions but also superior temporal lobe regions; and (3) the neural systems supporting speech perception and production partially overlap in the left superior temporal lobe. This model, which postulates nonidentical but partially overlapping systems involved in the perception and production of speech, explains why psycho- and neurolinguistic evidence is mixed regarding the question of whether input and output phonological systems involve a common network or distinct networks.

13.

The nondeterministic relationship between speech acoustics and abstract phonemic representations imposes a challenge for listeners to maintain perceptual constancy despite the highly variable acoustic realization of speech. Talker normalization facilitates speech processing by reducing the degrees of freedom for mapping between encountered speech and phonemic representations. While this process has been proposed to facilitate the perception of ambiguous speech sounds, it is currently unknown whether talker normalization is affected by the degree of potential ambiguity in acoustic-phonemic mapping. We explored the effects of talker normalization on speech processing in a series of speeded classification paradigms, parametrically manipulating the potential for inconsistent acoustic-phonemic relationships across talkers for both consonants and vowels. Listeners identified words with varying potential acoustic-phonemic ambiguity across talkers (e.g., beet/boat vs. boot/boat) spoken by single or mixed talkers. Auditory categorization of words was always slower when listening to mixed talkers compared to a single talker, even when there was no potential acoustic ambiguity between target sounds. Moreover, the processing cost imposed by mixed talkers was greatest when words had the most potential acoustic-phonemic overlap across talkers. Models of acoustic dissimilarity between target speech sounds did not account for the pattern of results. These results suggest (a) that talker normalization incurs the greatest processing cost when disambiguating highly confusable sounds and (b) that talker normalization appears to be an obligatory component of speech perception, taking place even when the acoustic-phonemic relationships across sounds are unambiguous.
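As a sketch of the two quantities this abstract reports: the mixed-talker cost is the mean RT difference between mixed- and single-talker blocks, and that cost should be largest for word pairs with high acoustic-phonemic overlap. The RTs below are made-up placeholders, not the study's data.

```python
# Minimal sketch of the mixed-talker processing cost and its dependence on
# potential acoustic-phonemic overlap. All RTs are hypothetical.

from statistics import mean

def mixed_talker_cost(rt_mixed: list[float], rt_single: list[float]) -> float:
    """Cost (ms) of hearing mixed talkers relative to a single talker."""
    return mean(rt_mixed) - mean(rt_single)

# Hypothetical mean RTs (ms): a cost appears even for non-overlapping word
# pairs (beet/boat), but is largest for overlapping pairs (boot/boat).
low_overlap = mixed_talker_cost(rt_mixed=[655, 670, 661], rt_single=[630, 641, 635])
high_overlap = mixed_talker_cost(rt_mixed=[705, 722, 716], rt_single=[640, 652, 647])
print(f"cost, low overlap: {low_overlap:.0f} ms; high overlap: {high_overlap:.0f} ms")
```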


14.
Previous research in speech perception has yielded two sets of findings which are brought together in the present study. First, it has been shown that normal-hearing listeners use visible as well as acoustical information when processing speech. Second, it has been shown that there is an effect of specific language experience on speech perception, such that adults often have difficulty identifying and discriminating non-native phones. The present investigation was designed to extend and combine these two sets of findings. Two studies were conducted using six consonant-vowel syllables (/ba/, /va/, /ða/, /da/, /ʒa/, and /ga/), five of which occur in French and English, and one of which (the interdental fricative /ða/) occurs only in English. In Experiment 1, an effect of specific linguistic experience was evident in the auditory identification of the non-native interdental stimulus by French speakers. In Experiment 2, it was shown that the effect of specific language experience extends to the perception of the visible information in speech. These findings are discussed in terms of their implications for our understanding of cross-language processes in speech perception and for our understanding of the development of bimodal speech perception.

15.
The functional units of phonological processing in speech production are language-specific. In Indo-European languages, the phoneme is an important functional unit of phonological processing. A phoneme is the smallest sound unit in a given language that distinguishes meaning; for example, "big" contains three phonemes: /b/, /i/, and /g/. To date, phonemes have received little attention in research on Chinese speech production. This project will use event-related potential (ERP) techniques to investigate phonemic processing in Chinese speech production, addressing two questions: (1) In Chinese speech production, what is the psychological reality of phonemic processing, and are phonemic representations influenced by a second language, by the acquisition of Hanyu Pinyin, and by experience using Pinyin? (2) What are the mechanisms of phonemic processing; specifically, what are its specificity, position encoding, combinatorial structure, and time course? Answers to these questions will deepen our understanding of Chinese speech production, provide a foundation for building computational models of Chinese speech production and for comparing the mechanisms of Indo-European languages and Chinese, and offer a psychological basis for methods of teaching Chinese phonology.

16.
Corley, Brocklehurst, and Moat (2011) recently demonstrated a phonemic similarity effect for phonological errors in inner speech, claiming that it contradicted Oppenheim and Dell's (2008) characterization of inner speech as lacking subphonemic detail (e.g., features). However, finding an effect in both inner and overt speech is not the same as finding equal effects in inner and overt speech. In this response, I demonstrate that Corley et al.'s data are entirely consistent with the notion that inner speech lacks subphonemic detail and that each of their experiments exhibits a Similarity × Articulation interaction of about the same size as the one that Oppenheim and Dell (2008, 2010) reported in their work. I further show that the major discrepancy between the labs' data lies primarily in the magnitude of the main effect of phonemic similarity and in the overall efficiency of error elicitation, and demonstrate that greater similarity effects are associated with lower error rates. This leads to the conclusion that successful speech error research requires finding a sweet spot between too much randomness and not enough data.

17.
Norris, D., McQueen, J. M., & Cutler, A. (2000). Behavioral and Brain Sciences, 23(3), 299-325; discussion 325-370.
Top-down feedback does not benefit speech recognition; on the contrary, it can hinder it. No experimental data imply that feedback loops are required for speech recognition. Feedback is accordingly unnecessary and spoken word recognition is modular. To defend this thesis, we analyse lexical involvement in phonemic decision making. TRACE (McClelland & Elman 1986), a model with feedback from the lexicon to prelexical processes, is unable to account for all the available data on phonemic decision making. The modular Race model (Cutler & Norris 1979) is likewise challenged by some recent results, however. We therefore present a new modular model of phonemic decision making, the Merge model. In Merge, information flows from prelexical processes to the lexicon without feedback. Because phonemic decisions are based on the merging of prelexical and lexical information, Merge correctly predicts lexical involvement in phonemic decisions in both words and nonwords. Computer simulations show how Merge is able to account for the data through a process of competition between lexical hypotheses. We discuss the issue of feedback in other areas of language processing and conclude that modular models are particularly well suited to the problems and constraints of speech recognition.
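Merge's core architectural claim lends itself to a toy reduction: phonemic decision nodes merge bottom-up prelexical evidence with lexical evidence, while nothing flows back to the prelexical level. The sketch below is my simplification, not the published Merge implementation; the weights and activation values are arbitrary.

```python
# Toy reduction of Merge's decision stage: combine prelexical and lexical
# evidence at decision nodes, with no feedback to prelexical processing.
# Weights and activations are invented for illustration.

def merge_decision(prelexical: dict[str, float],
                   lexical_support: dict[str, float],
                   w_pre: float = 0.7, w_lex: float = 0.3) -> str:
    """Pick the phoneme whose merged (weighted) evidence is highest."""
    scores = {ph: w_pre * prelexical.get(ph, 0.0) + w_lex * lexical_support.get(ph, 0.0)
              for ph in prelexical}
    return max(scores, key=scores.get)

# Hypothetical final-phoneme decision for an ambiguous token between "kiss"
# and "kish": prelexical evidence is ambiguous, but the lexicon supports /s/
# ("kiss" is a word, "kish" is not), so the merged decision favors /s/:
# lexical involvement in a phonemic decision without feedback.
print(merge_decision(prelexical={"s": 0.5, "sh": 0.5},
                     lexical_support={"s": 1.0, "sh": 0.0}))  # -> "s"
```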

18.
Recent evidence shows that listeners use abstract prelexical units in speech perception. Using the phenomenon of lexical retuning in speech processing, we ask whether those units are necessarily phonemic. Dutch listeners were exposed to a Dutch speaker producing ambiguous phones between the Dutch syllable-final allophones approximant [r] and dark [l]. These ambiguous phones replaced either final /r/ or final /l/ in words in a lexical-decision task. This differential exposure affected perception of ambiguous stimuli on the same allophone continuum in a subsequent phonetic-categorization test: Listeners exposed to ambiguous phones in /r/-final words were more likely to perceive test stimuli as /r/ than listeners with exposure in /l/-final words. This effect was not found for test stimuli on continua using other allophones of /r/ and /l/. These results confirm that listeners use phonological abstraction in speech perception. They also show that context-sensitive allophones can play a role in this process, and hence that context-insensitive phonemes are not necessary. We suggest there may be no one unit of perception.

19.
Two talkers' productions of the same phoneme may be quite different acoustically, whereas their productions of different speech sounds may be virtually identical. Despite this lack of invariance in the relationship between the speech signal and linguistic categories, listeners experience phonetic constancy across a wide range of talkers, speaking styles, linguistic contexts, and acoustic environments. The authors present evidence that perceptual sensitivity to talker variability involves an active cognitive mechanism: Listeners expecting to hear 2 different talkers differing only slightly in average pitch showed performance costs typical of adjusting to talker variability, whereas listeners hearing the same materials but expecting a single talker or given no special instructions did not show these performance costs. The authors discuss the implications for understanding phonetic constancy despite variability between talkers (and other sources of variability) and for theories of speech perception. The results provide further evidence for active, controlled processing in real-time speech perception and are consistent with a model of talker normalization that involves contextual tuning.

20.
The three experiments reported in this study were each conducted in two phases. The first phase of Experiment 1 involved a same-different comparison task requiring "same" responses for both mixed-case (e.g., MAIN main) and pure-case (e.g., near near) pairs. This was followed by Phase 2, a surprise recognition test in which a graphemic effect on word retention was indicated by the superior recognition accuracy obtained for pure-case compared with mixed-case pairs. The first phases of Experiments 2 and 3 involved pronounceability and imageability judgment tasks, respectively. Graphemic retention was assessed by contrasting recognition accuracy for letter strings presented, during Phase 2, in their original Phase 1 case, with letter strings presented, during Phase 2, in a graphemically dissimilar new case. The experiments provided evidence that there was minimal retention of the graphemic representations from which the phonemic representations of words are generated and, further, that the locus of this effect is probably postlexical. Nonwords were recognized more accurately than words in all three experiments. The latter result was attributed to differences between nonwords and words in both graphemic retention and semantic distinctiveness.
