Similar documents
20 similar documents found.
1.
2.
Three experiments are reported that extend previous observations on the slowing of lexical decisions by phonologically ambiguous forms of Serbo-Croatian words relative to the phonologically unambiguous forms of the same words. The phonological ambiguity arises from the presence of letters whose phoneme interpretation differs between the two Serbo-Croatian alphabets, the Roman and the Cyrillic. In the first experiment, target words were preceded by asterisks or by context words that were associatively related and alphabetically matched to the targets. The effect of word contexts (i.e., priming) was greater for phonologically ambiguous than for phonologically unambiguous targets. The second experiment manipulated the alphabetic match of context word and target word. The effect of this manipulation was limited to phonologically ambiguous words. The third experiment reproduced the details of the first with the addition of visual degradation of the target stimuli on half of the trials. The results of the first experiment were replicated but no interaction between context and visual degradation was observed. The discussion focused on phonologically mediated access of Serbo-Croatian words. A model was proposed in which phonological codes are assembled prelexically according to weighted grapheme-phoneme correspondence rules.
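The closing sentence describes a prelexical assembly mechanism built from weighted grapheme-phoneme correspondence (GPC) rules. The sketch below illustrates how such rules might generate competing phonological codes for a letter string whose letters are shared by the Roman and Cyrillic alphabets; the rule table, the weights, and the example strings are illustrative assumptions, not the authors' implementation.

```python
from itertools import product

# Illustrative weighted grapheme-phoneme correspondence rules.
# Letters shared by the Roman and Cyrillic alphabets but read
# differently in each receive one weighted entry per reading;
# unambiguous letters receive a single entry with weight 1.0.
GPC_RULES = {
    "A": [("a", 1.0)],
    "T": [("t", 1.0)],
    "O": [("o", 1.0)],
    "B": [("b", 0.6), ("v", 0.4)],  # Roman /b/ vs. Cyrillic /v/
    "P": [("p", 0.6), ("r", 0.4)],  # Roman /p/ vs. Cyrillic /r/
    "H": [("x", 0.6), ("n", 0.4)],  # Roman /h/ vs. Cyrillic /n/
}

def assemble_codes(letter_string):
    """Return every prelexically assembled phonological code for the
    printed string, scored by the product of its rule weights."""
    per_letter = [GPC_RULES[letter] for letter in letter_string]
    codes = []
    for combination in product(*per_letter):
        phonemes = "".join(p for p, _ in combination)
        weight = 1.0
        for _, w in combination:
            weight *= w
        codes.append((phonemes, weight))
    return sorted(codes, key=lambda code: -code[1])

# A hypothetical ambiguous string yields several weighted codes,
# whereas an alphabetically unambiguous string yields exactly one.
print(assemble_codes("PAT"))   # [('pat', 0.6), ('rat', 0.4)]
print(assemble_codes("TOTA"))  # [('tota', 1.0)]
```

On this scheme, a phonologically ambiguous form reaches the lexicon with more than one candidate code, which is one way to capture the lexical-decision slowing reported above.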

3.
R. Frost (1991). Cognition, 39(3), 195-214.
When an amplitude-modulated noise generated from a spoken word is presented simultaneously with the word's printed version, the noise sounds more speechlike. This auditory illusion obtained by Frost, Repp, and Katz (1988) suggests that subjects detect correspondences between speech amplitude envelopes and printed stimuli. The present study investigated whether the speech envelope is assembled from the printed word or whether it is lexically addressed. In two experiments subjects were presented with speech-plus-noise and with noise-only trials, and were required to detect the speech in the noise. The auditory stimuli were accompanied with matching or non-matching Hebrew print, which was unvoweled in Experiment 1 and voweled in Experiment 2. The stimuli of both experiments consisted of high-frequency words, low-frequency words, and non-words. The results demonstrated that matching print caused a strong bias to detect speech in the noise when the stimuli were either high- or low-frequency words, whereas no bias was found for non-words. The bias effect for words or non-words was not affected by spelling to sound regularity; that is, similar effects were obtained in the voweled and the unvoweled conditions. These results suggest that the amplitude envelope of the word is not assembled from the print. Rather, it is addressed directly from the printed word and retrieved from the mental lexicon. Since amplitude envelopes are contingent on detailed phonetic structures, this outcome suggests that representations of words in the mental lexicon are not only phonological but also phonetic in character.

4.
In two experiments, participants' eye movements were monitored as they read sentences containing biased syntactic category ambiguous words with either distinct (e.g., duck) or related (e.g., burn) meanings or unambiguous control words. In Experiment 1, prior context was consistent with either the dominant or subordinate interpretation of the ambiguous word. The subordinate bias effect was absent for the ambiguous words in gaze duration measures. However, effects of ambiguity did emerge in other measures for the ambiguous words preceded by context supporting the subordinate interpretation. In Experiment 2, context preceding the target words was neutral. Ambiguity effects only arose when posttarget context was consistent with the subordinate interpretation of the ambiguous words, indicating that readers initially selected the dominant interpretation. Results support immediate theories of syntactic category ambiguity resolution, but also suggest that recovery from misanalysis of syntactic category ambiguity is more difficult than for lexical-semantic ambiguity in which alternate interpretations do not cross syntactic category.

5.
Readers' eye movements were monitored as they read sentences containing lexically ambiguous words whose meanings share a single syntactic category (e.g., calf), lexically ambiguous words whose meanings belong to different syntactic categories (e.g., duck), or unambiguous control words. Information provided prior to the target always unambiguously specified the context-appropriate syntactic-category assignment for the target. Fixation times were longer on ambiguous words whose meanings share a single syntactic category than on controls, both when prior context was semantically consistent with the subordinate interpretation of a biased ambiguous word (Experiment 1) and when prior context was semantically neutral as to the intended interpretation of a balanced ambiguous word (Experiment 2). These ambiguity effects, which resulted from differences in difficulty with meaning resolution, were not found when the ambiguity crossed syntactic categories. These data indicate that, in the absence of syntactic ambiguity, syntactic-category information mediates the semantic-resolution process.

6.
We investigated the effect of semantic and phonemic ambiguity on lexical decision and naming performance in the deep Hebrew orthography. Experiment 1 revealed that lexical decisions for ambiguous consonant strings are faster than those for any of the high- or low-frequency voweled alternative meanings of the same strings. These results suggested that lexical decisions for phonemically and semantically ambiguous Hebrew consonant strings are based on the ambiguous orthographic information. However, a significant frequency effect for both ambiguous and unambiguous words suggested that if vowels are present, subjects do not ignore them completely while making lexical decisions. Experiment 2 revealed that naming low-frequency voweled alternatives of ambiguous strings took significantly longer than naming the high-frequency alternatives or the unvoweled strings without a significant difference between the latter two string types. Voweled and unvoweled unambiguous strings, however, were named equally fast. We propose that semantic and phonological disambiguation of unvoweled words in Hebrew is achieved in parallel to the lexical decision, but is not required by it. Naming Hebrew words usually requires a readout of phonemic information from the lexicon.

7.
A discrete-trials color naming (Stroop) paradigm was used to examine activation along orthographic and phonological dimensions in visual and auditory word recognition. Subjects were presented a prime word, either auditorily or visually, followed 200 msec later by a target word printed in a color. The orthographic and phonological similarity of prime-target pairs varied. Color naming latencies were longer when the primes and targets were orthographically and/or phonologically similar than when they were unrelated. This result obtained for both prime presentation modes. The results suggest that word recognition entails activation of multiple codes and priming of orthographically and phonologically similar words.

8.
We studied the influence of word frequency and orthographic depth on the interaction of orthographic and phonetic information in word perception. Native speakers of English and Serbo-Croatian were presented with simultaneous printed and spoken verbal stimuli and had to decide whether they were equivalent. Decision reaction time was measured in three experimental conditions: Clear print and clear speech, degraded print and clear speech, and clear print and degraded speech. Within each language, the effects of visual and auditory degradation were measured, relative to the undegraded presentation. Both effects of degradation were much stronger in English than in Serbo-Croatian. Moreover, they were the same for high- and low-frequency words in both languages. These results can be accounted for by a parallel interactive processing model that assumes lateral connections between the orthographic and phonological systems at all of their levels. The structure of these lateral connections is independent of word frequency and is determined by the relationship between spelling and phonology in the language: simple isomorphic connections between graphemes and phonemes in Serbo-Croatian, but more complex, many-to-one, connections in English.
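The final sentence attributes the cross-language difference to the structure of the lateral connections: essentially one-to-one grapheme-phoneme links in Serbo-Croatian versus more complex, many-to-one links in English. A minimal data-structure sketch of that contrast is below; the specific mappings and the fan-in measure are illustrative assumptions rather than the authors' model.

```python
# Illustrative lateral connections between phonological and orthographic
# units. In a shallow orthography each phoneme is linked to one spelling;
# in English several spellings converge on the same phoneme (the
# "many-to-one" connections mentioned above).
SERBO_CROATIAN = {"k": {"k"}, "s": {"s"}, "t": {"t"}, "a": {"a"}}
ENGLISH = {
    "f":  {"f", "ff", "ph", "gh"},   # fin, off, photo, laugh
    "k":  {"k", "c", "ck", "ch"},    # kid, cat, back, chord
    "i:": {"ee", "ea", "ie", "e"},   # see, sea, field, me
}

def mean_fan_in(connections):
    """Average number of orthographic units wired to each phoneme: a
    crude index of how complex the lateral connections are."""
    return sum(len(spellings) for spellings in connections.values()) / len(connections)

print("Serbo-Croatian fan-in:", mean_fan_in(SERBO_CROATIAN))  # 1.0
print("English fan-in:", mean_fan_in(ENGLISH))                # 4.0
```

On this view, degradation should hurt more in English simply because noisier input must be reconciled with a denser, less direct set of connections, independent of word frequency.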

9.
The recognition advantage for Chinese two-character polysemous words
Lexical decision and naming tasks were used to examine the recognition advantage for Chinese two-character polysemous words. The results showed a recognition advantage for polysemous over single-meaning words in the lexical decision task, but this advantage appeared only for low-frequency words. No polysemy advantage was found in the naming task. A tentative account of the recognition advantage for two-character polysemous words is offered in terms of distributed-representation models.

10.

The nondeterministic relationship between speech acoustics and abstract phonemic representations imposes a challenge for listeners to maintain perceptual constancy despite the highly variable acoustic realization of speech. Talker normalization facilitates speech processing by reducing the degrees of freedom for mapping between encountered speech and phonemic representations. While this process has been proposed to facilitate the perception of ambiguous speech sounds, it is currently unknown whether talker normalization is affected by the degree of potential ambiguity in acoustic-phonemic mapping. We explored the effects of talker normalization on speech processing in a series of speeded classification paradigms, parametrically manipulating the potential for inconsistent acoustic-phonemic relationships across talkers for both consonants and vowels. Listeners identified words with varying potential acoustic-phonemic ambiguity across talkers (e.g., beet/boat vs. boot/boat) spoken by single or mixed talkers. Auditory categorization of words was always slower when listening to mixed talkers compared to a single talker, even when there was no potential acoustic ambiguity between target sounds. Moreover, the processing cost imposed by mixed talkers was greatest when words had the most potential acoustic-phonemic overlap across talkers. Models of acoustic dissimilarity between target speech sounds did not account for the pattern of results. These results suggest (a) that talker normalization incurs the greatest processing cost when disambiguating highly confusable sounds and (b) that talker normalization appears to be an obligatory component of speech perception, taking place even when the acoustic-phonemic relationships across sounds are unambiguous.

11.
In listeners' daily communicative exchanges, they most often hear casual speech, in which words are often produced with fewer segments, rather than the careful speech used in most psycholinguistic experiments. Three experiments examined phonological competition during the recognition of reduced forms such as [pjutər] for computer using a target-absent variant of the visual world paradigm. Listeners' eye movements were tracked upon hearing canonical and reduced forms as they looked at displays of four printed words. One of the words was phonologically similar to the canonical pronunciation of the target word, one word was similar to the reduced pronunciation, and two words served as unrelated distractors. When spoken targets were presented in isolation (Experiment 1) and in sentential contexts (Experiment 2), competition was modulated as a function of the target word form. When reduced targets were presented in sentential contexts, listeners were probabilistically more likely to first fixate reduced-form competitors before shifting their eye gaze to canonical-form competitors. Experiment 3, in which the original /p/ from [pjutər] was replaced with a “real” onset /p/, showed an effect of cross-splicing in the late time window. We conjecture that these results fit best with the notion that speech reductions initially activate competitors that are similar to the phonological surface form of the reduction, but that listeners nevertheless can exploit fine phonetic detail to reconstruct strongly reduced forms to their canonical counterparts.

12.
The interpretation of emotionally ambiguous words, sentences, or scenarios can be altered through training procedures that are collectively called cognitive bias modification for interpretation (CBM-I). In three experiments, we systematically manipulated the nature of the training in order to discriminate between emotional priming and ambiguity resolution accounts of training effects. In Experiment 1 participants completed word fragments that were consistently related to either a negative or benign interpretation of an ambiguous sentence. In a subsequent semantic priming task they demonstrated an interpretation bias, in that they were faster to identify relatedness of targets that were associated with the training-congruent meaning of an emotionally ambiguous homograph. We then manipulated the training sentences to show that interpretation bias was eliminated when participants simply completed valenced word fragments following unrelated sentences (Experiment 2), or completed fragments that were related to emotional but unambiguous sentences (Experiment 3). Only when participants were required to actively resolve emotionally ambiguous sentences during training did changes in interpretation emerge at test. Findings suggest that CBM-I achieves its effects by altering a production rule that aids the selection of meaning from emotionally ambiguous alternatives, in line with an ambiguity resolution account.
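The ambiguity-resolution account in the final sentence can be read as a production rule that is strengthened only on trials where the reader actively resolves an ambiguous sentence. The toy sketch below makes that contrast with emotional priming explicit; the rule format, learning rate, and example homograph are illustrative assumptions, not the authors' model.

```python
# A toy production rule for interpreting emotionally ambiguous homographs.
# Training that requires resolving the ambiguity strengthens the rule;
# merely producing valenced material (the comparison conditions) does not.
class InterpretationRule:
    def __init__(self):
        self.benign_bias = 0.5  # neutral starting bias toward the benign reading

    def train(self, resolved_ambiguity, trained_valence, rate=0.1):
        """Strengthen the rule only on trials where the reader had to
        actively resolve an ambiguous sentence toward one meaning."""
        if not resolved_ambiguity:
            return  # emotional priming alone: rule unchanged
        target = 1.0 if trained_valence == "benign" else 0.0
        self.benign_bias += rate * (target - self.benign_bias)

    def interpret(self, meanings):
        """Choose between a (benign, negative) pair of candidate meanings."""
        benign, negative = meanings
        return benign if self.benign_bias >= 0.5 else negative

rule = InterpretationRule()
for _ in range(20):  # training that forces resolution toward negative meanings
    rule.train(resolved_ambiguity=True, trained_valence="negative")
print(rule.interpret(("healthy growth", "cancerous growth")))  # negative reading
```

Under this reading, the comparison conditions of Experiments 2 and 3 correspond to calls with resolved_ambiguity=False, which leave the rule, and hence test-phase interpretation, unchanged.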

13.
In two experiments the allocation of attention during the recognition of ambiguous and unambiguous words was investigated. In Experiment 1, separate groups performed either lexical decision, auditory probe detection, or their combination. In the combined condition probes occurred 90, 180, or 270 ms following the onset of the lexical-decision target. Lexical decisions and probe responses were fastest for ambiguous words, followed by unambiguous words and pseudowords, respectively, which indicated that processing ambiguous words was less attention demanding than unambiguous words or pseudowords. Attention demands decreased across the timecourse of word recognition for all stimulus types. In Experiment 2, one group performed the lexical-decision task alone, whereas another group performed the lexical-decision task during the retention interval of a short-term memory task. The results were consistent with those from Experiment 1 and showed that word recognition is an attention-demanding process and that the demands are inversely related to the number of meanings of the stimulus. These results are discussed with regard to the structure of the mental lexicon (i.e., single vs. multiple lexical entries) and the effect of such a structure on attentional mechanisms.

14.
Four visual-world experiments, in which listeners heard spoken words and saw printed words, compared an optimal-perception account with the theory of phonological underspecification. This theory argues that default phonological features are not specified in the mental lexicon, leading to asymmetric lexical matching: Mismatching input (pin) activates lexical entries with underspecified coronal stops (tin), but lexical entries with specified labial stops (pin) are not activated by mismatching input (tin). The eye-tracking data failed to show such a pattern. Although words that were phonologically similar to the spoken target attracted more looks than did unrelated distractors, this effect was symmetric in Experiment 1 with minimal pairs (tin-pin) and in Experiments 2 and 3 with words with an onset overlap (peacock-teacake). Experiment 4 revealed that /t/-initial words were looked at more frequently if the spoken input mismatched only in terms of place than if it mismatched in place and voice, contrary to the assumption that /t/ is unspecified for place and voice. These results show that speech perception uses signal-driven information to the fullest, as was predicted by an optimal perception account.
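The asymmetric-matching prediction attributed to underspecification can be stated as a simple feature-matching rule: an unspecified lexical feature never conflicts with the input, whereas a specified feature must match exactly. The sketch below is an illustrative rendering of that prediction (the feature values and entries are assumptions), which is the pattern the eye-tracking data reported here did not show.

```python
# Toy lexicon under the underspecification account: place (and, per the
# assumption tested in Experiment 4, voice) is left unspecified (None)
# for the coronal-initial entry, but fully specified for the labial one.
UNDERSPECIFIED_LEXICON = {
    "tin": {"place": None,     "voice": None},
    "pin": {"place": "labial", "voice": "voiceless"},
}

def matches(input_features, entry):
    """Asymmetric matching: an unspecified feature (None) tolerates any
    input value; a specified feature must be matched exactly."""
    return all(value is None or input_features.get(feature) == value
               for feature, value in entry.items())

spoken_pin = {"place": "labial",  "voice": "voiceless"}
spoken_tin = {"place": "coronal", "voice": "voiceless"}

# Predicted asymmetry: hearing "pin" activates the entry for "tin",
# but hearing "tin" does not activate the entry for "pin".
print(matches(spoken_pin, UNDERSPECIFIED_LEXICON["tin"]))  # True
print(matches(spoken_tin, UNDERSPECIFIED_LEXICON["pin"]))  # False
```

The symmetric looking behaviour actually observed is what a fully specified lexicon (both entries carrying explicit place and voice values) would predict under this same matching rule.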

15.
Snoeren, N. D., Seguí, J., & Hallé, P. A. (2008). Cognition, 108(2), 512-521.
The present study investigated whether lexical access is affected by a regular phonological variation in connected speech: voice assimilation in French. Two associative priming experiments were conducted to determine whether strongly assimilated, potentially ambiguous word forms activate the conceptual representation of the underlying word. Would the ambiguous word form [sud] (either assimilated soute 'hold' or soude 'soda') facilitate "bagage" 'luggage', which is semantically related to soute but not to soude? In Experiment 1, words in either canonical or strongly assimilated form were presented as primes. Both forms primed their related target to the same extent. Potential lexical ambiguity did not modulate priming effects. In Experiment 2, the primes such as assimilated soute pronounced [sud] used in Experiment 1 were replaced with primes such as soude canonically pronounced [sud]. No semantic priming effect was obtained with these primes. Therefore, the effect observed for assimilated forms in Experiment 1 cannot be due to overall phonological proximity between canonical and assimilated forms. We propose that listeners must recover the intended words behind the assimilated forms through the exploitation of the remaining traces of the underlying form, however subtle these traces may be.

16.
Speakers only sometimes include the that in sentence complement structures like The coach knew (that) you missed practice. Six experiments tested the predictions concerning optional word mention of two general approaches to language production. One approach claims that language production processes choose syntactic structures that ease the task of creating sentences, so that words are spoken opportunistically, as they are selected for production. The second approach claims that a syntactic structure is chosen that is easiest to comprehend, so that optional words like that are used to avoid temporarily ambiguous, difficult-to-comprehend sentences. In all experiments, speakers did not consistently include optional words to circumvent a temporary ambiguity, but they did omit optional words (the complementizer that) when subsequent material was either repeated (within a sentence) or prompted with a recall cue. The results suggest that speakers choose syntactic structures to permit early mention of available material and not to circumvent disruptive temporary ambiguities.

17.
Recognition of a spoken word phonological variant, schwa vowel deletion (e.g., corporate → corp'rate), was investigated in vowel detection (absent/present) and syllable number judgment (two or three syllables) tasks. Variant frequency corpus analyses (Patterson, LoCasto, & Connine, 2003) were used to select words with either high or low schwa vowel deletion rates. Speech continua were created for each word in which schwa vowel length was manipulated (unambiguous schwa-present and schwa-absent endpoints, along with intermediate ambiguous tokens). Matched control nonwords were created with identical schwa vowel continua and surrounding segments. The low-deletion-rate words showed more three-syllable judgments than did the high-deletion-rate words. Matched control nonwords did not differ as a function of deletion rate. Experiments 2 and 3 showed a lexical decision reaction time advantage for more frequent surface forms, as compared with infrequent ones, for schwa-deleted (Experiment 2) and schwa-present (Experiment 3) stimuli. The results are discussed in terms of representations of variant forms of words based on variant frequency.

18.
Lexical access for low- and high-frequency words in Hebrew
The hypothesis that phonological mediation is involved to a greater extent in the recognition of low- than in the recognition of high-frequency words was examined using Hebrew. Hebrew has two forms of spelling, pointed and unpointed, which differ greatly in the extent of phonological ambiguity, with the unpointed spelling lacking almost all vowel information. A lexical decision task was employed using target words that had only one pronunciation whether pointed or unpointed. Targets were either pointed or unpointed and were preceded by a prime, which, for word targets, was either semantically related or unrelated. The results indicated the following: First, the advantage of pointed over unpointed spelling was larger for low-frequency than for high-frequency words, suggesting a stronger phonological mediation for low-frequency words. Second, the size of the pointing effect was independent of word length, suggesting that phonology is obtained on the basis of the printed word as a whole, by looking it up in a phonological lexicon. Third, response latency to nonwords was not affected by the presence or absence of pointing, suggesting that failure to locate the entry corresponding to a letter string in a phonological lexicon results in a NO decision. Fourth, presence of a related prime was not found to compensate for absence of pointing, suggesting that the activation of a word's representation in the semantic lexicon does not aid access to its corresponding entry in the phonological lexicon.

19.
Three experiments examined the contribution of phonological availability in selecting words as predicted by interactive activation models of word production. Homophonous words such as week and weak permitted a word's phonological form to be activated on priming trials without selection of its meaning or lemma. Recent production of a homophone failed to significantly increase production of its twin as a sentence completion. However, speakers were significantly more likely to complete a sentence with a recently read or generated unambiguous word. This increase in response probability was unaffected by word frequency. The results constrain the degree to which experience and phonological availability may affect word selection in spoken language production.

20.
Two experiments were performed in an attempt to evaluate explanations of repetition priming: the facilitation observed when the same word is processed a second time in the same task. One task employed was lexical decision (word/nonword) and the other was ambiguity decision (ambiguous/unambiguous). In the first experiment, transfer on a lexical decision task was measured following either a lexical decision or an ambiguity decision. When the identical lists were processed in the first phase for lexical and ambiguity decision, equal repetition effects were obtained on lexical decision. However, when the ambiguity task was presented without nonwords, no repetition priming occurred. In a second experiment, the within-task repetition effect was large for the ambiguity decision, whereas no transfer was obtained from lexical decision to ambiguity decision. The results were interpreted as being consistent with a transfer-appropriate processing account of repetition priming.
