Similar Articles
20 similar articles found.
1.
Words varying in length (one, two, and three syllables) and in frequency (high and low) were presented to subjects in isolation, in a short context, and in a long context. Each word was presented repeatedly, and its presentation time (duration from the onset of the word) increased at each successive pass. After each pass, subjects were asked to write down the word being presented and to indicate how confident they were about each guess. In addition to replicating a frequency, a context, and a word-length effect, this “gating” paradigm allowed us to study more closely the narrowing-in process employed by listeners in the isolation and recognition of words: Some delay appears to exist between the moment a word is isolated from other word candidates and the moment it is recognized; word candidates differ in number and in type from one context to the other; and, like syntactic processing, word recognition is strewn with garden paths. The active direct access model proposed by Marslen-Wilson and Welsh is discussed in light of these findings.
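As an illustration of the gating procedure described above, the following is a minimal sketch, not taken from the study itself: it assumes a monaural waveform stored as a NumPy array, a 16 kHz sample rate, and a fixed 50 ms gate increment (all illustrative choices), and slices the word from its onset at successively longer durations, one slice per pass.

```python
# Minimal sketch of gated-stimulus construction (illustrative parameters only).
import numpy as np

SAMPLE_RATE = 16_000  # Hz, assumed
STEP_MS = 50          # gate increment in ms, assumed

def make_gates(waveform: np.ndarray,
               sample_rate: int = SAMPLE_RATE,
               step_ms: int = STEP_MS) -> list[np.ndarray]:
    """Return successive onset-aligned truncations of a spoken word."""
    step = int(sample_rate * step_ms / 1000)  # samples added per pass
    gates = []
    end = step
    while end < len(waveform) + step:
        gates.append(waveform[:min(end, len(waveform))])  # onset .. current gate
        end += step
    return gates

# Dummy 600 ms "word" of noise standing in for a recorded token.
word = np.random.randn(int(0.6 * SAMPLE_RATE))
for i, gate in enumerate(make_gates(word), start=1):
    print(f"pass {i}: {1000 * len(gate) / SAMPLE_RATE:.0f} ms presented")
```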

2.
Onset gating was used to investigate the effects of stress typicality during the processing of disyllabic nouns and verbs by 34 native and 36 nonnative speakers of English. We utilized 50-msec increments and included two conditions. In the silenced condition, only word onsets were presented (the participants had no information about the duration or stress pattern of the entire word). In the filtered condition, word onsets were presented with a low-pass filtered version of the remainder of the word (this type of filtering provides duration and stress information in the absence of phonemic information). The results demonstrated significant effects of stress typicality in both groups of speakers. Typically stressed trochaic nouns and iambic verbs exhibited advantaged processing, as compared with atypically stressed iambic nouns and trochaic verbs. There was no significant effect of presentation condition (silenced or filtered). The results are discussed in light of recent research in which the effects of lexical stress during spoken word recognition have been investigated.

3.
Cognitive Development, 1988, 3(2): 137-165
The nature of the stimulus information that is important for the recognition of auditorily presented words by young (5-year-old) children and adults was studied. In Experiment 1, subjects identified and rated the extent of noise disruption for words in which white noise either was added to or replaced phoneme (fricative and nonfricative) segments in word-initial, -medial or -final position. In Experiment 2, subjects identified words as acoustic-phonetic information accumulated either from their beginnings or ends with silence or envelope-shaped noise replacing the nonpresented parts. The results point to developmental similarities in the derivation of phoneme identities from impoverished sensory input to support the component processes of recognition. However, position-specific information may play a less prominent role in recognition for children than for adults.

4.
This study explores incremental processing in spoken word recognition in Russian 5- and 6-year-olds and adults using free-viewing eye-tracking. Participants viewed scenes containing pictures of four familiar objects and clicked on a target embedded in a spoken instruction. In the cohort condition, two object names shared identical three-phoneme onsets. In the noncohort condition, all object names had unique onsets. Coarse-grain analyses of eye movements indicated that adults produced looks to the competitor on significantly more cohort trials than on noncohort trials, whereas children surprisingly failed to demonstrate cohort competition due to widespread exploratory eye movements across conditions. Fine-grain analyses, in contrast, showed a similar time course of eye movements across children and adults, but with cohort competition lingering more than 1 s longer in children. The dissociation between coarse-grain and fine-grain eye movements indicates a need to consider multiple behavioral measures in making developmental comparisons in language processing.

5.
6.
In this paper we examine whether the recognition of a spoken noun is affected by the gender marking—masculine or feminine—that is carried by a preceding word. In the first of two experiments, the gating paradigm was used to study the access of French nouns that were preceded by an appropriate gender marking, carried by an article, or preceded by no gender marking. In the second experiment, subjects were asked to make a lexical decision on the same material. A very strong facilitatory effect was found in both cases. The origin of the gender-marking effect is discussed, as well as the level of processing involved—lexical or syntactic.

7.
In this study, we examined the influence of various sources of constraint on spoken word recognition in a mispronunciation-detection task. Five- and 8-year-olds and adults were presented with words (intact or with word-initial or noninitial errors) from three different age-of-acquisition categories. "Intact" and "mispronounced" responses were collected for isolated words with or without a picture referent (Experiment 1) and for words in constraining or unconstraining sentences (Experiment 2). Some evidence for differential attention to word-initial as opposed to noninitial acoustic-phonetic information (and thus the influence of sequential lexical constraints on recognition) was apparent in young children's and adults' response criteria and in older children's and adults' reaction times. A more marked finding, however, was the variation in subjects' performance, according to several measures, with age and lexical familiarity (defined according to adults' subjective age-of-acquisition estimates). Children's strategies for responding to familiar and unfamiliar words in different contexts are discussed.

8.
9.
Connine, Blasko, and Hall (Journal of Memory and Language 30:234–250, 1991) suggested that within a 1-second temporal window, subsequent biasing information can influence the identification of a previously spoken word. Four experiments further explored this hypothesis. Our participants heard sentences in which an ambiguous target word was followed less than or more than a second later by a word biased in favor of either the target word or another word. Overall, the effects of the contextual biases on responding, measured using phonemic restoration and phoneme identification, were almost as large after 1 second as before 1 second. The implications of these results for defining the window of contextual effects are discussed.

10.
In 4 chronometric experiments, influences of spoken word planning on speech recognition were examined. Participants were shown pictures while hearing a tone or a spoken word presented shortly after picture onset. When a spoken word was presented, participants indicated whether it contained a prespecified phoneme. When the tone was presented, they indicated whether the picture name contained the phoneme (Experiment 1) or they named the picture (Experiment 2). Phoneme monitoring latencies for the spoken words were shorter when the picture name contained the prespecified phoneme compared with when it did not. Priming of phoneme monitoring was also obtained when the phoneme was part of spoken nonwords (Experiment 3). However, no priming of phoneme monitoring was obtained when the pictures required no response in the experiment, regardless of monitoring latency (Experiment 4). These results provide evidence that an internal phonological pathway runs from spoken word planning to speech recognition and that active phonological encoding is a precondition for engaging the pathway.

11.
In a cross-modal matching task, participants were asked to match visual and auditory displays of speech based on the identity of the speaker. The present investigation used this task with acoustically transformed speech to examine the properties of sound that can convey cross-modal information. Word recognition performance was also measured under the same transformations. The authors found that cross-modal matching was only possible under transformations that preserved the relative spectral and temporal patterns of formant frequencies. In addition, cross-modal matching was only possible under the same conditions that yielded robust word recognition performance. The results are consistent with the hypothesis that acoustic and optical displays of speech simultaneously carry articulatory information about both the underlying linguistic message and indexical properties of the talker.

12.
The nature of phoneme representation in spoken word recognition
Four experiments used the psychological refractory period logic to examine whether integration of multiple sources of phonemic information has a decisional locus. All experiments made use of a dual-task paradigm in which participants made forced-choice color categorization (Task 1) and phoneme categorization (Task 2) decisions at varying stimulus onset asynchronies. In Experiment 1, Task 2 difficulty was manipulated using words containing matching or mismatching coarticulatory cues to the final consonant. The results showed that difficulty and onset asynchrony combined in an underadditive way, suggesting that the phonemic mismatch was resolved prior to a central decisional bottleneck. Similar results were found in Experiment 2 using nonwords. In Experiment 3, the manipulation of task difficulty involved lexical status, which once again revealed an underadditive pattern of response times. Finally, Experiment 4 compared this prebottleneck variable with a decisional variable: response key bias. The latter showed an additive pattern of responses. The experiments show that resolution of phonemic ambiguity can take advantage of cognitive slack time at short asynchronies, indicating that phonemic integration takes place at a relatively early stage of spoken word recognition.

13.
The sounds that make up spoken words are heard in a series and must be mapped rapidly onto words in memory because their elements, unlike those of visual words, cannot simultaneously exist or persist in time. Although theories agree that the dynamics of spoken word recognition are important, they differ in how they treat the nature of the competitor set, that is, precisely which words are activated as an auditory word form unfolds in real time. This study used eye tracking to measure the impact over time of word frequency and 2 partially overlapping competitor set definitions: onset density and neighborhood density. Time course measures revealed early and continuous effects of frequency (facilitatory) and onset-based similarity (inhibitory). Neighborhood density appears to have early facilitatory effects and late inhibitory effects. The late inhibitory effects are due to differences in the temporal distribution of similarity within neighborhoods. The early facilitatory effects are due to subphonemic cues that inform the listener about word length before the entire word is heard. The results support a new conception of lexical competition neighborhoods in which recognition occurs against a background of activated competitors that changes over time based on fine-grained goodness-of-fit and competition dynamics.

14.
Models of spoken word recognition vary in the ways in which they capture the relationship between speech input and meaning. Modular accounts prohibit a word’s meaning from affecting the computation of its form-based representation, whereas interactive models allow activation at the semantic level to affect phonological processing. We tested these competing hypotheses by manipulating word familiarity and imageability, using lexical decision and repetition tasks. Responses to high-imageability words were significantly faster than those to low-imageability words. Repetition latencies were also analyzed as a function of cohort variables, revealing a significant imageability effect only for words that were members of large cohorts, suggesting that when the mapping from phonology to semantics is difficult, semantic information can help the discrimination process. Thus, these data support interactive models of spoken word recognition.

15.
16.
In many domains of cognitive processing there is strong support for bottom-up priority and delayed top-down (contextual) integration. We ask whether this applies to supra-lexical context that could potentially constrain lexical access. Previous findings of early context integration in word recognition have typically used constraints that can be linked to pair-wise conceptual relations between words. Using an artificial lexicon, we found immediate integration of syntactic expectations based on pragmatic constraints linked to syntactic categories rather than words: phonologically similar "nouns" and "adjectives" did not compete when a combination of syntactic and visual information strongly predicted form class. These results suggest that predictive context is integrated continuously, and that previous findings supporting delayed context integration stem from weak contexts rather than delayed integration.

17.
Phonological priming in spoken word recognition: Task effects
In two experiments, we examined the role of phonological relatedness between spoken items using both the lexical decision task and the shadowing task. In Experiment 1, words were used as primes and overlaps of zero (control), one, two, or all four or five (repetition) phonemes were compared. Except for the repetition conditions, in which facilitation was found, phonological overlap resulted in interference on word responses. These effects occurred in both tasks but were larger in lexical decision than in shadowing. The effects that were evident in shadowing can be attributed to an attentional mechanism linked to the subjects' expectancies of repetitions. The extra effects obtained in lexical decision can be interpreted by taking into account both activation of the response corresponding to the prime's lexical status and postlexical processes that check for phonological congruency between prime and target. In Experiment 2, some modifications were introduced to prevent the involvement of strategic factors, and pseudowords were used as primes. No effect at all was observed in shadowing, whereas in lexical decision interference effects occurred, which is consistent with the hypothesis that lexical decision may be negatively affected by finding a phonological discrepancy at the same time as the primed response is reactivated. Neither experiment provided evidence for the occurrence of phonological priming in the perceptual processing of words.

18.
It has been known for some time that the recognition of a noun is affected by the gender marking, such as masculine or feminine, that is carried by a preceding word. In this study, we used auditory naming to examine how early and late English-French bilinguals react to gender marking when processing French. The early bilinguals showed clear facilitation and inhibition effects, but the late bilinguals were totally insensitive to gender marking, whether congruent or incongruent. The results are discussed in terms of current accounts of gender processing as well as age of acquisition and regular use of the gender-marking language.

19.
In three experiments, the processing of words that had the same overall number of neighbors but varied in the spread of the neighborhood (i.e., the number of individual phonemes that could be changed to form real words) was examined. In an auditory lexical decision task, a naming task, and a same-different task, words in which changes at only two phoneme positions formed neighbors were responded to more quickly than words in which changes at all three phoneme positions formed neighbors. Additional analyses ruled out an account based on the computationally derived uniqueness points of the words. Although previous studies (e.g., Luce & Pisoni, 1998) have shown that the number of phonological neighbors influences spoken word recognition, the present results show that the nature of the relationship of the neighbors to the target word, as measured by the spread of the neighborhood, also influences spoken word recognition. The implications of this result for models of spoken word recognition are discussed.
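To make the two neighborhood measures concrete, here is a minimal sketch under stated assumptions: a toy lexicon of phoneme-string tuples and substitution-only neighbors (the actual stimuli and phoneme inventory of the study are not reproduced here). It contrasts overall neighborhood density (the total number of one-phoneme-substitution neighbors) with the spread measure the abstract defines (how many phoneme positions yield at least one real-word neighbor).

```python
# Toy illustration of neighborhood density vs. neighborhood spread (assumed lexicon).
from itertools import product

LEXICON = {("k", "ae", "t"), ("b", "ae", "t"), ("k", "ao", "t"),
           ("k", "ae", "p"), ("k", "ae", "n"), ("m", "ae", "t")}
PHONEMES = {p for word in LEXICON for p in word}

def neighbors_by_position(word):
    """Map each phoneme position to the real words formed by substituting at that position."""
    found = {i: set() for i in range(len(word))}
    for i, alt in product(range(len(word)), PHONEMES):
        candidate = word[:i] + (alt,) + word[i + 1:]
        if candidate != word and candidate in LEXICON:
            found[i].add(candidate)
    return found

target = ("k", "ae", "t")
by_pos = neighbors_by_position(target)
density = sum(len(words) for words in by_pos.values())  # total neighbor count
spread = sum(1 for words in by_pos.values() if words)   # positions with >= 1 neighbor
print(density, spread)  # -> 5 3: five neighbors, spread across all three positions
```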

20.
Cognition, 1986, 22(3): 259-282
Predictions derived from the Cohort Model of spoken word recognition were tested in four experiments using an auditory lexical decision task. The first experiment produced results that were compatible with the model, in that the point at which a word could be uniquely identified appeared to influence reaction times. The second and third experiments demonstrated that the processing of a nonword phoneme string continues after the point at which there are no possible continuations that would make a word. Reaction times were shown to be affected both by the number of phonemes following the point of deviation from a word and by the similarity of the nonword to a word. The final experiment demonstrated a frequency effect when high and low frequency words were matched on their point of unique identity. These last three results are not consistent with the Cohort Model, so an alternative account is put forward. According to this account, the first few phonemes are used to activate all words beginning with those phonemes, and these candidates are then checked back against the original stimulus. This model provides greater flexibility than the Cohort Model and allows mispronounced and misperceived words to be correctly recognized.
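As a rough illustration of the "point of unique identity" that this abstract turns on, the sketch below is a toy example, using orthographic strings as a stand-in for phoneme sequences and an invented five-word lexicon, that computes the first position at which a word's onset no longer matches any other word.

```python
# Toy computation of each word's uniqueness point (orthography stands in for phonemes).
LEXICON = ["captain", "capital", "captive", "cab", "dog"]

def uniqueness_point(word, lexicon):
    """1-based prefix length at which no other lexicon entry still matches the onset."""
    for n in range(1, len(word) + 1):
        prefix = word[:n]
        if not any(other != word and other.startswith(prefix) for other in lexicon):
            return n
    return len(word)  # word is a prefix of another entry; never unique before its offset

for w in LEXICON:
    print(f"{w}: unique at position {uniqueness_point(w, LEXICON)}")
```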
