Similar Articles
20 similar articles found.
1.
2.
Speech perception requires listeners to integrate multiple cues that each contribute to judgments about a phonetic category. Classic studies of trading relations assessed the weights attached to each cue but did not explore the time course of cue integration. Here, we provide the first direct evidence that asynchronous cues to voicing (/b/ vs. /p/) and manner (/b/ vs. /w/) contrasts become available to the listener at different times during spoken word recognition. Using the visual world paradigm, we show that the probabilities of eye movements to pictures of target and competitor objects diverge at different points in time after the onset of the target word. These points of divergence correspond to the availability of early (voice onset time or formant transition slope) and late (vowel length) cues to voicing and manner contrasts. These results support a model of cue integration in which phonetic cues are used for lexical access as soon as they are available.

3.
For optimal word recognition, listeners should use all relevant acoustic information as soon as it becomes available. Using printed-word eye tracking, we investigated when during word processing Dutch listeners use suprasegmental lexical stress information to recognize words. Fixations on targets such as “OCtopus” (capitals indicate stress) were more frequent than fixations on segmentally overlapping but differently stressed competitors (“okTOber”) before segmental information could disambiguate the words. Furthermore, prior to segmental disambiguation, initially stressed words were stronger lexical competitors than noninitially stressed words. Listeners recognize words by immediately using all relevant information in the speech signal.

4.
5.
Previous work has demonstrated that talker-specific representations affect spoken word recognition relatively late during processing. However, participants in these studies were listening to unfamiliar talkers. In the present research, we used a long-term repetition-priming paradigm and a speeded-shadowing task and presented listeners with famous talkers. In Experiment 1, half the words were spoken by Barack Obama, and half by Hillary Clinton. Reaction times (RTs) to repeated words were shorter than those to unprimed words only when repeated by the same talker. However, in Experiment 2, using nonfamous talkers, RTs to repeated words were shorter than those to unprimed words both when repeated by the same talker and when repeated by a different talker. Taken together, the results demonstrate that talker-specific details can affect the perception of spoken words relatively early during processing when words are spoken by famous talkers.

6.
In this paper we examine whether the recognition of a spoken noun is affected by the gender marking—masculine or feminine—that is carried by a preceding word. In the first of two experiments, the gating paradigm was used to study the access of French nouns that were preceded by an appropriate gender marking, carried by an article, or preceded by no gender marking. In the second experiment, subjects were asked to make a lexical decision on the same material. A very strong facilitatory effect was found in both cases. The origin of the gender-marking effect is discussed, as well as the level of processing involved—lexical or syntactic.

7.
In a cross-modal matching task, participants were asked to match visual and auditory displays of speech based on the identity of the speaker. The present investigation used this task with acoustically transformed speech to examine the properties of sound that can convey cross-modal information. Word recognition performance was also measured under the same transformations. The authors found that cross-modal matching was only possible under transformations that preserved the relative spectral and temporal patterns of formant frequencies. In addition, cross-modal matching was only possible under the same conditions that yielded robust word recognition performance. The results are consistent with the hypothesis that acoustic and optical displays of speech simultaneously carry articulatory information about both the underlying linguistic message and indexical properties of the talker.

8.
The role of grammatical gender for auditory word recognition in German was investigated in three experiments and two sets of corpus analyses. In the corpus analyses, gender information reduced the lexical search space as well as the amount of input needed to uniquely identify a word. To test whether this holds for on-line processing, two auditory lexical decision experiments (Experiments 1 and 3) were conducted using valid, invalid, or noise-masked articles as primes. Clear gender-priming effects were obtained in both experiments. Experiment 2 used phoneme monitoring with words and with pseudowords deviating from base words in one or more phonological features. Contrary to the lexical decision latencies, phoneme-monitoring latencies showed no influence of gender but did show similarity mismatch effects. We argue that gender information is not utilized early during word recognition. Rather, the presence of a valid article increases the initial familiarity of a word, facilitating subsequent responses.

9.
Three eye movement studies with novel lexicons investigated the role of semantic context in spoken word recognition, contrasting 3 models: restrictive access, access-selection, and continuous integration. Actions directed at novel shapes caused changes in motion (e.g., looming, spinning) or state (e.g., color, texture). Across the experiments, novel names for the actions and the shapes varied in frequency, cohort density, and whether the cohorts referred to actions (Experiment 1) or shapes with action-congruent or action-incongruent affordances (Experiments 2 and 3). Experiment 1 demonstrated effects of frequency and cohort competition from both displayed and non-displayed competitors. In Experiment 2, a biasing context induced an increase in anticipatory eye movements to congruent referents and reduced the probability of looks to incongruent cohorts, without the delay predicted by access-selection models. In Experiment 3, context did not reduce competition from non-displayed incompatible neighbors as predicted by restrictive access models. The authors conclude that the results are most consistent with continuous integration models.

10.
Cognition, 1986, 22(3), 259-282
Predictions derived from the Cohort Model of spoken word recognition were tested in four experiments using an auditory lexical decision task. The first experiment produced results that were compatible with the model, in that the point at which a word could be uniquely identified appeared to influence reaction times. The second and third experiments demonstrated that the processing of a nonword phoneme string continues after the point at which there are no possible continuations that would make a word. The number of phonemes following the point of deviation from a word was shown to affect reaction times, as well as the similarity of the nonword to a word. The final experiment demonstrated a frequency effect when high and low frequency words were matched on their point of unique identity. These last three results are not consistent with the Cohort Model and so an alternative account is put forward. According to this account, the first few phonemes are used to activate all words beginning with those phonemes and then these candidates are checked back to the original stimulus. This model provides greater flexibility than the Cohort Model and allows for mispronounced and misperceived words to be correctly recognized.
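The "point of unique identity" that this abstract refers to lends itself to a simple computation. The sketch below is illustrative only: it uses an invented toy lexicon of phoneme strings (not the study's materials) and treats each character as one phoneme, returning the position at which a word's onset no longer matches any other word in the lexicon.

```python
def uniqueness_point(word, lexicon):
    """Return the 1-based phoneme index at which `word` becomes unique
    in `lexicon`, or len(word) if it never diverges earlier than its end."""
    for i in range(1, len(word) + 1):
        prefix = word[:i]
        # Any other word sharing this onset keeps the cohort ambiguous.
        competitors = [w for w in lexicon if w != word and w[:i] == prefix]
        if not competitors:
            return i
    return len(word)

# Toy phoneme strings, purely for illustration.
lexicon = ["kaptin", "kapsul", "kandid", "trespas"]
print(uniqueness_point("kaptin", lexicon))  # 4: diverges from "kapsul" at /t/
print(uniqueness_point("kandid", lexicon))  # 3: diverges from "kap..." at /n/
```

Matching high- and low-frequency words on this index, as in the final experiment, isolates frequency effects from identification-point effects.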

11.
The nature of phoneme representation in spoken word recognition (total citations: 1; self-citations: 0; citations by others: 1)
Four experiments used the psychological refractory period logic to examine whether integration of multiple sources of phonemic information has a decisional locus. All experiments made use of a dual-task paradigm in which participants made forced-choice color categorization (Task 1) and phoneme categorization (Task 2) decisions at varying stimulus onset asynchronies. In Experiment 1, Task 2 difficulty was manipulated using words containing matching or mismatching coarticulatory cues to the final consonant. The results showed that difficulty and onset asynchrony combined in an underadditive way, suggesting that the phonemic mismatch was resolved prior to a central decisional bottleneck. Similar results were found in Experiment 2 using nonwords. In Experiment 3, the manipulation of task difficulty involved lexical status, which once again revealed an underadditive pattern of response times. Finally, Experiment 4 compared this prebottleneck variable with a decisional variable: response key bias. The latter showed an additive pattern of responses. The experiments show that resolution of phonemic ambiguity can take advantage of cognitive slack time at short asynchronies, indicating that phonemic integration takes place at a relatively early stage of spoken word recognition.

12.
In 4 chronometric experiments, influences of spoken word planning on speech recognition were examined. Participants were shown pictures while hearing a tone or a spoken word presented shortly after picture onset. When a spoken word was presented, participants indicated whether it contained a prespecified phoneme. When the tone was presented, they indicated whether the picture name contained the phoneme (Experiment 1) or they named the picture (Experiment 2). Phoneme monitoring latencies for the spoken words were shorter when the picture name contained the prespecified phoneme compared with when it did not. Priming of phoneme monitoring was also obtained when the phoneme was part of spoken nonwords (Experiment 3). However, no priming of phoneme monitoring was obtained when the pictures required no response in the experiment, regardless of monitoring latency (Experiment 4). These results provide evidence that an internal phonological pathway runs from spoken word planning to speech recognition and that active phonological encoding is a precondition for engaging the pathway.

13.
Models of spoken word recognition vary in the ways in which they capture the relationship between speech input and meaning. Modular accounts prohibit a word’s meaning from affecting the computation of its form-based representation, whereas interactive models allow activation at the semantic level to affect phonological processing. We tested these competing hypotheses by manipulating word familiarity and imageability, using lexical decision and repetition tasks. Responses to high-imageability words were significantly faster than those to low-imageability words. Repetition latencies were also analyzed as a function of cohort variables, revealing a significant imageability effect only for words that were members of large cohorts, suggesting that when the mapping from phonology to semantics is difficult, semantic information can help the discrimination process. Thus, these data support interactive models of spoken word recognition.

14.
Phonological priming in spoken word recognition: Task effects (total citations: 3; self-citations: 0; citations by others: 3)
In two experiments, we examined the role of phonological relatedness between spoken items using both the lexical decision task and the shadowing task. In Experiment 1, words were used as primes and overlaps of zero (control), one, two, or all four or five (repetition) phonemes were compared. Except for the repetition conditions, in which facilitation was found, phonological overlap resulted in interference on word responses. These effects occurred in both tasks but were larger in lexical decision than in shadowing. The effects that were evident in shadowing can be attributed to an attentional mechanism linked to the subjects' expectancies of repetitions. The extra effects obtained in lexical decision can be interpreted by taking into account both activation of the response corresponding to the prime's lexical status and postlexical processes that check for phonological congruency between prime and target. In Experiment 2, some modifications were introduced to prevent the involvement of strategic factors, and pseudowords were used as primes. No effect at all was observed in shadowing, whereas in lexical decision interference effects occurred, which is consistent with the hypothesis that lexical decision may be negatively affected by finding a phonological discrepancy at the same time as the primed response is reactivated. Neither experiment provided evidence for the occurrence of phonological priming in the perceptual processing of words.

15.
This study explores incremental processing in spoken word recognition in Russian 5- and 6-year-olds and adults using free-viewing eye-tracking. Participants viewed scenes containing pictures of four familiar objects and clicked on a target embedded in a spoken instruction. In the cohort condition, two object names shared identical three-phoneme onsets. In the noncohort condition, all object names had unique onsets. Coarse-grain analyses of eye movements indicated that adults produced looks to the competitor on significantly more cohort trials than on noncohort trials, whereas children surprisingly failed to demonstrate cohort competition due to widespread exploratory eye movements across conditions. Fine-grain analyses, in contrast, showed a similar time course of eye movements across children and adults, but with cohort competition lingering more than 1 s longer in children. The dissociation between coarse-grain and fine-grain eye movements indicates a need to consider multiple behavioral measures in making developmental comparisons in language processing.

16.
When perceiving spoken language, listeners must match the incoming acoustic phonetic input to lexical representations in memory. Models that quantify this process propose that the input activates multiple lexical representations in parallel and that these activated representations compete for recognition (Weber & Scharenborg, 2012). In two experiments, we assessed how grammatically constraining contexts alter the process of lexical competition. The results suggest that grammatical context constrains the lexical candidates that are activated to grammatically appropriate competitors. Stimulus words with little competition from items of the same grammatical class benefit more from the addition of grammatical context than do words with more within-class competition. The results provide evidence that top-down contextual information is integrated in the early stages of word recognition. We propose adding a grammatical class level of analysis to existing models of word recognition to account for these findings.

17.
Models of spoken word recognition assume that words are represented as sequences of phonemes. We evaluated this assumption by examining phonemic anadromes, words that share the same phonemes but differ in their order (e.g., sub and bus). Using the visual-world paradigm, we found that listeners show more fixations to anadromes (e.g., sub when bus is the target) than to unrelated words (well) and to words that share the same vowel but not the same set of phonemes (sun). This contrasts with the predictions of existing models and suggests that words are not defined as strict sequences of phonemes.
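The anadrome relation described here (same phonemes, different order) is a multiset equality over phoneme strings. A minimal sketch, using invented toy transcriptions rather than the study's stimuli, and treating each character as one phoneme:

```python
from collections import Counter

def anadromes(word, lexicon):
    """Words with exactly the same multiset of phonemes as `word`,
    arranged in a different order (e.g., /sVb/ vs. /bVs/)."""
    target = Counter(word)
    return [w for w in lexicon if w != word and Counter(w) == target]

# Toy transcriptions; "V" stands in for the shared vowel.
lexicon = ["sVb", "bVs", "sVn", "wel"]
print(anadromes("bVs", lexicon))  # ['sVb']
```

Note that "sVn" shares the vowel but not the full phoneme set, so it is excluded, mirroring the study's vowel-overlap control condition.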

18.
In this study, we examined the influence of various sources of constraint on spoken word recognition in a mispronunciation-detection task. Five- and 8-year-olds and adults were presented with words (intact or with word-initial or noninitial errors) from three different age-of-acquisition categories. "Intact" and "mispronounced" responses were collected for isolated words with or without a picture referent (Experiment 1) and for words in constraining or unconstraining sentences (Experiment 2). Some evidence for differential attention to word-initial as opposed to noninitial acoustic-phonetic information (and thus the influence of sequential lexical constraints on recognition) was apparent in young children's and adults' response criteria and in older children's and adults' reaction times. A more marked finding, however, was the variation in subjects' performance, according to several measures, with age and lexical familiarity (defined according to adults' subjective age-of-acquisition estimates). Children's strategies for responding to familiar and unfamiliar words in different contexts are discussed.

19.
In many domains of cognitive processing there is strong support for bottom-up priority and delayed top-down (contextual) integration. We ask whether this applies to supra-lexical context that could potentially constrain lexical access. Previous findings of early context integration in word recognition have typically used constraints that can be linked to pair-wise conceptual relations between words. Using an artificial lexicon, we found immediate integration of syntactic expectations based on pragmatic constraints linked to syntactic categories rather than words: phonologically similar "nouns" and "adjectives" did not compete when a combination of syntactic and visual information strongly predicted form class. These results suggest that predictive context is integrated continuously, and that previous findings supporting delayed context integration stem from weak contexts rather than delayed integration.

20.
The sounds that make up spoken words are heard in a series and must be mapped rapidly onto words in memory because their elements, unlike those of visual words, cannot simultaneously exist or persist in time. Although theories agree that the dynamics of spoken word recognition are important, they differ in how they treat the nature of the competitor set: precisely which words are activated as an auditory word form unfolds in real time. This study used eye tracking to measure the impact over time of word frequency and 2 partially overlapping competitor set definitions: onset density and neighborhood density. Time course measures revealed early and continuous effects of frequency (facilitatory) and onset-based similarity (inhibitory). Neighborhood density appears to have early facilitatory effects and late inhibitory effects. The late inhibitory effects are due to differences in the temporal distribution of similarity within neighborhoods. The early facilitatory effects are due to subphonemic cues that inform the listener about word length before the entire word is heard. The results support a new conception of lexical competition neighborhoods in which recognition occurs against a background of activated competitors that changes over time based on fine-grained goodness-of-fit and competition dynamics.
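The two competitor-set definitions contrasted in this abstract can be made concrete. In the sketch below, onset (cohort) density counts words sharing the initial phonemes, while neighborhood density counts words within one phoneme substitution, insertion, or deletion. The toy lexicon and single-character "phonemes" are illustrative assumptions, not the study's materials:

```python
def onset_competitors(word, lexicon, n=2):
    """Words sharing the first `n` phonemes with `word` (its cohort)."""
    return [w for w in lexicon if w != word and w[:n] == word[:n]]

def neighbors(word, lexicon):
    """Words at phoneme edit distance 1: one substitution, insertion,
    or deletion (the standard neighborhood-density definition)."""
    def edit1(a, b):
        if abs(len(a) - len(b)) > 1:
            return False
        if len(a) == len(b):  # exactly one substitution
            return sum(x != y for x, y in zip(a, b)) == 1
        short, long_ = sorted((a, b), key=len)  # one insertion/deletion
        return any(long_[:i] + long_[i + 1:] == short
                   for i in range(len(long_)))
    return [w for w in lexicon if w != word and edit1(word, w)]

# Toy phoneme strings, purely for illustration.
lexicon = ["kat", "bat", "kab", "kast", "at", "dog"]
print(onset_competitors("kat", lexicon))  # ['kab', 'kast']
print(neighbors("kat", lexicon))          # ['bat', 'kab', 'kast', 'at']
```

The partial overlap the abstract mentions is visible here: "kab" and "kast" fall in both sets, while "bat" and "at" are neighbors but not cohort members, which is why the two definitions can pull time-course measures in different directions.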
