Similar Articles
Found 20 similar articles (search time: 8 ms)
1.
The sounds that make up spoken words are heard in a series and must be mapped rapidly onto words in memory because their elements, unlike those of visual words, cannot simultaneously exist or persist in time. Although theories agree that the dynamics of spoken word recognition are important, they differ in how they treat the nature of the competitor set: precisely which words are activated as an auditory word form unfolds in real time. This study used eye tracking to measure the impact over time of word frequency and 2 partially overlapping competitor set definitions: onset density and neighborhood density. Time course measures revealed early and continuous effects of frequency (facilitatory) and onset-based similarity (inhibitory). Neighborhood density appears to have early facilitatory effects and late inhibitory effects. The late inhibitory effects are due to differences in the temporal distribution of similarity within neighborhoods. The early facilitatory effects are due to subphonemic cues that inform the listener about word length before the entire word is heard. The results support a new conception of lexical competition neighborhoods in which recognition occurs against a background of activated competitors that changes over time based on fine-grained goodness-of-fit and competition dynamics.

2.
For optimal word recognition, listeners should use all relevant acoustic information as soon as it becomes available. Using printed-word eye tracking, we investigated when during word processing Dutch listeners use suprasegmental lexical stress information to recognize words. Fixations on targets such as "OCtopus" (capitals indicate stress) were more frequent than fixations on segmentally overlapping but differently stressed competitors ("okTOber") before segmental information could disambiguate the words. Furthermore, prior to segmental disambiguation, initially stressed words were stronger lexical competitors than noninitially stressed words. Listeners recognize words by immediately using all relevant information in the speech signal.

3.
This study explores incremental processing in spoken word recognition in Russian 5- and 6-year-olds and adults using free-viewing eye tracking. Participants viewed scenes containing pictures of four familiar objects and clicked on a target embedded in a spoken instruction. In the cohort condition, two object names shared identical three-phoneme onsets. In the noncohort condition, all object names had unique onsets. Coarse-grain analyses of eye movements indicated that adults produced looks to the competitor on significantly more cohort trials than on noncohort trials, whereas children surprisingly failed to demonstrate cohort competition due to widespread exploratory eye movements across conditions. Fine-grain analyses, in contrast, showed a similar time course of eye movements across children and adults, but with cohort competition lingering more than 1 s longer in children. The dissociation between coarse-grain and fine-grain eye movements indicates a need to consider multiple behavioral measures in making developmental comparisons in language processing.

4.
Speech carries accent information relevant to determining the speaker's linguistic and social background. A series of web-based experiments demonstrate that accent cues can modulate access to word meaning. In Experiments 1–3, British participants were more likely to retrieve the American dominant meaning (e.g., hat meaning of "bonnet") in a word association task if they heard the words in an American than a British accent. In addition, results from a speeded semantic decision task (Experiment 4) and a sentence comprehension task (Experiment 5) confirm that accent modulates on-line meaning retrieval such that comprehension of ambiguous words is easier when the relevant word meaning is dominant in the speaker's dialect. Critically, neutral-accent speech items, created by morphing British- and American-accented recordings, were interpreted in a similar way to accented words when embedded in a context of accented words (Experiment 2). This finding indicates that listeners do not use accent to guide meaning retrieval on a word-by-word basis; instead they use accent information to determine the dialectal identity of a speaker and then use their experience of that dialect to guide meaning access for all words spoken by that person. These results motivate a speaker-model account of spoken word recognition in which comprehenders determine key characteristics of their interlocutor and use this knowledge to guide word meaning access.

5.
Emotional tone of voice (ETV) is essential for optimal verbal communication. Research has found that the impact of variation in nonlinguistic features of speech on spoken word recognition differs according to a time course. In the current study, we investigated whether intratalker variation in ETV follows the same time course in two long-term repetition priming experiments. We found that intratalker variability in ETVs affected reaction times to spoken words only when processing was relatively slow and difficult, not when processing was relatively fast and easy. These results provide evidence for the use of both abstract and episodic lexical representations for processing within-talker variability in ETV, depending on the time course of spoken word recognition.

6.
Ernestus M, Mak WM. Brain and Language, 2004, 90(1–3): 378–392.
This paper discusses four experiments on Dutch that show that distinctive phonological features differ in their relevance for word recognition. The relevance of a feature for word recognition depends on its phonological stability, that is, the extent to which that feature is generally realized in accordance with its lexical specification in the relevant word position. If one feature value is uninformative, all values of that feature are less relevant for word recognition, with the least informative feature being the least relevant. Features differ in their relevance both in spoken and written word recognition, though the differences are more pronounced in auditory lexical decision than in self-paced reading.

7.
We used eye tracking to investigate lexical processing in aphasic participants by examining the fixation time course for rhyme (e.g., carrot-parrot) and cohort (e.g., beaker-beetle) competitors. Broca's aphasic participants exhibited larger rhyme competition effects than age-matched controls. A re-analysis of previously reported data (Yee, Blumstein, & Sedivy, 2008) confirmed that Wernicke's aphasic participants exhibited larger cohort competition effects. Individual-level analyses revealed a negative correlation between rhyme and cohort competition effect size across both groups of aphasic participants. Computational model simulations were performed to examine which of several accounts of lexical processing deficits in aphasia might account for the observed effects. Simulation results revealed that slower deactivation of lexical competitors could account for increased cohort competition in Wernicke's aphasic participants; auditory perceptual impairment could account for increased rhyme competition in Broca's aphasic participants; and a perturbation of a parameter controlling selection among competing alternatives could account for both patterns, as well as the correlation between the effects. In light of these simulation results, we discuss theoretical accounts that have the potential to explain the dynamics of spoken word recognition in aphasia and the possible roles of anterior and posterior brain regions in lexical processing and cognitive control.

8.
We examined whether listeners use acoustic correlates of voicing to resolve lexical ambiguities created by whispered speech, in which a key feature, voicing, is missing. Three associative priming experiments were conducted. The results showed a priming effect with whispered primes that included an intervocalic voiceless consonant (/petal/ "petal") when the visual targets (FLEUR "flower") were presented at the offset of the primes. A priming effect emerged with whispered primes that included a voiced intervocalic consonant (/pedal/ "pedal") when the delay between the offset of the primes and the visual targets (VELO "bike") was increased by 50 ms. In none of the experiments did the voiced primes (/pedal/) facilitate the processing of the targets (FLEUR) associated with the voiceless primes (/petal/). Our results suggest that the acoustic correlates of voicing are used by listeners to recover the intended words. Nonetheless, the retrieval of the voiced feature is not immediate during whispered word recognition.

9.
In their daily communicative exchanges, listeners most often hear casual speech, in which words are often produced with fewer segments, rather than the careful speech used in most psycholinguistic experiments. Three experiments examined phonological competition during the recognition of reduced forms such as [pjutər] for computer using a target-absent variant of the visual world paradigm. Listeners' eye movements were tracked upon hearing canonical and reduced forms as they looked at displays of four printed words. One of the words was phonologically similar to the canonical pronunciation of the target word, one word was similar to the reduced pronunciation, and two words served as unrelated distractors. When spoken targets were presented in isolation (Experiment 1) and in sentential contexts (Experiment 2), competition was modulated as a function of the target word form. When reduced targets were presented in sentential contexts, listeners were probabilistically more likely to first fixate reduced-form competitors before shifting their eye gaze to canonical-form competitors. Experiment 3, in which the original /p/ from [pjutər] was replaced with a "real" onset /p/, showed an effect of cross-splicing in the late time window. We conjecture that these results fit best with the notion that speech reductions initially activate competitors that are similar to the phonological surface form of the reduction, but that listeners nevertheless can exploit fine phonetic detail to reconstruct strongly reduced forms to their canonical counterparts.

10.
In this study, we examined the influence of various sources of constraint on spoken word recognition in a mispronunciation-detection task. Five- and 8-year-olds and adults were presented with words (intact or with word-initial or noninitial errors) from three different age-of-acquisition categories. "Intact" and "mispronounced" responses were collected for isolated words with or without a picture referent (Experiment 1) and for words in constraining or unconstraining sentences (Experiment 2). Some evidence for differential attention to word-initial as opposed to noninitial acoustic-phonetic information (and thus the influence of sequential lexical constraints on recognition) was apparent in young children's and adults' response criteria and in older children's and adults' reaction times. A more marked finding, however, was the variation in subjects' performance, according to several measures, with age and lexical familiarity (defined according to adults' subjective age-of-acquisition estimates). Children's strategies for responding to familiar and unfamiliar words in different contexts are discussed.

11.
Chéreau C, Gaskell MG, Dumay N. Cognition, 2007, 102(3): 341–360.
Three experiments examined the involvement of orthography in spoken word processing using a task (unimodal auditory priming with offset overlap) taken to reflect activation of prelexical representations. Two types of prime-target relationship were compared; both involved phonological overlap, but only one had a strong orthographic overlap (e.g., dream-gleam vs. scheme-gleam). In Experiment 1, which used lexical decision, phonological overlap facilitated target responses in comparison with an unrelated condition (e.g., stove-gleam). More importantly, facilitation was modulated by degree of orthographic overlap. Experiment 2 employed the same design as Experiment 1, but with a modified procedure aimed at eliciting swifter responses. Again, the phonological priming effect was sensitive to the degree of orthographic overlap between prime and target. Finally, to test whether this orthographic boost was caused by congruency between response type and valence of the prime-target overlap, Experiment 3 used a pseudoword detection task, in which participants responded "yes" to novel words and "no" to known words. Once again phonological priming was observed, with a significant boost in the orthographic overlap condition. These results indicate a surprising level of orthographic involvement in speech perception, and provide clear evidence for mandatory orthographic activation during spoken word recognition.

12.
In the present study, grammatical context effects on word recognition were examined among skilled and less skilled second- and sixth-grade readers. Of particular interest was how word decoding ability may correlate with the grammatical context effect. For this purpose, the rich case-marking system of the Finnish language was exploited. Recognition latencies for sentence-final nouns were measured as a function of their syntactic agreement with the preceding adjective. Naming and lexical decision tasks were used as the critical measures.
The study showed a clear syntactic context effect for each of the four experimental groups. The magnitude of the observed syntactic effect was substantially larger compared to earlier results. Furthermore, the effect emerged both in naming and in lexical decision. In naming, less skilled 2nd-grade decoders were more affected by grammatical incongruency than their more competent peers, whereas in lexical decision the skilled 6th graders differed from the other groups by showing a smaller syntactic effect. The results are discussed in the light of Stanovich's interactive-compensatory model of word recognition.

13.
In spelling-to-dictation tasks, skilled spellers consistently initiate spelling of high-frequency words faster than that of low-frequency words. Tainturier and Rapp's model of spelling shows three possible loci for this frequency effect: spoken word recognition, orthographic retrieval, and response execution of the first letter. Thus far, researchers have attributed the effect solely to orthographic retrieval without considering spoken word recognition or response execution. To investigate word frequency effects at each of these three loci, Experiment 1 involved a delayed spelling-to-dictation task and Experiment 2 involved a delayed/uncertain task. In Experiment 1, no frequency effect was found in the 1200-ms delayed condition, suggesting that response execution is not affected by word frequency. In Experiment 2, no frequency effect was found in the delayed/uncertain task that reflects the orthographic retrieval, whereas a frequency effect was found in the comparison immediate/uncertain task that reflects both spoken word recognition and orthographic retrieval. The results of this two-part study suggest that frequency effects in spoken word recognition play a substantial role in skilled spelling-to-dictation. Discrepancies between these findings and previous research, and the limitations of the present study, are discussed.

14.
Gradient effects of within-category phonetic variation on lexical access
In order to determine whether small within-category differences in voice onset time (VOT) affect lexical access, eye movements were monitored as participants indicated which of four pictures was named by spoken stimuli that varied along a 0-40 ms VOT continuum. Within-category differences in VOT resulted in gradient increases in fixations to cross-boundary lexical competitors as VOT approached the category boundary. Thus, fine-grained acoustic/phonetic differences are preserved in patterns of lexical activation for competing lexical candidates and could be used to maximize the efficiency of on-line word recognition.

15.
Five word-spotting experiments explored the role of consonantal and vocalic phonotactic cues in the segmentation of spoken Italian. The first set of experiments tested listeners' sensitivity to phonotactic constraints cueing syllable boundaries. Participants were slower in spotting words in nonsense strings when target onsets were misaligned (e.g., lago in ri.blago) than when they were aligned (e.g., lago in rin.lago) with phonotactically determined syllabic boundaries. This effect held also for sequences that occur only word-medially (e.g., /tl/ in ri.tlago), and competition effects could not account for the disadvantage in the misaligned condition. Similarly, target detections were slower when their offsets were misaligned (e.g., città in cittàu.ba) than when they were aligned (e.g., città in città.oba) with a phonotactic syllabic boundary. The second set of experiments tested listeners' sensitivity to phonotactic cues that specifically signal lexical (and not just syllable) boundaries. Results corroborate the role of syllabic information in speech segmentation and suggest that Italian listeners make little use of additional phonotactic information that specifically cues word boundaries.

16.
We present a novel lexical decision task and three boundary paradigm eye-tracking experiments that clarify the picture of parallel processing in word recognition in context. First, we show that lexical decision is facilitated by associated letter information to the left and right of the word, with no apparent hemispheric specificity. Second, we show that parafoveal preview of a repeat of word n at word n + 1 facilitates reading of word n relative to a control condition with an unrelated word at word n + 1. Third, using a version of the boundary paradigm that allowed for a regressive eye movement, we show no parafoveal "postview" effect on reading word n of repeating word n at word n – 1. Fourth, we repeat the second experiment but compare the effects of parafoveal previews consisting of a repeated word n with a transposed central bigram (e.g., caot for coat) and a substituted central bigram (e.g., ceit for coat), showing the latter to have a deleterious effect on processing word n, thereby demonstrating that the parafoveal preview effect is at least orthographic and not purely visual.

17.
All current models of spoken word recognition propose that sound-based representations of spoken words compete with, or inhibit, one another during recognition. In addition, certain models propose that higher probability sublexical units facilitate recognition under certain circumstances. Two experiments were conducted examining ERPs to spoken words and nonwords simultaneously varying in phonotactic probability and neighborhood density. Results showed that the amplitude of the P2 potential was greater for high probability-density words and nonwords, suggesting an early inhibitory effect of neighborhood density. In order to closely examine the role of phonotactic probability, effects of initial phoneme frequency were also examined. The latency of the P2 potential was shorter for words with high initial-consonant probability, suggesting a facilitative effect of phonotactic probability. The current results are consistent with findings from previous studies using reaction time and eye-tracking paradigms and provide new insights into the time-course of lexical and sublexical activation and competition.

18.
When participants follow spoken instructions to pick up and move objects in a visual workspace, their eye movements to the objects are closely time-locked to referential expressions in the instructions. Two experiments used this methodology to investigate the processing of the temporary ambiguities that arise because spoken language unfolds over time. Experiment 1 examined the processing of sentences with a temporarily ambiguous prepositional phrase (e.g., "Put the apple on the towel in the box") using visual contexts that supported either the normally preferred initial interpretation (the apple should be put on the towel) or the less-preferred interpretation (the apple is already on the towel and should be put in the box). Eye movement patterns clearly established that the initial interpretation of the ambiguous phrase was the one consistent with the context. Experiment 2 replicated these results using prerecorded digitized speech to eliminate any possibility of prosodic differences across conditions or experimenter demand. Overall, the findings are consistent with a broad theoretical framework in which real-time language comprehension immediately takes into account a rich array of relevant nonlinguistic context.

19.
We present data from four experiments using cross-modal priming to examine the effects of competitor environment on lexical activation during the time course of the perception of a spoken word. The research is conducted from the perspective of a distributed model of speech perception and lexical representation, which focuses on activation at the level of lexical content. In this model, the strength of competition between simultaneously active lexical items depends on the degree of coherence between their distributed semantic and phonological representations. Consistent with this model, interference effects are more complete when the purely semantic aspects of these coactive representations are probed (using semantic priming) than when phonological aspects are probed as well (using repetition priming).

20.