Found 20 similar references (search time: 15 ms)
1.
Lachs L, Pisoni DB. Journal of Experimental Psychology: Human Perception and Performance, 2004, 30(2): 378-396
In a cross-modal matching task, participants were asked to match visual and auditory displays of speech based on the identity of the speaker. The present investigation used this task with acoustically transformed speech to examine the properties of sound that can convey cross-modal information. Word recognition performance was also measured under the same transformations. The authors found that cross-modal matching was only possible under transformations that preserved the relative spectral and temporal patterns of formant frequencies. In addition, cross-modal matching was only possible under the same conditions that yielded robust word recognition performance. The results are consistent with the hypothesis that acoustic and optical displays of speech simultaneously carry articulatory information about both the underlying linguistic message and indexical properties of the talker.
2.
We studied the processing of two-word strings in French made up of a determiner and a noun which contains a schwa (mute e). Depending on the noun, schwa deletion is present ("la tir'lire"), optional ("le ch(e)min") or absent ("la cornemuse"). In a production study, we show that schwa deletion and the category of the noun have a large impact on the duration of the strings. We take this into account in two perception studies, which use word repetition and lexical decision, and which show that words in which the schwa has been deleted usually take longer to recognize than words that retain the schwa, but that this also depends on the category of the word. We explain these results by examining the influence of orthography. Based on the model proposed by Grainger and Ferrand (1996), which integrates the written dimension, we suggest that two sources of information, phonological and orthographic, interact during spoken word recognition.
3.
Deelman T, Connine CM. Journal of Experimental Psychology: Human Perception and Performance, 2001, 27(3): 656-663
Cross-modal semantic priming and phoneme monitoring experiments investigated processing of word-final nonreleased stop consonants (e.g., kit may be pronounced /kit/ or /ki/), which are common phonological variants in American English. Both voiced /d/ and voiceless /t/ segments were presented in release and no-release versions. A cross-modal semantic priming task (Experiment 1) showed comparable priming for /d/ and /t/ versions. A second set of stimuli ending in /s/ were presented as intact, missing /s/, or with a mismatching final segment and showed significant but reduced priming for the latter two conditions. Experiment 2 showed that phoneme monitoring reaction time for release and no-release words and onset mismatching stimuli (derived pseudowords) increased as acoustic-phonetic similarity to the intended word decreased. The results suggest that spoken word recognition does not require special mechanisms for processing no-release variants. Rather, the results can be accounted for by existing assumptions concerning probabilistic activation based on partial acoustic-phonetic information.
4.
The purpose of this study was to measure subjects' ability to detect deliberate stressed, front-vowel misarticulations embedded in two-syllable words. Reaction times to words with various vowel-height misarticulations were examined for 25 women to assess the effect of a specific vowel height change on listeners' ability to recognize a word. Statistical analysis indicated no significant differences between reaction times to initial, stressed vowel changes along the height dimension, suggesting that subjects responded similarly to all vowel errors on the detection task. This finding provides further evidence that stressed-vowel information may serve as a perceptual anchor in guiding a listener during word recognition.
5.
6.
Grosjean F, Dommergues JY, Cornu E, Guillelmon D, Besson C. Attention, Perception & Psychophysics, 1994, 56(5): 590-598
In this paper we examine whether the recognition of a spoken noun is affected by the gender marking—masculine or feminine—that is carried by a preceding word. In the first of two experiments, the gating paradigm was used to study the access of French nouns that were preceded by an appropriate gender marking, carried by an article, or preceded by no gender marking. In the second experiment, subjects were asked to make a lexical decision on the same material. A very strong facilitatory effect was found in both cases. The origin of the gender-marking effect is discussed, as well as the level of processing involved—lexical or syntactic.
7.
Maibauer AM, Markis TA, Newell J, McLennan CT. Attention, Perception & Psychophysics, 2014, 76(1): 11-18
Previous work has demonstrated that talker-specific representations affect spoken word recognition relatively late during processing. However, participants in these studies were listening to unfamiliar talkers. In the present research, we used a long-term repetition-priming paradigm and a speeded-shadowing task and presented listeners with famous talkers. In Experiment 1, half the words were spoken by Barack Obama, and half by Hillary Clinton. Reaction times (RTs) to repeated words were shorter than those to unprimed words only when repeated by the same talker. However, in Experiment 2, using nonfamous talkers, RTs to repeated words were shorter than those to unprimed words both when repeated by the same talker and when repeated by a different talker. Taken together, the results demonstrate that talker-specific details can affect the perception of spoken words relatively early during processing when words are spoken by famous talkers.
8.
Phonological priming in spoken word recognition: Task effects
In two experiments, we examined the role of phonological relatedness between spoken items using both the lexical decision task and the shadowing task. In Experiment 1, words were used as primes and overlaps of zero (control), one, two, or all four or five (repetition) phonemes were compared. Except for the repetition conditions, in which facilitation was found, phonological overlap resulted in interference on word responses. These effects occurred in both tasks but were larger in lexical decision than in shadowing. The effects that were evident in shadowing can be attributed to an attentional mechanism linked to the subjects' expectancies of repetitions. The extra effects obtained in lexical decision can be interpreted by taking into account both activation of the response corresponding to the prime's lexical status and postlexical processes that check for phonological congruency between prime and target. In Experiment 2, some modifications were introduced to prevent the involvement of strategic factors, and pseudowords were used as primes. No effect at all was observed in shadowing, whereas in lexical decision interference effects occurred, which is consistent with the hypothesis that lexical decision may be negatively affected by finding a phonological discrepancy at the same time as the primed response is reactivated. Neither experiment provided evidence for the occurrence of phonological priming in the perceptual processing of words.
9.
The role of grammatical gender for auditory word recognition in German was investigated in three experiments and two sets of corpus analyses. In the corpus analyses, gender information reduced the lexical search space as well as the amount of input needed to uniquely identify a word. To test whether this holds for on-line processing, two auditory lexical decision experiments (Experiments 1 and 3) were conducted using valid, invalid, or noise-masked articles as primes. Clear gender-priming effects were obtained in both experiments. Experiment 2 used phoneme monitoring with words and with pseudowords deviating from base words in one or more phonological features. Contrary to the lexical decision latencies, phoneme-monitoring latencies showed no influence of gender but did show similarity mismatch effects. We argue that gender information is not utilized early during word recognition. Rather, the presence of a valid article increases the initial familiarity of a word, facilitating subsequent responses.
10.
11.
Gow DW. Perception & Psychophysics, 2003, 65(4): 575-590
For listeners to recognize words, they must map temporally distributed phonetic feature cues onto higher order phonological representations. Three experiments are reported that were performed to examine what information listeners extract from assimilated segments (e.g., place-assimilated tokens of cone that resemble comb) and how they interpret it. Experiment 1 employed form priming to demonstrate that listeners activate the underlying form of CONE, but not of its neighbor (COMB). Experiment 2 employed phoneme monitoring to show that the same assimilated tokens facilitate the perception of postassimilation context. Together, the results of these two experiments suggest that listeners recover both the underlying place of the modified item and information about the subsequent item from the same modified segment. Experiment 3 replicated Experiment 1, using different postassimilation contexts to demonstrate that context effects do not reflect familiarity with a given assimilation process. The results are discussed in the context of general auditory grouping mechanisms.
12.
Quarterly Journal of Experimental Psychology, 2013, 66(4): 772-783
For optimal word recognition, listeners should use all relevant acoustic information as soon as it becomes available. Using printed-word eye tracking, we investigated when during word processing Dutch listeners use suprasegmental lexical stress information to recognize words. Fixations on targets such as "OCtopus" (capitals indicate stress) were more frequent than fixations on segmentally overlapping but differently stressed competitors ("okTOber") before segmental information could disambiguate the words. Furthermore, prior to segmental disambiguation, initially stressed words were stronger lexical competitors than noninitially stressed words. Listeners recognize words by immediately using all relevant information in the speech signal.
13.
Using the cross-modal priming paradigm, we attempted to determine whether semantic representations for word-final morphemes embedded in multisyllabic words (e.g., /lak/ in /hɛmlak/) are independently activated in memory. That is, we attempted to determine whether the auditory prime, /hɛmlak/, would facilitate lexical decision times to the visual target, KEY, even when the recognition point for /hɛmlak/ occurred prior to the end of the word, which should ensure deactivation of all lexical candidates. In the first experiment, a gating task was used in order to ensure that the multisyllabic words could be identified prior to their offsets. In the second experiment, lexical decision times for visually presented targets following spoken monosyllabic primes (e.g., /lak/-KEY) were compared with reaction times for the same visual targets following multisyllabic pairs (/hɛmlak/-KEY). Significant priming was found for both the monosyllabic and the multisyllabic conditions. The results support a recognition strategy that initiates lexical access at strong syllables (Cutler & Norris, 1988) and operates according to a principle of delayed commitment (Marr, 1982).
14.
Continuous uptake of acoustic cues in spoken word recognition
15.
The nature of phoneme representation in spoken word recognition
Gaskell MG, Quinlan PT, Tamminen J, Cleland AA. Journal of Experimental Psychology: General, 2008, 137(2): 282-302
Four experiments used the psychological refractory period logic to examine whether integration of multiple sources of phonemic information has a decisional locus. All experiments made use of a dual-task paradigm in which participants made forced-choice color categorization (Task 1) and phoneme categorization (Task 2) decisions at varying stimulus onset asynchronies. In Experiment 1, Task 2 difficulty was manipulated using words containing matching or mismatching coarticulatory cues to the final consonant. The results showed that difficulty and onset asynchrony combined in an underadditive way, suggesting that the phonemic mismatch was resolved prior to a central decisional bottleneck. Similar results were found in Experiment 2 using nonwords. In Experiment 3, the manipulation of task difficulty involved lexical status, which once again revealed an underadditive pattern of response times. Finally, Experiment 4 compared this prebottleneck variable with a decisional variable: response key bias. The latter showed an additive pattern of responses. The experiments show that resolution of phonemic ambiguity can take advantage of cognitive slack time at short asynchronies, indicating that phonemic integration takes place at a relatively early stage of spoken word recognition.
16.
This study explores incremental processing in spoken word recognition in Russian 5- and 6-year-olds and adults using free-viewing eye-tracking. Participants viewed scenes containing pictures of four familiar objects and clicked on a target embedded in a spoken instruction. In the cohort condition, two object names shared identical three-phoneme onsets. In the noncohort condition, all object names had unique onsets. Coarse-grain analyses of eye movements indicated that adults produced looks to the competitor on significantly more cohort trials than on noncohort trials, whereas children surprisingly failed to demonstrate cohort competition due to widespread exploratory eye movements across conditions. Fine-grain analyses, in contrast, showed a similar time course of eye movements across children and adults, but with cohort competition lingering more than 1 s longer in children. The dissociation between coarse-grain and fine-grain eye movements indicates a need to consider multiple behavioral measures in making developmental comparisons in language processing.
17.
Thirty years of research has uncovered the broad principles that characterize spoken word processing across listeners. However, there have been few systematic investigations of individual differences. Such an investigation could help refine models of word recognition by indicating which processing parameters are likely to vary, and could also have important implications for work on language impairment. The present study begins to fill this gap by relating individual differences in overall language ability to variation in online word recognition processes. Using the visual world paradigm, we evaluated online spoken word recognition in adolescents who varied in both basic language abilities and non-verbal cognitive abilities. Eye movements to target, cohort and rhyme objects were monitored during spoken word recognition, as an index of lexical activation. Adolescents with poor language skills showed fewer looks to the target and more fixations to the cohort and rhyme competitors. These results were compared to a number of variants of the TRACE model (McClelland & Elman, 1986) that were constructed to test a range of theoretical approaches to language impairment: impairments at sensory and phonological levels; vocabulary size, and generalized slowing. None of the existing approaches were strongly supported, and variation in lexical decay offered the best fit. Thus, basic word recognition processes like lexical decay may offer a new way to characterize processing differences in language impairment.
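The "lexical decay" parameter invoked in the abstract above can be made concrete with a toy activation model. The sketch below is loosely inspired by interactive-activation accounts such as TRACE (McClelland & Elman, 1986) but is not that implementation: the update rule and the decay and gain parameters are illustrative assumptions, chosen only to show how faster decay shrinks the residual activation of a cohort competitor.

```python
# Toy lexical-competition sketch (illustrative; NOT the TRACE model).
# Each time step, a word gains activation for matching the current input
# phoneme and loses a fixed proportion (decay) of its current activation.

def activations(target_input, lexicon, decay=0.1, gain=1.0):
    """Accumulate activation for each word as input phonemes arrive."""
    act = {w: 0.0 for w in lexicon}
    for t, phoneme in enumerate(target_input):
        for w in lexicon:
            match = 1.0 if t < len(w) and w[t] == phoneme else 0.0
            act[w] = (1.0 - decay) * act[w] + gain * match
    return act

lexicon = ["cat", "cap", "dog"]  # "cap" is a cohort competitor of "cat"
low = activations("cat", lexicon, decay=0.1)   # slow decay
high = activations("cat", lexicon, decay=0.5)  # fast decay
```

With slow decay the competitor "cap" retains most of the activation it earned from the shared /ka/ onset; with fast decay that residue dies away, which is the qualitative pattern a decay-based account of competition differences relies on.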
18.
Roelofs A, Ozdemir R, Levelt WJ. Journal of Experimental Psychology: Learning, Memory, and Cognition, 2007, 33(5): 900-913
In 4 chronometric experiments, influences of spoken word planning on speech recognition were examined. Participants were shown pictures while hearing a tone or a spoken word presented shortly after picture onset. When a spoken word was presented, participants indicated whether it contained a prespecified phoneme. When the tone was presented, they indicated whether the picture name contained the phoneme (Experiment 1) or they named the picture (Experiment 2). Phoneme monitoring latencies for the spoken words were shorter when the picture name contained the prespecified phoneme compared with when it did not. Priming of phoneme monitoring was also obtained when the phoneme was part of spoken nonwords (Experiment 3). However, no priming of phoneme monitoring was obtained when the pictures required no response in the experiment, regardless of monitoring latency (Experiment 4). These results provide evidence that an internal phonological pathway runs from spoken word planning to speech recognition and that active phonological encoding is a precondition for engaging the pathway.
19.
Strand J, Simenstad A, Cooperman A, Rowe J. Memory & Cognition, 2014, 42(4): 676-687
When perceiving spoken language, listeners must match the incoming acoustic phonetic input to lexical representations in memory. Models that quantify this process propose that the input activates multiple lexical representations in parallel and that these activated representations compete for recognition (Weber & Scharenborg, 2012). In two experiments, we assessed how grammatically constraining contexts alter the process of lexical competition. The results suggest that grammatical context constrains the lexical candidates that are activated to grammatically appropriate competitors. Stimulus words with little competition from items of the same grammatical class benefit more from the addition of grammatical context than do words with more within-class competition. The results provide evidence that top-down contextual information is integrated in the early stages of word recognition. We propose adding a grammatical class level of analysis to existing models of word recognition to account for these findings.
20.
Revill KP, Tanenhaus MK, Aslin RN. Journal of Experimental Psychology: Learning, Memory, and Cognition, 2008, 34(5): 1207-1223
Three eye movement studies with novel lexicons investigated the role of semantic context in spoken word recognition, contrasting 3 models: restrictive access, access-selection, and continuous integration. Actions directed at novel shapes caused changes in motion (e.g., looming, spinning) or state (e.g., color, texture). Across the experiments, novel names for the actions and the shapes varied in frequency, cohort density, and whether the cohorts referred to actions (Experiment 1) or shapes with action-congruent or action-incongruent affordances (Experiments 2 and 3). Experiment 1 demonstrated effects of frequency and cohort competition from both displayed and non-displayed competitors. In Experiment 2, a biasing context induced an increase in anticipatory eye movements to congruent referents and reduced the probability of looks to incongruent cohorts, without the delay predicted by access-selection models. In Experiment 3, context did not reduce competition from non-displayed incompatible neighbors as predicted by restrictive access models. The authors conclude that the results are most consistent with continuous integration models.
Three eye movement studies with novel lexicons investigated the role of semantic context in spoken word recognition, contrasting 3 models: restrictive access, access-selection, and continuous integration. Actions directed at novel shapes caused changes in motion (e.g., looming, spinning) or state (e.g., color, texture). Across the experiments, novel names for the actions and the shapes varied in frequency, cohort density, and whether the cohorts referred to actions (Experiment 1) or shapes with action-congruent or action-incongruent affordances (Experiments 2 and 3). Experiment 1 demonstrated effects of frequency and cohort competition from both displayed and non-displayed competitors. In Experiment 2, a biasing context induced an increase in anticipatory eye movements to congruent referents and reduced the probability of looks to incongruent cohorts, without the delay predicted by access-selection models. In Experiment 3, context did not reduce competition from non-displayed incompatible neighbors as predicted by restrictive access models. The authors conclude that the results are most consistent with continuous integration models. 相似文献