Similar Documents
20 similar documents retrieved (search time: 15 ms).
1.
Skilled blind readers read French nouns with the uniqueness point in different locations, presented in unabbreviated braille, and either pronounced each item (Experiment 1) or classified it as to gender (Experiments 1-3). As in previous studies with spoken words, effects of uniqueness point location on recognition reaction time were taken as demonstrating on-line lexical access. For braille words, significant effects were obtained in both tasks in Experiment 1. In Experiment 2, blind subjects demonstrated comparable relative uniqueness point effects for gender classification of braille and of spoken words, showing that on-line lexical access is not specific to speech. Experiment 3 showed that the effect of uniqueness point location is limited to the higher frequency words. Finally, mean finger scanning speed did not differ between the pre- and post-uniqueness point regions of the words.

2.
Spoken word recognition by eye (cited 2 times: 2 self-citations, 0 by others)
Spoken word recognition is thought to be achieved via competition in the mental lexicon between perceptually similar word forms. A review of the development and initial behavioral validations of computational models of visual spoken word recognition is presented, followed by a report of new empirical evidence. Specifically, a replication and extension of Mattys, Bernstein & Auer's (2002) study was conducted with 20 deaf participants who varied widely in speechreading ability. Participants visually identified isolated spoken words. Accuracy of visual spoken word recognition was influenced by the number of visually similar words in the lexicon and by the frequency of occurrence of the stimulus words. The results are consistent with the common view held within auditory word recognition that this task is accomplished via a process of activation and competition in which frequently occurring units are favored. Finally, future directions for visual spoken word recognition are discussed.

3.
Recognition memory for spoken words is influenced by phonetic resemblance between test words and items presented during study. Presentation of derived nonwords (e.g., /d/ransparent or transparen/d/) on a study list produces a higher than normal false recognition rate to base words (e.g., transparent). Test words that share beginning phonemes with studied nonwords have more false recognitions than do those that share ending phonemes. The latter difference has been attributed to familiarity resulting from prerecognition processing of spoken stimuli. As a listener hears /traens/, "transparent" may be activated as a potential solution. In the present experiments, we minimized contributions of postrecognition processing to this phenomenon by presenting a semantically unrelated test word (transportation) that was also expected to be activated during prerecognition stages of processing. The results indicated that false recognition was increased for words presumed to be activated only during prerecognition processing. Remember (R) and know (K) judgments revealed that the majority of studied words were R, and the majority of false recognitions were K. The lowest proportion of R judgments occurred for test words that were not activated during postrecognition processing (e.g., transportation and control words).

4.
This study examined whether spatial location mediates intentional forgetting of peripherally presented words. Using an item-method directed forgetting paradigm, words were presented in peripheral locations at study. A recognition test presented all words at either the same or a different location relative to study. Results showed that while recognition of Remember words was unaffected by test location, when Forget words were presented in the same location at test as at study, recognition accuracy was significantly greater than when presented in a different location. Experiment 2 showed that the speed to localize a previously studied word was faster when it was presented in the same rather than a different study-test location but that the magnitude of this spatial priming was unaffected by memory instruction. We suggest that the location of peripherally presented words is represented in memory and can aid the retrieval of poorly encoded words.

5.
False recognition of new test words is higher for experimental lures (e.g., universal) with initial phonemes identical to studied words (e.g., university) than for control lures. A proposed mechanism to explain this phenomenon involves implicit activation of potential solution words during the brief period of uncertainty immediately following onset of a spoken study word. Two experiments examined whether the presumed pre-recognition processing during the stimulus discovery phase of spoken word identification increased familiarity of a studied word, thereby increasing correct recognitions and estimates of presentation frequency. Critical test words were presented a single time during study in Experiment 1, and their phonologically related words were presented one, two, or three times. Correct recognition and frequency estimates of targets were enhanced by multiple presentations of associates sharing initial phonemes. Experiment 2 provided a replication with five repetitions of phonological associates during study and two study presentations of critical test words. The results of these two experiments confirmed a necessary theoretical consequence of the implicit activation mechanism that has been invoked to explain the effects of phonological similarity on false recognition.

6.
False recognition of new test words is higher for experimental lures (e.g., universal) with initial phonemes identical to studied words (e.g., university) than for control lures. A proposed mechanism to explain this phenomenon involves implicit activation of potential solution words during the brief period of uncertainty immediately following onset of a spoken study word. Two experiments examined whether the presumed pre-recognition processing during the stimulus discovery phase of spoken word identification increased familiarity of a studied word, thereby increasing correct recognitions and estimates of presentation frequency. Critical test words were presented a single time during study in Experiment 1, and their phonologically related words were presented one, two, or three times. Correct recognition and frequency estimates of targets were enhanced by multiple presentations of associates sharing initial phonemes. Experiment 2 provided a replication with five repetitions of phonological associates during study and two study presentations of critical test words. The results of these two experiments confirmed a necessary theoretical consequence of the implicit activation mechanism that has been invoked to explain the effects of phonological similarity on false recognition.

7.
Previous work has demonstrated that talker-specific representations affect spoken word recognition relatively late during processing. However, participants in these studies were listening to unfamiliar talkers. In the present research, we used a long-term repetition-priming paradigm and a speeded-shadowing task and presented listeners with famous talkers. In Experiment 1, half the words were spoken by Barack Obama, and half by Hillary Clinton. Reaction times (RTs) to repeated words were shorter than those to unprimed words only when repeated by the same talker. However, in Experiment 2, using nonfamous talkers, RTs to repeated words were shorter than those to unprimed words both when repeated by the same talker and when repeated by a different talker. Taken together, the results demonstrate that talker-specific details can affect the perception of spoken words relatively early during processing when words are spoken by famous talkers.

8.
Auditory recognition without identification (cited 1 time: 0 self-citations, 1 by others)
When visual recognition test items are unidentifiable (through fragmentation, for example), participants can discriminate between unidentifiable items that were presented recently and those that were not. The present study extends this recognition without identification phenomenon to the auditory modality. In several experiments, participants listened to words and were then presented with spoken recognition test items that were embedded in white noise. Participants attempted to identify each spoken word through the white noise, then rated the likelihood that the word was studied. Auditory recognition without identification was found: Participants discriminated between studied and unstudied words in the absence of an ability to identify them through white noise, even when the voice changed from male to female and when the study list was presented visually. The effect was also found when identification was hindered through the isolation of particular phonemes, suggesting that phoneme information may be present in memory traces for recently spoken words.

9.
Two experiments explored repetition priming effects for spoken words and pseudowords in order to investigate abstractionist and episodic accounts of spoken word recognition and repetition priming. In Experiment 1, lexical decisions were made on spoken words and pseudowords with half of the items presented twice (~12 intervening items). Half of all repetitions were spoken in a “different voice” from the first presentations. Experiment 2 used the same procedure but with stimuli embedded in noise to slow responses. Results showed greater priming for words than for pseudowords and no effect of voice change in both normal and effortful processing conditions. Additional analyses showed that for slower participants, priming is more equivalent for words and pseudowords, suggesting episodic stimulus–response associations that suppress familiarity-based mechanisms that ordinarily enhance word priming. By relating behavioural priming to the time-course of pseudoword identification, we showed that under normal listening conditions (Experiment 1) priming reflects facilitation of both perceptual and decision components, whereas in effortful listening conditions (Experiment 2) priming effects primarily reflect enhanced decision/response generation processes. Both stimulus–response associations and enhanced processing of sensory input seem to be voice independent, providing novel evidence concerning the degree of perceptual abstraction in the recognition of spoken words and pseudowords.

10.
In four experiments, we examined the degree to which imaging written words as spoken by a familiar talker differs from direct perception (hearing words spoken by that talker) and reading words (without imagery) on implicit and explicit tests. Subjects first performed a surface encoding task on spoken, imagined as spoken, or visually presented words, and then were given either an implicit test (perceptual identification or stem completion) or an explicit test (recognition or cued recall) involving auditorily presented words. Auditory presentation at study produced larger priming effects than did imaging or reading. Imaging and reading yielded priming effects of similar magnitude, whereas imaging produced lower performance than reading on the explicit test of cued recall. Voice changes between study and test weakened priming on the implicit tests, but did not affect performance on the explicit tests. Imagined voice changes affected priming only in the implicit task of stem completion. These findings show that the sensitivity of a memory test to perceptual information, either directly perceived or imagined, is an important dimension for dissociating incidental (implicit) and intentional (explicit) retrieval processes.

11.
Subjects were introduced to one male and one female voice by a tape recording with instructions to attend to characteristics of the voices. Then 18 pairs of words were presented visually on slides. The subject’s task during each 10-sec interslide interval was to repeat silently the pair of words over and over again in the male voice, in the female voice, or in the subject’s own voice. A surprise recognition test for the words indicated that the words were more likely to be recognized if they were spoken in the same voice at test as was used to repeat them during presentation. Recognition of the words repeated in the subject’s own voice was not affected by the sex of the speaker at test. In Experiment 2, different speakers were used at test than those used by the subjects to repeat the words. The interaction between the sex of voice used at encoding and at test was again significant, but recognition was generally lower than in Experiment 1. It was concluded that it is not necessary to assume that subjects retain literal copies of spoken words in memory, but that the speaker’s voice forms an integral part of the verbal memory code, and that its influence is specific to a given speaker as well as to a given class of speakers (male or female).

12.
The present study investigated the role of emotional tone of voice in the perception of spoken words. Listeners were presented with words that had either a happy, sad, or neutral meaning. Each word was spoken in a tone of voice (happy, sad, or neutral) that was congruent, incongruent, or neutral with respect to affective meaning, and naming latencies were collected. Across experiments, tone of voice was either blocked or mixed with respect to emotional meaning. The results suggest that emotional tone of voice facilitated linguistic processing of emotional words in an emotion-congruent fashion. These findings suggest that information about emotional tone is used in the processing of linguistic content, influencing the recognition and naming of spoken words in an emotion-congruent manner.

13.
Previous research has shown that words presented on metaphor congruent locations (e.g., positive words UP on the screen and negative words DOWN on the screen) are categorized faster than words presented on metaphor incongruent locations (e.g., positive words DOWN and negative words UP). These findings have been explained in terms of an interference effect: The meaning associated with UP and DOWN vertical space can automatically interfere with the categorization of words with a metaphorically incongruent meaning. The current studies test an alternative explanation for the interaction between the vertical position of abstract concepts and the speed with which these stimuli are categorized. Research on polarity differences (basic asymmetries in the way dimensions are processed) predicts that +polar endpoints of dimensions (e.g., positive, moral, UP) are categorized faster than -polar endpoints of dimensions (e.g., negative, immoral, DOWN). Furthermore, the polarity correspondence principle predicts that stimuli whose polarities correspond (e.g., positive words presented UP) provide an additional processing benefit compared to stimuli whose polarities do not correspond (e.g., negative words presented UP). A meta-analysis (Study 1) shows that a polarity account provides a better explanation of reaction time patterns in previous studies than an interference explanation. An experiment (Study 2) reveals that controlling for the polarity benefit of +polar words compared to -polar words removed not only the main effect of word polarity but also the interaction between word meaning and vertical position due to polarity correspondence. These results reveal that metaphor congruency effects should not be interpreted as automatic associations between vertical locations and word meaning but are instead more parsimoniously explained by structural overlap in polarities.

14.
Online comprehension of naturally spoken and perceptually degraded words was assessed in 95 children ages 12 to 31 months. The time course of word recognition was measured by monitoring eye movements as children looked at pictures while listening to familiar target words presented in unaltered, time-compressed, and low-pass-filtered forms. Success in word recognition varied with age and level of vocabulary development, and with the perceptual integrity of the word. Recognition was best overall for unaltered words, lower for time-compressed words, and significantly lower in low-pass-filtered words. Reaction times were fastest in compressed, followed by unaltered and filtered words. Results showed that children were able to recognize familiar words in challenging conditions and that productive vocabulary size was more sensitive than chronological age as a predictor of children's accuracy and speed in word recognition.

15.
Facilitation of auditory word recognition (cited 4 times: 0 self-citations, 4 by others)
An experiment that investigated facilitation of recognition of spoken words presented in noise is described. Prior to the test session, the subjects either read words or heard them spoken in one of two voices while making a semantic judgment upon them. There was a large effect of auditory priming on word recognition that did not depend upon the voice (male or female) of presentation. There were much smaller, but significant, effects of prior visual experience of the words. The implications of these data for the logogen model are discussed.

16.
Online comprehension of naturally spoken and perceptually degraded words was assessed in 95 children ages 12 to 31 months. The time course of word recognition was measured by monitoring eye movements as children looked at pictures while listening to familiar target words presented in unaltered, time-compressed, and low-pass-filtered forms. Success in word recognition varied with age and level of vocabulary development, and with the perceptual integrity of the word. Recognition was best overall for unaltered words, lower for time-compressed words, and significantly lower in low-pass-filtered words. Reaction times were fastest in compressed, followed by unaltered and filtered words. Results showed that children were able to recognize familiar words in challenging conditions and that productive vocabulary size was more sensitive than chronological age as a predictor of children's accuracy and speed in word recognition.

17.
The neighborhood activation model (NAM; P. A. Luce & Pisoni, 1998) of spoken word recognition was applied to the problem of predicting accuracy of visual spoken word identification. One hundred fifty-three spoken consonant-vowel-consonant words were identified by a group of 12 college-educated adults with normal hearing and a group of 12 college-educated deaf adults. In both groups, item identification accuracy was correlated with the computed NAM output values. Analysis of subsets of the stimulus set demonstrated that when stimulus intelligibility was controlled, words with fewer neighbors were easier to identify than words with many neighbors. However, when neighborhood density was controlled, variation in segmental intelligibility was minimally related to identification accuracy. The present study provides evidence of a common spoken word recognition system for both auditory and visual speech that retains sensitivity to the phonetic properties of the input.
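To make the idea of a computed NAM output value concrete, the Python sketch below implements a frequency-weighted neighborhood probability of the general form used in neighborhood activation models: the stimulus word's frequency-weighted perceptual support divided by the summed support for the stimulus and all of its neighbors. The example word, frequencies, and intelligibility values are hypothetical illustrations, not parameters or data from the study.

# Minimal sketch of a NAM-style frequency-weighted neighborhood probability.
# All numbers below are illustrative assumptions, not values from the study.

def nam_output(stim_prob, stim_freq, neighbors):
    """Schematic Luce-choice rule: the stimulus word's frequency-weighted
    perceptual support divided by the summed support for the stimulus
    and all of its phonological neighbors."""
    stim_support = stim_prob * stim_freq
    neighbor_support = sum(p * f for p, f in neighbors)
    return stim_support / (stim_support + neighbor_support)

# Hypothetical CVC word: its segment-based intelligibility and log frequency,
# plus a few neighbors given as (intelligibility, log frequency) pairs.
value = nam_output(stim_prob=0.60, stim_freq=2.3,
                   neighbors=[(0.20, 1.8), (0.15, 2.9), (0.10, 1.2)])
print(round(value, 3))  # higher output -> easier identification (fewer/weaker neighbors)

On this account, holding intelligibility constant while adding neighbors lowers the output value, which is the pattern the study reports for words from dense neighborhoods.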

18.
With a new metric called phonological Levenshtein distance (PLD20), the present study explores the effects of phonological similarity and word frequency on spoken word recognition, using polysyllabic words that have neither phonological nor orthographic neighbors, as defined by neighborhood density (the N-metric). Inhibitory effects of PLD20 were observed for these lexical hermits: Close-PLD20 words were recognized more slowly than distant-PLD20 words, indicating lexical competition. Importantly, these inhibitory effects were found only for low- (not high-) frequency words, in line with previous findings that phonetically related primes inhibit recognition of low-frequency words. These results indicate that the properties of PLD20 (a continuous measure of word-form similarity) make it a promising new metric for quantifying phonological distinctiveness in spoken word recognition research.
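The metric itself is straightforward to compute. The Python sketch below assumes, by analogy with the orthographic OLD20 measure, that a word's PLD20 is the mean Levenshtein (edit) distance from its phonemic transcription to the 20 closest transcriptions in the lexicon; the toy lexicon and transcriptions are invented for illustration and are not stimuli from the study.

# Sketch of a PLD20 computation: mean edit distance from a word's phoneme
# string to its 20 nearest neighbors in the lexicon (assumed definition,
# analogous to OLD20). Transcriptions here are toy examples.

def levenshtein(a, b):
    """Standard edit distance between two phoneme sequences."""
    prev = list(range(len(b) + 1))
    for i, pa in enumerate(a, 1):
        cur = [i]
        for j, pb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                 # deletion
                           cur[j - 1] + 1,              # insertion
                           prev[j - 1] + (pa != pb)))   # substitution
        prev = cur
    return prev[-1]

def pld20(word, lexicon, n=20):
    """Mean distance to the n closest other entries in the lexicon."""
    dists = sorted(levenshtein(word, other) for other in lexicon if other != word)
    return sum(dists[:n]) / min(n, len(dists))

# Toy phonemic lexicon: each word is a tuple of phoneme symbols.
lexicon = [("k", "ae", "t"), ("k", "ae", "b"), ("b", "ae", "t"),
           ("h", "ae", "t"), ("k", "ah", "p"), ("d", "aa", "g")]
print(pld20(("k", "ae", "t"), lexicon))  # smaller value -> denser phonological space

Because it is a graded distance rather than a neighbor count, PLD20 remains informative even for the "lexical hermits" described above, which have no neighbors under the N-metric.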

19.
In this research, we investigated the effects of voice and face information on the perceptual learning of talkers and on long-term memory for spoken words. In the first phase, listeners were trained over several days to identify voices from words presented auditorily or audiovisually. The training data showed that visual information about speakers enhanced voice learning, revealing cross-modal connections in talker processing akin to those observed in speech processing. In the second phase, the listeners completed an auditory or audiovisual word recognition memory test in which equal numbers of words were spoken by familiar and unfamiliar talkers. The data showed that words presented by familiar talkers were more likely to be retrieved from episodic memory, regardless of modality. Together, these findings provide new information about the representational code underlying familiar talker recognition and the role of stimulus familiarity in episodic word recognition.

20.
This study investigated whether English speakers retained the lexical stress patterns of newly learned Spanish words. Participants studied spoken Spanish words (e.g., DUcha [shower], ciuDAD [city]; stressed syllables in capital letters) and subsequently performed a recognition task, in which studied words were presented with the same lexical stress pattern (DUcha) or the opposite lexical stress pattern (CIUdad). Participants were able to discriminate same- from opposite-stress words, indicating that lexical stress was encoded and used in the recognition process. Word-form similarity to English also influenced outcomes, with Spanish cognate words and words with trochaic stress (MANgo) being recognized more often and more quickly than Spanish cognate words with iambic stress (soLAR) and noncognates. The results suggest that while segmental and suprasegmental features of the native language influence foreign word recognition, foreign lexical stress patterns are encoded and not discarded in memory.
