Similar Documents
20 similar documents found (search time: 31 ms)
1.
A semantic relatedness decision task was used to investigate whether phonological recoding occurs automatically and whether it mediates lexical access in visual word recognition and reading. In this task, subjects read a pair of words and decided whether they were related or unrelated in meaning. In Experiment 1, unrelated word-homophone pairs (e.g., LION-BARE) and their visual controls (e.g., LION-BEAN) as well as related word pairs (e.g., FISH-NET) were presented. Homophone pairs were more likely to be judged as related or more slowly rejected as unrelated than their control pairs, suggesting phonological access of word meanings. In Experiment 2, word-pseudohomophone pairs (e.g., TABLE-CHARE) and their visual controls (e.g., TABLE-CHARK) as well as related and unrelated word pairs were presented. Pseudohomophone pairs were more likely to be judged as related or more slowly rejected as unrelated than their control pairs, again suggesting automatic phonological recoding in reading.

2.
We propose that speech comprehension involves the activation of token representations of the phonological forms of current lexical hypotheses, separately from the ongoing construction of a conceptual interpretation of the current utterance. In a series of cross-modal priming experiments, facilitation of lexical decision responses to visual target words (e.g., time) was found for targets that were semantic associates of auditory prime words (e.g., date) when the primes were isolated words, but not when the same primes appeared in sentence contexts. Identity priming (e.g., faster lexical decisions to visual date after spoken date than after an unrelated prime) appeared, however, both with isolated primes and with primes in prosodically neutral sentences. Associative priming in sentence contexts emerged only when sentence prosody involved contrastive accents, or when sentences were terminated immediately after the prime. Associative priming is therefore not an automatic consequence of speech processing. In no experiment was there associative priming from embedded words (e.g., sedate-time), but there was inhibitory identity priming (e.g., sedate-date) from embedded primes in sentence contexts. Speech comprehension therefore appears to involve distinct activation of both token phonological word representations and conceptual word representations. Furthermore, both of these types of representation are distinct from the long-term memory representations of word form and meaning.

3.
Four experiments were conducted to determine whether semantic feedback spreads to orthographic and/or phonological representations during visual word recognition and whether such feedback occurs automatically. Three types of prime-target word pairs were used within the mediated-priming paradigm: (1) homophonically mediated (e.g., frog-[toad]-towed), (2) orthographically mediated (e.g., frog-[toad]-told), and (3) associatively related (e.g., frog-toad). Using both brief (53 msec; Experiment 1) and long (413 msec; Experiment 3) prime exposure durations, significant facilitatory-priming effects were found in the response time data with orthographically, but not homophonically, mediated prime-target word pairs. When the prime exposure duration was shortened to 33 msec in Experiment 4, however, facilitatory priming was absent with both orthographically and homophonically mediated word pairs. In addition, with a brief (53-msec) prime exposure duration, direct-priming effects were found with associatively (e.g., frog-toad), orthographically (e.g., toad-told), and homophonically (e.g., toad-towed) related word pairs in Experiment 2. Taken together, these results indicate that following the initial activation of semantic representations, activation automatically feeds back to orthographic, but not phonological, representations during the early stages of word processing. These findings were discussed in the context of current accounts of visual word recognition.

4.
Naming an object in the context of other objects requires the selection and processing of the target object at different levels, while the processing of competing representations activated by context objects has to be constrained. At what stage are these competing representations attenuated? To address this question, we presented pairs of target and context objects that were either similar in visual shape (e.g., umbrella–palm tree) or dissimilar in visual shape (e.g., umbrella–ladder), so that the context object would attract various amounts of attention. The activation of the context object at different levels of processing was assessed by means of auditory distractors (semantically related, or phonologically related, or unrelated to the context object). Semantic and phonological distractor effects were observed for shape-related object pairs, but not for unrelated object pairs. This finding suggests that context objects do not activate their associated lexical representations to any substantial amount, unless they capture attention. In that case, they undergo full lexical processing up to a phonological level. Implications for models of word production are discussed.

5.
While previous research has demonstrated that words can be processed more rapidly and/or more accurately than random strings of letters, it has not been convincingly demonstrated that the superior processing of words is a visual effect. In the present experiment, the cases of letters were manipulated in letter strings that were to be compared on the basis of physical identity. Mean response time was shorter for words than for nonwords even for pairs of letter strings that differed only in case (e.g., SITE-site). This finding implies that the advantage of words over nonwords (the familiarity effect) typically observed in the simultaneous matching task is not due solely to comparison of either the word names or the letter names and, thus, that at least part of the familiarity effect must be due to more rapid formation and/or comparison of visual representations of the two letter strings when they are words. Further analysis failed to reveal a significant involvement of phonemic or lexical codes in the comparison judgments.

6.
A considerable body of empirical and theoretical research suggests that morphological structure governs the representation of words in memory and that many words are decomposed into morphological components in processing. The authors investigated an alternative approach in which morphology arises from the interaction of semantic and phonological codes. A series of cross-modal lexical decision experiments shows that the magnitude of priming reflects the degree of semantic and phonological overlap between words. Crucially, moderately similar items produce intermediate facilitation (e.g., lately-late). This pattern is observed for word pairs exhibiting different types of morphological relationships, including suffixed-stem (e.g., teacher-teach), suffixed-suffixed (e.g., saintly-sainthood), and prefixed-stem pairs (e.g., preheat-heat). The results can be understood in terms of connectionist models that use distributed representations rather than discrete morphemes.
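A minimal sketch of this graded account in Python; the vectors, noise levels, and the multiplicative combination rule are illustrative assumptions, not the authors' implementation. Priming magnitude is modeled as a joint function of semantic and phonological overlap between distributed representations, so a moderately similar pair such as lately-late falls between identity and unrelated baselines:

    # Illustrative sketch: priming magnitude as graded semantic x phonological
    # overlap between distributed vectors (no discrete morpheme units).
    import numpy as np

    def cosine(a, b):
        """Cosine similarity between two distributed representations."""
        return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

    def predicted_priming(sem_p, sem_t, phon_p, phon_t):
        """Graded priming: product of semantic and phonological overlap."""
        return cosine(sem_p, sem_t) * cosine(phon_p, phon_t)

    rng = np.random.default_rng(0)
    sem, phon = rng.normal(size=50), rng.normal(size=50)
    noisy = lambda v, s: v + rng.normal(scale=s, size=v.shape)

    # 'teacher-teach'-like pair: high overlap on both codes -> strong priming
    print(predicted_priming(sem, noisy(sem, 0.3), phon, noisy(phon, 0.3)))
    # unrelated pair: little overlap on either code -> negligible priming
    print(predicted_priming(sem, rng.normal(size=50), phon, rng.normal(size=50)))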

7.
We examined phonological priming in illiterate adults, using a cross-modal picture-word interference task. Participants named pictures while hearing distractor words at different stimulus onset asynchronies (SOAs). Ex-illiterates and university students were also tested. We specifically assessed the ability of the three populations to use fine-grained, phonemic units in phonological encoding of spoken words. In the phoneme-related condition, auditory words shared only the first phoneme with the target name. All participants named pictures faster with phoneme-related word distractors than with unrelated word distractors. The results thus show that phonemic representations intervene in phonological output processes independently of literacy. However, the phonemic priming effect was observed at a later SOA in illiterates compared to both ex-illiterates and university students. This may be attributed to differences in speed of picture identification.

8.
Three cross-modal priming experiments examined the role of suprasegmental information in the processing of spoken words. All primes consisted of truncated spoken Dutch words. Recognition of visually presented word targets was facilitated by prior auditory presentation of the first two syllables of the same words as primes, but only if they were appropriately stressed (e.g., OKTOBER preceded by okTO-); inappropriate stress, compatible with another word (e.g., OKTOBER preceded by OCto-, the beginning of octopus), produced inhibition. Monosyllabic fragments (e.g., OC-) also produced facilitation when appropriately stressed; if inappropriately stressed, they produced neither facilitation nor inhibition. The bisyllabic fragments that were compatible with only one word produced facilitation to semantically associated words, but inappropriate stress caused no inhibition of associates. The results are explained within a model of spoken-word recognition involving competition between simultaneously activated phonological representations followed by activation of separate conceptual representations for strongly supported lexical candidates; at the level of the phonological representations, activation is modulated by both segmental and suprasegmental information.
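A rough sketch of how segmental and suprasegmental information might jointly modulate candidate activation, under assumptions of my own: the letter-based matching, the stress coding, and the mismatch penalty are all hypothetical, not the model's actual parameters. Candidates are scored by segmental overlap with the fragment, and a stress mismatch discounts the score, so OCto- favors octopus over oktober:

    # Hypothetical activation score: segmental overlap discounted by stress mismatch.
    def activation(fragment, frag_stress, word, word_stress, penalty=0.5):
        seg = sum(a == b for a, b in zip(fragment, word)) / len(word)  # segment match
        stress_ok = word_stress.startswith(frag_stress)                # stress compatible?
        return seg * (1.0 if stress_ok else penalty)

    # 's' = stressed syllable, 'w' = weak; okTO- is w-s, OCto- is s-w.
    for frag, stress in [("okto", "ws"), ("octo", "sw")]:
        scores = {w: round(activation(frag, stress, w, s), 2)
                  for w, s in [("oktober", "wsw"), ("octopus", "sws")]}
        print(frag, stress, scores)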

9.
Models of word production and comprehension can be split into two broad classes: localist and distributed. In localist architectures each word within the lexicon is represented by a single unit. The distributed approach, on the other hand, encodes each lexical item as a pattern of activation across a set of shared units. If we assume that the localist representations are more than a convenient shorthand for distributed representations at the neuroanatomical level, it should be possible to find patients who, after brain injury, have lost specific words from their premorbid vocabulary. Following a closed head injury, JS had severe word-finding difficulties with no measurable semantic impairment, nor did he make phonological errors in naming. Cueing with an initial phoneme proved relatively ineffective. JS showed a high degree of item consistency across three administrations of two tests of naming to confrontation. This consistency could not be predicted from a linear combination of psycholinguistic variables, but the distribution fitted a stochastic model in which it is assumed that a proportion of items have become consistently unavailable. Further evidence is presented which suggests that these items are not, in fact, lost but rather have a very low probability of retrieval. Given phonemic cueing of sufficient length, or delayed repetition priming from a written word, the consistently unnamed items were produced by JS. Additional data are reported which seem to support a distributed model of speech production. JS's naming accuracy for one set of pictures was found to predict his performance on a second set of items only when the names of the pictures were both semantically and phonologically related (e.g., cat–rat). There was no association for pairs of pictures if they were only semantically (e.g., cat–dog) or phonologically related (e.g., cat–cap). It is argued that JS's data are best described in terms of a graded, non-linear, distributed model of speech production.
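The stochastic consistency model can be illustrated with a short simulation; the parameter values and the binomial-mixture form are assumptions for exposition, not the fitted model from the study. A proportion q of items is consistently unavailable (never named), while the remaining items are retrieved independently with probability p on each of three administrations, which concentrates mass at 0-of-3 and 3-of-3 and so predicts high item consistency:

    # Sketch of the stochastic availability model (parameters assumed).
    from math import comb

    def consistency_distribution(p, q, n_admin=3):
        """P(item named on exactly k of n_admin administrations): a mixture of
        a point mass at zero (unavailable items) and a binomial (available)."""
        dist = {k: (1 - q) * comb(n_admin, k) * p**k * (1 - p)**(n_admin - k)
                for k in range(n_admin + 1)}
        dist[0] += q  # consistently unavailable items are never named
        return dist

    # Most items are named 0 or 3 times out of 3: high item consistency.
    print(consistency_distribution(p=0.9, q=0.4))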

10.
Cross-situational statistical learning of words involves tracking co-occurrences of auditory words and objects across time to infer word-referent mappings. Previous research has demonstrated that learners can infer referents across sets of very phonologically distinct words (e.g., WUG, DAX), but it remains unknown whether learners can encode fine phonological differences during cross-situational statistical learning. This study examined learners' cross-situational statistical learning of minimal pairs that differed on one consonant segment (e.g., BON–TON), minimal pairs that differed on one vowel segment (e.g., DEET–DIT), and non-minimal pairs that differed on two or three segments (e.g., BON–DEET). Learners performed above chance for all pairs, but performed worse on vowel minimal pairs than on consonant minimal pairs or non-minimal pairs. These findings demonstrate that learners can encode fine phonetic detail while tracking word-referent co-occurrence probabilities, but they suggest that phonological encoding may be weaker for vowels than for consonants.
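A minimal sketch of the co-occurrence tracking that cross-situational statistical learning relies on; the trial structure and object names are invented for illustration, and real learners face the additional phonological-encoding demands this study manipulates. The learner tallies word-object co-occurrences across individually ambiguous trials and maps each word to its most frequent co-occurring object:

    # Minimal cross-situational learner: co-occurrence counts resolve mappings
    # that no single ambiguous trial can. Trial data are invented for illustration.
    from collections import Counter, defaultdict

    trials = [  # (words heard, objects in view); the true referent is unmarked
        (["BON", "TON"], ["obj_bon", "obj_ton"]),
        (["BON", "DEET"], ["obj_bon", "obj_deet"]),
        (["TON", "DEET"], ["obj_ton", "obj_deet"]),
    ]

    counts = defaultdict(Counter)
    for words, objects in trials:
        for w in words:
            counts[w].update(objects)  # every word co-occurs with every object

    mappings = {w: c.most_common(1)[0][0] for w, c in counts.items()}
    print(mappings)  # each word's most frequent co-occurrence is its referent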

11.
The three experiments reported in this study were each conducted in two phases. The first phase of Experiment 1 involved a same-different comparison task requiring “same” responses for both mixed-case (e.g., MAIN main) and pure-case (e.g., near near) pairs. This was followed by Phase 2, a surprise recognition test in which a graphemic effect on word retention was indicated by the superior recognition accuracy obtained for pure-case compared with mixed-case pairs. The first phases of Experiments 2 and 3 involved pronounceability and imageability judgment tasks, respectively. Graphemic retention was assessed by contrasting recognition accuracy for letter strings presented, during Phase 2, in their original Phase 1 case, with letter strings presented, during Phase 2, in a graphemically dissimilar new case. The experiments provided evidence that there was minimal retention of the graphemic representations from which the phonemic representations of words are generated and, further, that the locus of this effect is probably postlexical. Nonwords were recognized more accurately than words in all three experiments. The latter result was attributed to differences between nonwords and words in both graphemic retention and semantic distinctiveness.

12.
Neurobiological models of reading account for two ways in which orthography is converted to phonology: (1) familiar words, particularly those with exceptional spelling-sound mappings (e.g., shoe) access their whole-word lexical representations in the ventral visual stream, and (2) orthographically unfamiliar words, particularly those with regular spelling-sound mappings (i.e., pseudohomophones [PHs], which are orthographically novel but sound like real words; e.g., shue) are phonetically decoded via sublexical processing in the dorsal visual stream. The present study used a naming task in order to compare naming reaction time (RT) and response duration (RD) of exception and regular words to their PH counterparts. We replicated our earlier findings with words, and extended them to PH phonetic decoding by showing a similar effect on RT and RD of matched PHs. Given that the shorter RDs for exception words can be attributed to the benefit of whole-word processing in the orthographic word system, and the longer RTs for exception words to the conflict with phonetic decoding, our PH results demonstrate that phonetic decoding also involves top-down feedback from phonological lexical representations (e.g., activated by shue) to the orthographic representations of the corresponding correct word (e.g., shoe). Two computational models were tested for their ability to account for these effects: the DRC and the CDP+. The CDP+ fared best as it was capable of simulating both the regularity and stimulus type effect on RT for both word and PH identification, although not their over-additive interaction. Our results demonstrate that both lexical reading and phonetic decoding elicit a regularity dissociation between RT and RD that provides important constraints to all models of reading, and that phonetic decoding results in top-down feedback that bolsters the orthographic lexical reading process.

13.
To test the hypothesis that native language (L1) phonology can affect the lexical representations of nonnative words, a visual semantic-relatedness decision task in English was given to native speakers and nonnative speakers whose L1 was Japanese or Arabic. In the critical conditions, the word pair contained a homophone or near-homophone of a semantically associated word, where a near-homophone was defined as a phonological neighbor involving a contrast absent in the speaker’s L1 (e.g., ROCK-LOCK for native speakers of Japanese). In all participant groups, homophones elicited more false positive errors and slower processing than spelling controls. In the Japanese and Arabic groups, near-homophones also induced relatively more false positives and slower processing. The results show that, even when auditory perception is not involved, recognition of nonnative words and, by implication, their lexical representations are affected by the L1 phonology.

14.
In lexical decision experiments, subjects have difficulty in responding NO to non-words which are pronounced exactly like English words (e.g. BRANE). This does not necessarily imply that access to a lexical entry ever occurs via a phonological recoding of a visually-presented word. The phonological recoding procedure might be so slow that when the letter string presented is a word, access to its lexical entry via a visual representation is always achieved before phonological recoding is completed. If prelexical phonological recodings are produced by using grapheme-phoneme correspondence rules, such recodings can only occur for words which conform to these rules (regular words), since applications of the rules to words which do not conform to the rules (exception words) produce incorrect phonological representations. In two experiments, it was found that time to achieve lexical access (as measured by YES latency in a lexical decision task) was equivalent for regular words and exception words. It was concluded that access to lexical entries in lexical decision experiments of this sort does not proceed by sometimes or always phonologically recoding visually-presented words.

15.
Some English words contain a silent letter in the letter string (e.g., PSALM). Such words allow phonological similarity to be manipulated against words with a pronounced letter in the corresponding position (e.g., PASTA), providing a test of the phonological recoding hypothesis. Letter strings were presented with the silent or the pronounced letter removed: _salm as a phonological condition and _asta as an orthographic condition. "Psalm-type" words were processed faster than "pasta-type" words, indicating that phonology plays a leading role in word recognition.

16.
The present study investigated strategic variation in reliance on phonological mediation in visual word recognition. In Experiment 1, semantically related or unrelated word primes preceded word, pseudohomophone (e.g., trane), or nonpseudohomophone (e.g., trank) targets in a lexical decision task. Semantic priming effects were found for words, and response latencies to pseudohomophones were longer in related than in unrelated prime conditions. In Experiment 2, related or unrelated word primes preceded word or pseudohomophone targets. A relatedness effect was found for words, although it was significant at a 600-msec prime-target stimulus onset asynchrony (SOA) and not at a 200-msec SOA. There was no relatedness effect for pseudohomophones. Experiment 3 was a replication of Experiment 2, except that pseudohomophones were replaced by nonpseudohomophonic orthographic controls. Facilitation effects for related target words were greater in Experiment 3 than in Experiment 2. The results reflect apparent variations in the expectation that a related prime reliably indicates that a target is a word. Although reliance on phonological mediation might be strategically contingent, there could be a brief time period in which phonologically mediated lexical access occurs automatically. Whether phonological information is maintained or suppressed subsequently depends on its overall usefulness for the task.

17.
The notion of feedback activation from semantics to both orthography and phonology has recently been used to explain a number of semantic effects in visual word recognition, including polysemy effects (Hino & Lupker, 1996; Pexman & Lupker, 1999) and synonym effects (Pecher, 2001). In the present research, we tested an account based on feedback activation by investigating a new semantic variable: number of features (NOF). Words with high NOF (e.g., LION) should activate richer semantic representations than do words with low NOF (e.g., LIME). As a result, the feedback activation from semantics to orthographic and phonological representations should be greater for high-NOF words, which should produce superior lexical decision task (LDT) and naming task performance. The predicted facilitatory NOF effects were observed in both the LDT and naming.

18.
This experiment investigates whether the influence of spelling-to-sound correspondence on lexical decision may be due to the visual characteristics of irregular words rather than to irregularities in their phonological correspondence. Lexical decision times to three types of word were measured: words with both irregular orthography and spelling-to-sound correspondence (e.g., GHOUL, CHAOS), words with regular orthography but irregular spelling-to-sound correspondence (e.g., GROSS, LEVER), and words regular in both respects (e.g., SHACK, PLUG). Items were presented in upper- and lowercase in order to examine the influence of “word shape” on any irregularity effects obtained. The results showed that irregular words with regular orthographies were identified more slowly than regular words in both upper- and lowercase. Words that are both orthographically and phonologically irregular were identified much more slowly with lowercase presentation. However, with uppercase, the lexical decision time for these items did not differ significantly from those of regular words. These data indicate that previous demonstrations of the regularity effect in lexical decision were not due to the unusual visual characteristics of some of the words sampled. In addition, the data emphasize the role of word shape in word recognition and also suggest that words with unique orthographies may be processed differently from those whose orthography is similar to other words.

19.
Nonstrategic subjective threshold effects in phonemic masking
Three backward-masking experiments demonstrated that the magnitude of the phonemic mask reduction effect (MRE) is a function of subjective threshold and that the magnitude is also independent of stimulus-based response strategies. In all three experiments, a target word (e.g., bake) was backward masked by a graphemically similar nonword (e.g., BAWK), a phonemically similar nonword (e.g., BAIK), or an unrelated control (e.g., CRUG). Experiments 1 and 2 had a low percentage (9%) of trials with phonemic masks and differed only in baseline identification rate. Experiment 3 controlled baseline identification rate at below and above subjective threshold levels, with 9% phonemic trials. The results were that identification rates were higher with phonemic masks than with graphemic masks, irrespective of the low percentage of phonemic trials. However, the magnitude of the phonemic MRE became large only when the baseline identification rate was below subjective threshold. The pattern of the phonemic MRE was interpreted as a result of rapid automatic phonological activation, independent of stimulus-based processing strategies.

20.
The number and type of connections involving different levels of orthographic and phonological representations differentiate between several models of spoken and visual word recognition. At the sublexical level of processing, Borowsky, Owen, and Fonos (1999) demonstrated evidence for direct processing connections from grapheme representations to phoneme representations (i.e., a sensitivity effect) over and above any bias effects, but not in the reverse direction. Neural network models of visual word recognition implement an orthography to phonology processing route that involves the same connections for processing sublexical and lexical information, and thus a similar pattern of cross-modal effects for lexical stimuli are expected by models that implement this single type of connection (i.e., orthographic lexical processing should directly affect phonological lexical processing, but not in the reverse direction). Furthermore, several models of spoken word perception predict that there should be no direct connections between orthographic representations and phonological representations, regardless of whether the connections are sublexical or lexical. The present experiments examined these predictions by measuring the influence of a cross-modal word context on word target discrimination. The results provide constraints on the types of connections that can exist between orthographic lexical representations and phonological lexical representations.
