1.
The present study investigated whether the balance of neighborhood distribution (i.e., the way orthographic neighbors are spread across letter positions) influences visual word recognition. Three word conditions were compared. Word neighbors were either concentrated on one letter position (e.g., nasse/basse-lasse-tasse-masse), unequally spread across two letter positions (e.g., pelle/celle-selle-telle-perle), or equally spread across two letter positions (e.g., litre/titre-vitre-libre-livre). Predictions based on the interactive activation model [McClelland & Rumelhart (1981). Psychological Review, 88, 375–401] were generated by running simulations and were confirmed in the lexical decision task. Data showed that words were identified more rapidly when they had spread rather than concentrated neighbors. Furthermore, within the set of spread neighbors, words were recognized more rapidly when their neighbors were equally rather than unequally spread. The findings are explained in terms of activation and inhibition processes in the interactive activation framework.
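The neighborhood manipulation above can be illustrated with a minimal sketch that computes a word's orthographic neighbors (Coltheart's N: same-length words differing by exactly one letter) and the letter positions at which they differ; the toy lexicon simply reuses the abstract's French examples, and the function name is a placeholder, not from the study.

```python
def neighbor_positions(word, lexicon):
    """Map each one-letter-different neighbor of `word` to the
    (0-indexed) position at which it differs."""
    positions = {}
    for cand in lexicon:
        if cand == word or len(cand) != len(word):
            continue
        diffs = [i for i, (a, b) in enumerate(zip(word, cand)) if a != b]
        if len(diffs) == 1:  # exactly one substitution -> orthographic neighbor
            positions[cand] = diffs[0]
    return positions

lexicon = {"nasse", "basse", "lasse", "tasse", "masse",
           "litre", "titre", "vitre", "libre", "livre"}

# "nasse": all four neighbors differ at position 0 (concentrated)
print(neighbor_positions("nasse", lexicon))
# "litre": neighbors split between positions 0 and 2 (equally spread)
print(neighbor_positions("litre", lexicon))
```

In these terms, a "concentrated" word maps all its neighbors to a single position, while a "spread" word's neighbors cover two positions.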
2.
Orthographic learning via self-teaching in children learning to read English: effects of exposure, durability, and context
This experiment investigated orthographic learning via self-teaching in 8- and 9-year-olds learning to read English. Children were exposed to novel words, and following a 1- or 7-day delay interval, orthographic learning was assessed by asking children to select previously seen novel words from an array of visually and phonologically similar foils. Novel words were exposed either in meaningful text or in isolation, and number of exposures was manipulated with each novel word appearing once, twice, or four times. Learning increased as a function of number of exposures, although some evidence of durable one-trial learning was observed. Context played no role, suggesting that orthographic learning is not dependent on meaning-based information. In general, these findings offer support for the central aspects of Share's self-teaching hypothesis. However, although we observed a general relation between phonological decoding and orthographic learning, the relation did not hold at an item-by-item level of analysis, suggesting that a strong version of Share's item-based account is not correct.
3.
A large orthographic neighborhood (N) facilitates lexical decision for central and left visual field/right hemisphere (LVF/RH) presentation, but not for right visual field/left hemisphere (RVF/LH) presentation. Based on the SERIOL model of letter-position encoding, this asymmetric N effect is explained by differential activation patterns at the orthographic level. This analysis implies that it should be possible to negate the LVF/RH N effect and create an RVF/LH N effect by manipulating contrast levels in specific ways. In Experiment 1, these predictions were confirmed. In Experiment 2, we eliminated the N effect for both LVF/RH and central presentation. These results indicate that the letter level is the primary locus of the N effect under lexical decision, and that the hemispheric specificity of the N effect does not reflect differential processing at the lexical level.
4.
Hsiao JH. Brain and Language, 2011, 119(2), 89–98
In Chinese orthography, a dominant character structure exists in which a semantic radical appears on the left and a phonetic radical on the right (SP characters); a minority opposite arrangement also exists (PS characters). As the number of phonetic radical types is much greater than the number of semantic radical types, in SP characters the information is skewed to the right, whereas in PS characters it is skewed to the left. Through training a computational model for SP and PS character recognition that takes into account the locations in which the characters appear in the visual field during learning, but does not assume any fundamental hemispheric processing difference, we show that visual field differences can emerge as a consequence of the fundamental structural differences in information between SP and PS characters, as opposed to fundamental processing differences between the two hemispheres. This modeling result is also consistent with behavioral naming performance. This work provides strong evidence that perceptual learning, i.e., the information structure of word stimuli to which readers have long been exposed, is one of the factors that accounts for hemispheric asymmetry effects in visual word recognition.
5.
Visual Similarity of Words Alone Can Modulate Hemispheric Lateralization in Visual Word Recognition: Evidence From Modeling Chinese Character Recognition
In Chinese orthography, the most common character structure consists of a semantic radical on the left and a phonetic radical on the right (SP characters); the minority, opposite arrangement also exists (PS characters). Recent studies showed that SP character processing is more left hemisphere (LH) lateralized than PS character processing. Nevertheless, it remains unclear whether this is due to phonetic radical position or character type frequency. Through computational modeling with artificial lexicons, in which we implement a theory of hemispheric asymmetry in perception but do not assume that phonological processing is LH lateralized, we show that the difference in character type frequency alone is sufficient to produce the effect that the dominant type has a stronger LH lateralization than the minority type. This effect is due to higher visual similarity among characters in the dominant type than in the minority type, demonstrating the modulation of hemispheric lateralization by the visual similarity of words.
6.
Through computational modeling, here we examine whether visual and task characteristics of writing systems alone can account for lateralization differences in visual word recognition between different languages without assuming influence from left hemisphere (LH) lateralized language processes. We apply a hemispheric processing model of face recognition to visual word recognition; the model implements a theory of hemispheric asymmetry in perception that posits low spatial frequency biases in the right hemisphere and high spatial frequency (HSF) biases in the LH. We show two factors that can influence lateralization: (a) Visual similarity among words: The more similar the words in the lexicon look visually, the more HSF/LH processing is required to distinguish them, and (b) Requirement to decompose words into graphemes for grapheme-phoneme mapping: Alphabetic reading (involving grapheme-phoneme conversion) requires more HSF/LH processing than logographic reading (no grapheme-phoneme mapping). These factors may explain the difference in lateralization between English and Chinese orthographic processing.
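The low versus high spatial frequency split that this line of modeling builds on can be sketched with an ideal Fourier-domain filter. This is only an illustrative decomposition, not the model's actual filter bank, and the cutoff radius and toy stimulus below are arbitrary choices.

```python
import numpy as np

def split_spatial_frequencies(img, cutoff):
    """Split a 2-D image into low- and high-spatial-frequency components
    using a hard circular threshold in the (shifted) Fourier domain."""
    f = np.fft.fftshift(np.fft.fft2(img))
    rows, cols = img.shape
    y, x = np.ogrid[:rows, :cols]
    dist = np.sqrt((y - rows // 2) ** 2 + (x - cols // 2) ** 2)
    low_mask = dist <= cutoff  # frequencies within the cutoff radius
    lsf = np.real(np.fft.ifft2(np.fft.ifftshift(f * low_mask)))
    hsf = np.real(np.fft.ifft2(np.fft.ifftshift(f * ~low_mask)))
    return lsf, hsf

# A toy 32x32 "stimulus"; the two components always sum back to the original.
img = np.outer(np.sin(np.linspace(0, 8 * np.pi, 32)),
               np.cos(np.linspace(0, 2 * np.pi, 32)))
lsf, hsf = split_spatial_frequencies(img, cutoff=4)
```

Feeding the `lsf` component to one simulated hemisphere and the `hsf` component to the other is one simple way to realize the perceptual-asymmetry assumption described above.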
7.
Two experiments examined whether visual word access varies cross-linguistically by studying Spanish/English adult bilinguals, priming two-syllable CVCV words both within (Experiment 1) and across (Experiment 2) syllable boundaries in the two languages. Spanish readers accessed more first syllables on the basis of within-syllable primes than English readers did. In contrast, syllable-based primes helped English readers recognize more whole words than in Spanish, suggesting that experienced English readers activate a larger unit in the initial stages of word recognition. Primes spanning the syllable boundary affected readers of both languages in similar ways. In this priming context, primes that did not span the syllable boundary helped Spanish readers recognize more syllables, while English readers identified more words, further confirming the importance of the syllable in Spanish and suggesting a larger unit in English. Overall, the experiments provide evidence that readers use different units when accessing words in the two languages.