Similar Articles
20 similar articles found (search time: 46 ms)
1.
This study investigated the orthographic and phonological contribution of visually masked primes to reading aloud in Dutch. Although there is a relatively clear mapping between the spelling and sound of words in Dutch, words starting with the letter c are ambiguous as to whether they begin with the phoneme /s/ (e.g., citroen, “lemon”) or with the phoneme /k/ (e.g., complot, “conspiracy”). Therefore, using words of this type, one can tease apart the contributions of orthographic and phonological activation in reading aloud. Dutch participants read aloud bisyllabic c-initial target words, which were preceded by visually masked, bisyllabic prime words that either shared the initial phoneme with the target (phonologically related) or the first grapheme (orthographically related) or both (phonologically and orthographically related). Unrelated primes did not share the first segment with the target. Response latencies in the phonologically related conditions were shorter than those in the unrelated condition. However, primes that were orthographically related did not speed up responses. One may conclude that the nature of the onset effect in reading aloud is phonological and not orthographic.

2.
Five word-spotting experiments explored the role of consonantal and vocalic phonotactic cues in the segmentation of spoken Italian. The first set of experiments tested listeners’ sensitivity to phonotactic constraints cueing syllable boundaries. Participants were slower in spotting words in nonsense strings when target onsets were misaligned (e.g., lago in ri.blago) than when they were aligned (e.g., lago in rin.lago) with phonotactically determined syllabic boundaries. This effect held also for sequences that occur only word-medially (e.g., /tl/ in ri.tlago), and competition effects could not account for the disadvantage in the misaligned condition. Similarly, target detections were slower when their offsets were misaligned (e.g., città in cittàu.ba) than when they were aligned (e.g., città in città.oba) with a phonotactic syllabic boundary. The second set of experiments tested listeners’ sensitivity to phonotactic cues that specifically signal lexical (and not just syllable) boundaries. Results corroborate the role of syllabic information in speech segmentation and suggest that Italian listeners make little use of additional phonotactic information that specifically cues word boundaries.

3.
4.
In order to investigate whether syllable frequency effects in visual word recognition can be attributed to phonologically or orthographically defined syllables, we designed one experiment that allowed six critical comparisons. Whereas only a weak effect was obtained when both orthographic and phonological syllable frequency were conjointly manipulated in Comparison 1, robust effects for phonological and null effects for orthographic syllable frequency were found in Comparisons 2 and 3. Comparisons 4 and 5 showed that the syllable frequency effect does not result from a confound with the frequency of letter or phoneme clusters at the beginning of words. The syllable frequency effect was shown to diminish with increasing word frequency in Comparison 6. These results suggest that visually presented polysyllabic words are parsed into phonologically defined syllables during visual word recognition. Materials and links may be accessed at www.psychonomic.org/archive.

5.
Two databases of Spanish surface word forms are presented. Surface word forms are words considered as orthographically or phonologically specified without reference to their meaning or syntactic category. The databases are based on the productive written vocabulary of children between the ages of 6 and 10 years. Statistical and structural information is presented concerning surface word-form frequency, consonant-vowel (CV) structure, number of syllables, syllables, syllable CV structure, and subsyllabic units. LEX I was intended to aid in the study of reading processes. Entries were orthographic surface word forms; words were divided into their components following orthographic criteria. LEX II was designed for spoken language research. Accordingly, words were transcribed phonologically and phonological criteria were applied in extracting the internal units. Information about stress location was also provided. Together, LEX I and LEX II represent a useful tool for psycholinguists interested in the study of people acquiring Spanish as a first or foreign language and of Spanish-speaking populations in general.
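As an illustration of one of the structural fields listed above, here is a minimal sketch, not taken from LEX I itself, of how an orthographic CV skeleton could be derived for a Spanish surface word form; the vowel inventory and the function name `cv_structure` are assumptions made for this example.

```python
# Hypothetical illustration of an orthographic CV skeleton, in the spirit of
# the CV-structure field described for LEX I. The vowel set below (including
# accented vowels and ü) is an assumption, not the database specification.
# Digraphs (ch, ll) and consonantal vs. vocalic 'y' would need extra handling
# in a full treatment.
SPANISH_VOWELS = set("aeiouáéíóúü")

def cv_structure(word: str) -> str:
    """Map each letter of an orthographic word to C or V, e.g. 'casa' -> 'CVCV'."""
    return "".join("V" if ch in SPANISH_VOWELS else "C" for ch in word.lower())

print(cv_structure("casa"))   # CVCV
print(cv_structure("árbol"))  # VCCVC
```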

6.
The eye movements of Finnish first and second graders were monitored as they read sentences where polysyllabic words were either hyphenated at syllable boundaries, alternatingly coloured (every second syllable black, every second red), or had no explicit syllable boundary cues (e.g., hyphenated ta-lo vs. colour-alternated talo vs. plain talo = “house”). The results showed that hyphenation at syllable boundaries slows down reading of first and second graders even though syllabification by hyphens is very common in Finnish reading instruction, as all first-grade textbooks include hyphens at syllable boundaries. When hyphens were positioned within a syllable (t-alo vs. ta-lo), beginning readers were even more disrupted. Alternate colouring did not affect reading speed, no matter whether colours signalled syllable structure or not. The results show that beginning Finnish readers prefer to process polysyllabic words via syllables rather than letter by letter. At the same time they imply that hyphenation encourages sequential syllable processing, which slows down the reading of children who are already capable of parallel syllable processing or of recognising words directly via the whole-word route.

7.
Syllable structure influences hearing students' reading and spelling (e.g., Badecker, 1996; Caramazza & Miceli, 1990; Prinzmetal, Treiman, & Rho, 1986; Rapp, 1992; Treiman & Zukowski, 1988). This may seem unsurprising, since hearers closely associate written and spoken words. We analysed a corpus of spelling errors made by deaf students, who would have learned English orthography with an attenuated experience of speech. We found that the majority of their errors were phonologically implausible but orthographically legal. A tendency to replace uncommon letter sequences with common sequences could not account for this pattern, nor could residual influence from speech. Since syllabically defined constraints are required to keep sequences orthographically legal, the deaf data are marked by an influence of syllable structure. Two main conclusions follow: (1) Our results contribute to evidence that abstract constraints, not derived from peripheral speech or hearing mechanisms, govern the organization of linguistic knowledge; and (2) statistical redundancy could not explain the deaf results, and so does not offer a general alternative to suprasegmental structure.

8.
The assumptions tested were that the relative contribution of each hemisphere to reading alters with experience and that experience increases suppression of the simultaneous use of identical strategies by the non-dominant hemisphere. Males who were reading disabled and phonologically impaired, reading disabled and phonologically normal, or who had no reading disability were presented familiar words, orthographically correct pseudowords, and orthographically incorrect non-words for lexical decision. Accuracy and response times in all groups showed a shift from no asymmetry in processing non-words to a stable left hemisphere advantage and clear suppression of the right hemisphere in processing words. In the pseudoword condition, accuracy scores were higher when both hemispheres were free to engage, especially in those with a reading disability, and responses slowed in the phonologically impaired group, but not in the phonologically normal groups, when the right hemisphere was disengaged. As familiar words typically invoke lexical processing by both hemispheres, while pseudowords invoke lexical processing by the right hemisphere and non-lexical processing by the left hemisphere, and as non-lexical processing is weak in the phonologically impaired, the results support the assumptions that were tested.

9.
Repeated and orthographically similar words are vulnerable in rapid serial visual presentation (RSVP), as observed using the repetition blindness (RB) paradigm. Prior researchers have claimed that RB is increased for emotion words, but the mechanism for this was unclear. We argued that RB should be reduced for words with properties that capture attention, such as emotion words. In an orthographic repetition blindness design, our data showed that words with negative emotional valence had a report advantage when they were the second of two similar words (e.g., less RB occurred with HORSE curse than with HORSE purse). This renders emotion RB similar to the behaviour of emotion words in the attentional blink phenomenon. The findings demonstrate the neglected role of competition in conscious recognition of multiple words under conditions of brief display and masking.

10.
Four experiments were conducted to determine whether semantic feedback spreads to orthographic and/or phonological representations during visual word recognition and whether such feedback occurs automatically. Three types of prime-target word pairs were used within the mediated-priming paradigm: (1) homophonically mediated (e.g., frog-[toad]-towed), (2) orthographically mediated (e.g., frog-[toad]-told), and (3) associatively related (e.g., frog-toad). Using both brief (53 msec; Experiment 1) and long (413 msec; Experiment 3) prime exposure durations, significant facilitatory-priming effects were found in the response time data with orthographically, but not homophonically, mediated prime-target word pairs. When the prime exposure duration was shortened to 33 msec in Experiment 4, however, facilitatory priming was absent with both orthographically and homophonically mediated word pairs. In addition, with a brief (53-msec) prime exposure duration, direct-priming effects were found with associatively (e.g., frog-toad), orthographically (e.g., toad-told), and homophonically (e.g., toad-towed) related word pairs in Experiment 2. Taken together, these results indicate that following the initial activation of semantic representations, activation automatically feeds back to orthographic, but not phonological, representations during the early stages of word processing. These findings were discussed in the context of current accounts of visual word recognition.

11.
Neurobiological models of reading account for two ways in which orthography is converted to phonology: (1) familiar words, particularly those with exceptional spelling-sound mappings (e.g., shoe), access their whole-word lexical representations in the ventral visual stream, and (2) orthographically unfamiliar words, particularly those with regular spelling-sound mappings (i.e., pseudohomophones [PHs], which are orthographically novel but sound like real words; e.g., shue), are phonetically decoded via sublexical processing in the dorsal visual stream. The present study used a naming task in order to compare naming reaction time (RT) and response duration (RD) of exception and regular words to their PH counterparts. We replicated our earlier findings with words, and extended them to PH phonetic decoding by showing a similar effect on RT and RD of matched PHs. Given that the shorter RDs for exception words can be attributed to the benefit of whole-word processing in the orthographic word system, and the longer RTs for exception words to the conflict with phonetic decoding, our PH results demonstrate that phonetic decoding also involves top-down feedback from phonological lexical representations (e.g., activated by shue) to the orthographic representations of the corresponding correct word (e.g., shoe). Two computational models were tested for their ability to account for these effects: the DRC and the CDP+. The CDP+ fared best, as it was capable of simulating both the regularity and stimulus type effects on RT for both word and PH identification, although not their over-additive interaction. Our results demonstrate that both lexical reading and phonetic decoding elicit a regularity dissociation between RT and RD that provides important constraints to all models of reading, and that phonetic decoding results in top-down feedback that bolsters the orthographic lexical reading process.

12.
The possible-word constraint (PWC; Norris, McQueen, Cutler, & Butterfield, 1997) has been proposed as a language-universal segmentation principle: Lexical candidates are disfavoured if the resulting segmentation of continuous speech leads to vowelless residues in the input—for example, single consonants. Three word-spotting experiments investigated segmentation in Slovak, a language with single-consonant words and fixed stress. In Experiment 1, Slovak listeners detected real words such as ruka “hand” embedded in prepositional-consonant contexts (e.g., /gruka/) faster than those in nonprepositional-consonant contexts (e.g., /truka/) and slowest in syllable contexts (e.g., /dugruka/). The second experiment controlled for effects of stress. Responses were still fastest in prepositional-consonant contexts, but were now slowest in nonprepositional-consonant contexts. In Experiment 3, the lexical and syllabic status of the contexts was manipulated. Responses were again slowest in nonprepositional-consonant contexts but equally fast in prepositional-consonant, prepositional-vowel, and nonprepositional-vowel contexts. These results suggest that Slovak listeners use fixed stress and the PWC to segment speech, but that single consonants that can be words have a special status in Slovak segmentation. Knowledge about what constitutes a phonologically acceptable word in a given language therefore determines whether vowelless stretches of speech are or are not treated as acceptable parts of the lexical parse.
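To make the segmentation principle concrete, here is a minimal sketch, not the original Norris et al. model, of the core PWC check: a lexical candidate is penalised when the residue it leaves behind contains no vowel. The Slovak vowel inventory and the function name are simplifying assumptions for this illustration.

```python
# Hypothetical sketch of the core possible-word constraint (PWC) check:
# a residue left between a candidate word and a known boundary is penalised
# if it contains no vowel. Simplified vowel set; real Slovak also has
# syllabic r/l and single-consonant prepositions, which is part of why
# vowelless residues get a special status in Slovak segmentation.
SLOVAK_VOWELS = set("aáäeéiíoóuúyý")

def violates_pwc(residue: str) -> bool:
    """True if a non-empty residue contains no vowel (a vowelless stretch)."""
    return bool(residue) and not any(ch in SLOVAK_VOWELS for ch in residue)

# 'ruka' spotted in /truka/ leaves the residue 't' -> disfavoured by the core PWC.
print(violates_pwc("t"))    # True
# 'ruka' spotted in /dugruka/ leaves 'dug' -> contains a vowel, so no PWC penalty.
print(violates_pwc("dug"))  # False
```

As the abstract notes, a full account of the Slovak data would also have to treat prepositional single consonants (as in /gruka/) as possible words despite being vowelless.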

13.
When asked to ‘find three forks’, adult speakers of English use the noun ‘fork’ to identify units for counting. However, when number words (e.g. three) and quantifiers (e.g. more, every) are used with unfamiliar words (‘Give me three blickets’), noun‐specific conceptual criteria are unavailable for picking out units. This poses a problem for young children learning language, who begin to use quantifiers and number words by age 2, despite knowing a relatively small number of nouns. Without knowing how individual nouns pick out units of quantification – e.g. what counts as a blicket – how could children decide whether there are three blickets or four? Three experiments suggest that children might solve this problem by assigning ‘default units’ of quantification to number words, quantifiers, and number morphology. When shown objects that are broken into arbitrary pieces, 4‐year‐olds in Experiment 1 treated pieces as units when counting, interpreting quantifiers, and when using singular–plural morphology. Experiment 2 found that although children treat object‐hood as sufficient for quantification, it is not necessary. Also sufficient for individuation are the criteria provided by known nouns. When two nameable things were glued together (e.g. two cups), children counted the glued things as two. However, when two arbitrary pieces of an object were put together (e.g. two parts of a ball), children counted them as one, even if they had previously counted the pieces as two. Experiment 3 found that when the pieces of broken things were nameable (e.g. wheels of a bicycle), 4‐year‐olds did not include them in counts of whole objects (e.g. bicycles). We discuss the role of default units in early language acquisition, their origin in acquisition, and how children eventually acquire an adult semantics identifying units of quantification.

14.
The extent to which orthographic and phonological processes are available during the initial moments of word recognition within each hemisphere is underspecified, particularly for the right hemisphere. Few studies have investigated whether each hemisphere uses orthography and phonology under constraints that restrict the viewing time of words and reduce overt phonological demands. The current study used backward masking in the divided visual field paradigm to explore hemisphere differences in the availability of orthographic and phonological word recognition processes. SOAs of 20 ms and 60 ms were used to track the time course of how these processes develop during pre-lexical moments of word recognition. Nonword masks varied in similarity to the target words such that there were four types: orthographically and phonologically similar, orthographically but not phonologically similar, phonologically but not orthographically similar, and unrelated. The results showed that the left hemisphere has access to both orthography and phonology early in the word recognition process. With more time to process the stimulus, the left hemisphere is able to use phonology, which benefits word recognition to a larger extent than orthography. The right hemisphere also demonstrates access to both orthography and phonology in the initial moments of word recognition; however, orthographic similarity improves word recognition to a greater extent than phonological similarity.

15.
A discrete-trials color naming (Stroop) paradigm was used to examine activation along orthographic and phonological dimensions in visual and auditory word recognition. Subjects were presented with a prime word, either auditorily or visually, followed 200 msec later by a target word printed in a color. The orthographic and phonological similarity of prime-target pairs varied. Color naming latencies were longer when the primes and targets were orthographically and/or phonologically similar than when they were unrelated. This result held for both prime presentation modes. The results suggest that word recognition entails activation of multiple codes and priming of orthographically and phonologically similar words.

16.
The orthographic analogy effect for rime-based analogies has been debated, and theoretical arguments relating to the role of rhyme awareness in reading development have been questioned. This study assessed whether children beginning to read are able to make genuine orthographic analogies based on rime similarity. A non-word version of the clue word task was used to compare children’s performance at reading orthographically and phonologically similar target items and phonologically similar items only. They were also assessed on their ability to make analogies between the beginnings and endings of words. The results were consistent with the suggestion that orthographic analogy use is available to beginning readers as a reading strategy, and that rime-based analogies are easier to make than analogies at the beginning of words. However, rhyme awareness was found to account for variance in orthographic analogy use between the beginnings of words, but not for rime-based analogies. The implications of this for the theoretical role of rhyme awareness in reading development are discussed.

17.
Previous studies have suggested that French listeners experience difficulties when they have to discriminate between words that differ in stress. A limitation is that these studies used stress patterns that do not respect the rules of stress placement in French. In this study, three stress patterns were tested on bisyllabic words: (1) the legal stress pattern in French, namely words that were unstressed compared to words that bore primary stress on their last syllable (/?u?i/-/?u’?i/), (2) an illegal stress location pattern, namely words that bore primary stress on their first syllable compared to words that bore primary stress on their last syllable (/’?u?i/-/?u’?i/), and (3) an illegal pattern that involves an unstressed word, namely words that were unstressed compared to words that bore primary stress on their first syllable (/?u?i/-/’?u?i/). In an ABX task, participants heard three items produced by three different speakers and had to indicate whether X was identical to A or B. The stimuli A and B varied in stress (/?u’?i/-/?u?i/-/?u’?i/), in one phoneme (/?u’?i/-/?u’???/-/?u’?i/), or in both stress and one phoneme (/?u’?i/-/?u???/-/?u’?i/). The results showed that French listeners are fully able to discriminate between two words differing in stress provided that the stress pattern included an unstressed word. More importantly, they suggest that the French listeners’ difficulties mainly reside in locating stress within words.

18.
This experiment investigates whether the influence of spelling-to-sound correspondence on lexical decision may be due to the visual characteristics of irregular words rather than to irregularities in their phonological correspondence. Lexical decision times to three types of word were measured: words with both irregular orthography and spelling-to-sound correspondence (e.g., GHOUL, CHAOS), words with regular orthography but irregular spelling-to-sound correspondence (e.g., GROSS, LEVER), and words regular in both respects (e.g., SHACK, PLUG). Items were presented in upper- and lowercase in order to examine the influence of “word shape” on any irregularity effects obtained. The results showed that irregular words with regular orthographies were identified more slowly than regular words in both upper- and lowercase. Words that are both orthographically and phonologically irregular were identified much more slowly with lowercase presentation. However, with uppercase, the lexical decision times for these items did not differ significantly from those of regular words. These data indicate that previous demonstrations of the regularity effect in lexical decision were not due to the unusual visual characteristics of some of the words sampled. In addition, the data emphasize the role of word shape in word recognition and also suggest that words with unique orthographies may be processed differently from those whose orthography is similar to other words.

19.
The syllable and the morpheme are known to be important linguistic variables, but is such information involved in the early stages of word recognition? Syllable‐morpheme information was manipulated in the early stage of word naming by means of the fast priming paradigm. The letters in the prime were printed in a mixture of lower‐ and upper‐case letters. The change from lower to upper case occurred either at a syllable‐morpheme boundary, before the boundary, or after it (e.g., reTAKE, rETAKE, or retAKE), creating either an intact pair or a broken one. The target was always in lower case (e.g., retake). The results of Experiments 1 and 2 revealed that intact syllable and morpheme information facilitated word naming at a short stimulus onset asynchrony (SOA; below awareness) but not at a long SOA, suggesting that the use of such information is automatic. A second set of experiments attempted to determine if syllable information alone could facilitate word processing. In Experiments 3 and 4, monomorphemic words were divided either at, before, or after the syllable boundary (e.g., rePEL, rEPEL, or repEL). The primes were all pseudomorphemic in the sense that the initial syllables could appear as a morpheme in other words (e.g., restate) but were not morphemic in the target words (e.g., repel). The second syllable was neither morphemic nor pseudomorphemic. Using the same SOAs as in Experiments 1 and 2, intact syllables were found to be facilitative at the short SOA, but not at the long SOA. Thus, the syllable plays a role in an early stage of word recognition. Whether morphemes that are not syllables are facilitative is still to be determined in this paradigm.

20.
This study was inspired by the rise in television targeting toddlers and preverbal infants (e.g., Teletubbies, Baby Mozart). Overall, we investigated if very young children who are in the early stages of language acquisition can learn vocabulary quickly (fast map) from television programs. Using a fast mapping paradigm, this study examined a group (n = 48) of toddlers (15–24 months) and their ability to learn novel words. Utilizing a repeated measures design, we compared children's ability to learn various novel words in 5 different conditions. These included the presentation and identification of a novel word by an adult speaker via live presentation when the toddler was attending (i.e., joint reference), an adult via live presentation when the toddler was not attending, an adult speaker on television, and an edited clip from a children's television program (Teletubbies). Overall, the toddlers were most successful in learning novel words in the joint reference condition. They were significantly less successful in the children's program condition. Furthermore, there was a significant interaction between age and condition on children's performance. Both younger (15–21 months) and older (22–24 months) participants identified the target objects when they were taught the novel word by an adult speaker; however, it appeared that children under the age of 22 months did not identify the target item when they were taught the novel word via the television program.
