Similar documents
20 similar documents found (search time: 31 ms)
1.
Previous research has suggested that the initial portion of a word activates similar sounding words that compete for recognition. Other research has shown that the number of similar sounding words that are activated influences the speed and accuracy of recognition. Words with few neighbors are processed more quickly and accurately than words with many neighbors. The influences of the number of lexical competitors in the initial part of the word were examined in a shadowing and a lexical-decision task. Target words with few neighbors that share the initial phoneme were responded to more quickly than target words with many neighbors that share the initial phoneme. The implications of onset-density effects for models of spoken-word recognition are discussed.
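The neighborhood measures described above can be sketched in a few lines: a substitution neighbor differs from the target by exactly one phoneme, and an onset neighbor additionally shares the target's first phoneme. This is a minimal sketch; the toy lexicon of phoneme tuples below is hypothetical, not the study's stimuli, and real neighborhood counts also typically include one-phoneme additions and deletions.

```python
def substitution_neighbors(word, lexicon):
    """Words of the same length differing in exactly one phoneme."""
    return [
        w for w in lexicon
        if len(w) == len(word)
        and sum(a != b for a, b in zip(w, word)) == 1
    ]

def onset_neighbors(word, lexicon):
    """Substitution neighbors that also share the initial phoneme."""
    return [w for w in substitution_neighbors(word, lexicon)
            if w[0] == word[0]]

# Hypothetical mini-lexicon; words are tuples of phoneme symbols.
lexicon = [("k", "ae", "t"), ("b", "ae", "t"), ("k", "ah", "t"),
           ("k", "ae", "p"), ("h", "ae", "t")]
target = ("k", "ae", "t")
print(len(substitution_neighbors(target, lexicon)))  # 4
print(len(onset_neighbors(target, lexicon)))         # 2
```

A target with many onset neighbors (here two of four) would fall in the "dense onset" condition of the study's design.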

2.
Network science provides a new way to look at old questions in cognitive science by examining the structure of a complex system, and how that structure might influence processing. In the context of psycholinguistics, clustering coefficient-a common measure in network science-refers to the extent to which phonological neighbors of a target word are also neighbors of each other. The influence of the clustering coefficient on spoken word production was examined in a corpus of speech errors and a picture-naming task. Speech errors tended to occur in words with many interconnected neighbors (i.e., higher clustering coefficient). Also, pictures representing words with many interconnected neighbors (i.e., high clustering coefficient) were named more slowly than pictures representing words with few interconnected neighbors (i.e., low clustering coefficient). These findings suggest that the structure of the lexicon influences the process of lexical access during spoken word production.
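The local clustering coefficient used here has a standard definition: for a word with k neighbors, C = 2E / (k(k-1)), where E is the number of neighbor pairs that are themselves neighbors of each other. A minimal sketch, assuming one-phoneme-substitution neighbors and a toy lexicon written as letter strings for readability:

```python
def clustering_coefficient(word, lexicon):
    """Local clustering coefficient of `word` in the lexical network:
    fraction of its neighbor pairs that are themselves neighbors."""
    def is_neighbor(a, b):
        return len(a) == len(b) and sum(x != y for x, y in zip(a, b)) == 1
    nbrs = [w for w in lexicon if is_neighbor(word, w)]
    k = len(nbrs)
    if k < 2:
        return 0.0
    links = sum(is_neighbor(a, b)
                for i, a in enumerate(nbrs) for b in nbrs[i + 1:])
    return 2 * links / (k * (k - 1))

# "cat" has four neighbors; only the pair bat/hat are neighbors of
# each other, so C = 2*1 / (4*3) = 1/6.
print(round(clustering_coefficient("cat", ["bat", "hat", "cot", "cap"]), 3))  # 0.167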

3.
Three recognition memory experiments examined phonemic similarity and false recognition under conditions of divided attention. The manipulation was presumed to have little effect on automatic, perceptual influences of memory. Prior research demonstrated that false recognition of a test word (e.g., discrepancy) was higher if the study list included a nonword derived from the future test word by changing a phoneme near the end of the item (e.g., discrepan/l/y) relative to an early phoneme change (e.g., /l/iscrepancy). The difference has been attributed to automatic, implicit activation of test words during prerecognition processing of related nonwords. Three experiments demonstrated that the late-change condition also contributed to higher false recognition rates with divided attention at encoding. Dividing attention disrupted recognition memory of studied words in Experiments 1 and 3. Results are discussed in terms of their relevance for an interpretation emphasizing the automatic, implicit activation of candidate words that occurs in the course of identifying spoken words and nonwords.

4.
The present study investigated whether the balance of neighborhood distribution (i.e., the way orthographic neighbors are spread across letter positions) influences visual word recognition. Three word conditions were compared. Word neighbors were either concentrated on one letter position (e.g., nasse/basse-lasse-tasse-masse) or were unequally spread across two letter positions (e.g., pelle/celle-selle-telle-perle), or were equally spread across two letter positions (e.g., litre/titre-vitre-libre-livre). Predictions based on the interactive activation model [McClelland & Rumelhart (1981). Psychological Review, 88, 375–401] were generated by running simulations and were confirmed in the lexical decision task. Data showed that words were more rapidly identified when they had spread neighbors rather than concentrated neighbors. Furthermore, within the set of spread neighbors, words were more rapidly recognized when they had equally rather than unequally spread neighbors. The findings are explained in terms of activation and inhibition processes in the interactive activation framework.
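The three conditions can be made concrete by counting, for each letter position, how many of a word's neighbors mismatch it there. A minimal sketch using the study's own French examples (each orthographic neighbor differs at exactly one position):

```python
from collections import Counter

def neighbor_positions(word, neighbors):
    """Count, per letter position, how many neighbors differ there."""
    counts = Counter()
    for n in neighbors:
        diffs = [i for i, (a, b) in enumerate(zip(word, n)) if a != b]
        counts[diffs[0]] += 1
    return dict(counts)

print(neighbor_positions("nasse", ["basse", "lasse", "tasse", "masse"]))  # {0: 4}    concentrated
print(neighbor_positions("pelle", ["celle", "selle", "telle", "perle"]))  # {0: 3, 2: 1}  unequally spread
print(neighbor_positions("litre", ["titre", "vitre", "libre", "livre"]))  # {0: 2, 2: 2}  equally spread
```

The reported ordering of lexical decision times (equally spread < unequally spread < concentrated) tracks how evenly these per-position counts are distributed.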

5.
In 4 chronometric experiments, influences of spoken word planning on speech recognition were examined. Participants were shown pictures while hearing a tone or a spoken word presented shortly after picture onset. When a spoken word was presented, participants indicated whether it contained a prespecified phoneme. When the tone was presented, they indicated whether the picture name contained the phoneme (Experiment 1) or they named the picture (Experiment 2). Phoneme monitoring latencies for the spoken words were shorter when the picture name contained the prespecified phoneme compared with when it did not. Priming of phoneme monitoring was also obtained when the phoneme was part of spoken nonwords (Experiment 3). However, no priming of phoneme monitoring was obtained when the pictures required no response in the experiment, regardless of monitoring latency (Experiment 4). These results provide evidence that an internal phonological pathway runs from spoken word planning to speech recognition and that active phonological encoding is a precondition for engaging the pathway.

6.
The present experiments were designed to determine whether memory for the voice in which a word is spoken is retained in a memory system that is separate from episodic memory or, instead, whether episodic memory represents both word and voice information. These two positions were evaluated by assessing the effects of study-to-test changes in voice on recognition memory after a variety of encoding tasks that varied in processing requirements. In three experiments, the subjects studied a list of words produced by six voices. The voice in which the word was spoken during a subsequent explicit recognition test was either the same as or different from the voice used in the study phase. The results showed that word recognition was affected by changes in voice after each encoding condition and that the magnitude of the voice effect was unaffected by the type of encoding task. The results suggest that spoken words are represented in long-term memory as episodic traces that contain talker-specific perceptual information.

7.
In visual word recognition, words with many phonological neighbours are processed more rapidly than are those with few neighbours. The research reported here tested whether the distribution of phonological neighbours across phoneme positions influences lexical decisions. The results indicate that participants responded more rapidly to words where all phoneme positions can be changed to form a neighbour than they did to those where only a limited number of phoneme positions can be changed to form a neighbour. It is argued that this distribution effect arises because of differences between the two groups of words in how they overlap with their neighbours.

8.
Previous research has demonstrated that the number and frequency of lexical neighbors affects the perception of individual sounds within a nonword in a phoneme identification task. In the present research, the issue of what items should be considered part of a word's neighborhood was explored. These experiments, in which both lexical decision and phoneme identification tasks were used, demonstrate that lexical neighborhood effects are not limited to words that match the target item syllable initially (the cohort). Words that differ from a target only in their first phoneme influence the process of lexical access. This argues against the notion that word onsets serve a unique or special purpose in word recognition.

9.
Previous studies have reported that semantic richness facilitates visual word recognition (see, e.g., Buchanan, Westbury, & Burgess, 2001; Pexman, Holyk, & Monfils, 2003). We compared three semantic richness measures to determine their abilities to account for response time and error variance in lexical decision and semantic categorization tasks: number of semantic neighbors (NSN), the number of words appearing in similar lexical contexts; number of features (NF), the number of features listed for a word's referent; and contextual dispersion (CD), the distribution of a word's occurrences across content areas. NF and CD accounted for unique variance in both tasks, whereas NSN accounted for unique variance only in the lexical decision task. Moreover, each measure showed a different pattern of relative contribution across the tasks. Our results provide new clues about how words are represented and suggest that word recognition models need to accommodate each of these influences.

10.
Many theories of spoken word recognition assume that lexical items are stored in memory as abstract representations. However, recent research (e.g., Goldinger, 1996) has suggested that representations of spoken words in memory are veridical exemplars that encode specific information, such as characteristics of the talker’s voice. If representations are exemplar based, effects of stimulus variation such as that arising from changes in the identity of the talker may have an effect on identification of and memory for spoken words. This prediction was examined for an implicit and explicit task (lexical decision and recognition, respectively). Comparable amounts of repetition priming in lexical decision were found for repeated words, regardless of whether the repetitions were in the same or in different voices. However, reaction times in the recognition task were faster if the repetition was in the same voice. These results suggest a role for both abstract and specific representations in models of spoken word recognition.

11.
The orthographic neighborhood size (N) of a word—the number of words that can be formed from that word by replacing one letter with another—has been found to have facilitatory effects in word naming. The orthographic neighborhood hypothesis attributes this facilitation to interactive effects. A phonographic neighborhood hypothesis, in contrast, attributes the effect to lexical print-sound conversion. According to the phonographic neighborhood hypothesis, phonographic neighbors (words differing in one letter and one phoneme, e.g., stove and stone) should facilitate naming, and other orthographic neighbors (e.g., stove and shove) should not. The predictions of these two hypotheses are tested. Unique facilitatory phonographic N effects were found in four sets of word naming mega-study data, along with an absence of facilitatory orthographic N effects. These results implicate print-sound conversion—based on consistent phonology—in neighborhood effects rather than word-letter feedback.
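The orthographic/phonographic distinction can be sketched directly from the abstract's examples: a phonographic neighbor differs in one letter and one phoneme, while an orthographic-only neighbor differs in one letter but not in exactly one phoneme. The mini-lexicon of pronunciations below is illustrative (approximate phoneme symbols), not the study's corpus.

```python
# Hypothetical pronunciations as phoneme tuples (illustrative only).
PRON = {
    "stove": ("s", "t", "ow", "v"),
    "stone": ("s", "t", "ow", "n"),
    "shove": ("sh", "ah", "v"),
}

def one_sub_apart(a, b):
    """True if sequences have equal length and differ at one slot."""
    return len(a) == len(b) and sum(x != y for x, y in zip(a, b)) == 1

def classify_neighbor(word, other):
    """'phonographic' = one letter AND one phoneme apart;
    'orthographic' = one letter apart only; None = not a neighbor."""
    if not one_sub_apart(word, other):
        return None
    if one_sub_apart(PRON[word], PRON[other]):
        return "phonographic"
    return "orthographic"

print(classify_neighbor("stove", "stone"))  # phonographic
print(classify_neighbor("stove", "shove"))  # orthographic
```

Under the phonographic hypothesis, only the first kind of neighbor should speed naming.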

12.
In three experiments, we examined priming effects where primes were formed by transposing the first and last phoneme of tri-phonemic target words (e.g., /byt/ as a prime for /tyb/). Auditory lexical decisions were found not to be sensitive to this transposed-phoneme priming manipulation in long-term priming (Experiment 1), with primes and targets presented in two separated blocks of stimuli and with unrelated primes used as control condition (/mul/-/tyb/), while a long-term repetition priming effect was observed (/tyb/-/tyb/). However, a clear transposed-phoneme priming effect was found in two short-term priming experiments (Experiments 2 and 3), with primes and targets presented in close temporal succession. The transposed-phoneme priming effect was found when unrelated prime-target pairs (/mul/-/tyb/) were used as control and, more importantly, when prime-target pairs sharing the medial vowel (/pys/-/tyb/) served as control condition, thus indicating that the effect is not due to vocalic overlap. Finally, in Experiment 3, a transposed-phoneme priming effect was found when primes sharing the medial vowel plus one consonant in an incorrect position with the targets (/byl/-/tyb/) served as control condition, and this condition did not differ significantly from the vowel-only condition. Altogether, these results provide further evidence for a role for position-independent phonemes in spoken word recognition, such that a phoneme at a given position in a word also provides evidence for the presence of words that contain that phoneme at a different position.

13.
Visual word recognition is facilitated by the presence of orthographic neighbors that mismatch the target word by a single letter substitution. However, researchers typically do not consider where neighbors mismatch the target. In light of evidence that some letter positions are more informative than others, we investigate whether the influence of orthographic neighbors differs across letter positions. To do so, we quantify the number of enemies at each letter position (how many neighbors mismatch the target word at that position). Analyses of reaction time data from a visual word naming task indicate that the influence of enemies differs across letter positions, with the negative impacts of enemies being most pronounced at letter positions where readers have low prior uncertainty about which letters they will encounter (i.e., positions with low entropy). To understand the computational mechanisms that give rise to such positional entropy effects, we introduce a new computational model, VOISeR (Visual Orthographic Input Serial Reader), which receives orthographic inputs in parallel and produces an over-time sequence of phonemes as output. VOISeR produces a similar pattern of results as in the human data, suggesting that positional entropy effects may emerge even when letters are not sampled serially. Finally, we demonstrate that these effects also emerge in human subjects' data from a lexical decision task, illustrating the generalizability of positional entropy effects across visual word recognition paradigms. Taken together, such work suggests that research into orthographic neighbor effects in visual word recognition should also consider differences between letter positions.
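The two quantities the abstract relies on, enemies per letter position and positional entropy, are both simple to compute. A minimal sketch over a toy lexicon (the word list is hypothetical; the study's own predictors are estimated from a full corpus):

```python
import math
from collections import Counter

def enemies_by_position(word, lexicon):
    """Per letter position: how many single-substitution neighbors
    mismatch the word at that position."""
    counts = {i: 0 for i in range(len(word))}
    for w in lexicon:
        if len(w) == len(word) and w != word:
            diffs = [i for i, (a, b) in enumerate(zip(word, w)) if a != b]
            if len(diffs) == 1:
                counts[diffs[0]] += 1
    return counts

def positional_entropy(position, lexicon, length):
    """Shannon entropy (bits) of the letter distribution at one
    position, over all words of the given length."""
    letters = Counter(w[position] for w in lexicon if len(w) == length)
    total = sum(letters.values())
    return -sum((c / total) * math.log2(c / total)
                for c in letters.values())

lexicon = ["cat", "bat", "hat", "cot", "can"]
print(enemies_by_position("cat", lexicon))  # {0: 2, 1: 1, 2: 1}
print(round(positional_entropy(0, lexicon, 3), 3))
```

The abstract's claim is then a statistical one: enemy counts hurt most at positions where this entropy is low.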

14.
This paper presents an analysis of the distribution of phonological similarity relations among monosyllabic spoken words in English. It differs from classical analyses of phonological neighborhood density (e.g., Luce & Pisoni, 1998) by assuming that not all phonological neighbors are equal. Rather, it is assumed that the phonological lexicon has psycholinguistic structure. Accordingly, in addition to considering the number of phonological neighbors for any given word, it becomes important to consider the nature of these neighbors. If one type of neighbor is more dominant, neighborhood density effects may reflect levels of segmental representation other than the phoneme, particularly prior to literacy. Statistical analyses of the nature of phonological neighborhoods in terms of rime neighbors (e.g., hat/cat), consonant neighbors (e.g., hat/hit), and lead neighbors (e.g., hat/ham) were thus performed for all monosyllabic words in the Celex corpus (4,086 words). Our results show that most phonological neighbors are rime neighbors (e.g., hat/cat) in English. Similar patterns were found when a corpus of words for which age-of-acquisition ratings were available was analyzed. The resultant database can be used as a tool for controlling and selecting stimuli when the role of lexical neighborhoods in phonological development and speech processing is examined.
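For CVC words, the three neighbor types above partition the single-substitution neighbors by which segment changes. A minimal sketch, using tuples of illustrative phoneme symbols (a full analysis, as in the paper, would run this over a corpus such as Celex):

```python
def classify_cvc_neighbor(word, neighbor):
    """Classify a one-phoneme-substitution neighbor of a CVC word:
    'rime' = shares vowel + coda (hat/cat);
    'consonant' = shares both consonants (hat/hit);
    'lead' = shares onset + vowel (hat/ham)."""
    (o1, v1, c1), (o2, v2, c2) = word, neighbor
    if v1 == v2 and c1 == c2 and o1 != o2:
        return "rime"
    if o1 == o2 and c1 == c2 and v1 != v2:
        return "consonant"
    if o1 == o2 and v1 == v2 and c1 != c2:
        return "lead"
    return None  # not a single-substitution neighbor

hat = ("h", "ae", "t")
print(classify_cvc_neighbor(hat, ("k", "ae", "t")))  # rime
print(classify_cvc_neighbor(hat, ("h", "ih", "t")))  # consonant
print(classify_cvc_neighbor(hat, ("h", "ae", "m")))  # lead
```

The paper's central statistic is then the relative frequency of each category across the lexicon, with rime neighbors dominating in English.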

15.
The aim of the present study was to investigate the effects of aging on both spoken and written word production by using analogous tasks. To do so, a phonological neighbor generation task (Experiment 1) and an orthographic neighbor generation task (Experiment 2) were designed. In both tasks, young and older participants were given a word and had to generate as many words as they could think of by changing one phoneme in the target word (Experiment 1) or one letter in the target word (Experiment 2). The data of the two experiments were consistent, showing that the older adults generated fewer lexical neighbors and made more errors than the young adults. For both groups, the number of words produced, as well as their lexical frequency, decreased as a function of time. These data strongly support the assumption of a symmetrical age-related decline in the transmission of activation within the phonological and orthographic systems.

16.
Computational modeling and eye‐tracking were used to investigate how phonological and semantic information interact to influence the time course of spoken word recognition. We extended our recent models (Chen & Mirman, 2012; Mirman, Britt, & Chen, 2013) to account for new evidence that competition among phonological neighbors influences activation of semantically related concepts during spoken word recognition (Apfelbaum, Blumstein, & McMurray, 2011). The model made a novel prediction: Semantic input modulates the effect of phonological neighbors on target word processing, producing an approximately inverted‐U‐shaped pattern with a high phonological density advantage at an intermediate level of semantic input—in contrast to the typical disadvantage for high phonological density words in spoken word recognition. This prediction was confirmed with a new analysis of the Apfelbaum et al. data and in a visual world paradigm experiment with preview duration serving as a manipulation of strength of semantic input. These results are consistent with our previous claim that strongly active neighbors produce net inhibitory effects and weakly active neighbors produce net facilitative effects.

17.
Mark Yates (2010). Cognition, 115(1), 197–201.
The least supported phoneme refers to the phoneme position within a word with which the fewest phonological neighbors overlap. Recently, it has been argued that the number of neighbors coinciding with the least supported phoneme is a critical determinant of pronunciation latencies. The current research tested this claim by comparing naming latencies to words that differed in terms of the number of neighbors overlapping with their least supported phoneme. The results revealed that words where many neighbors overlapped were named more rapidly than those where few neighbors overlapped. These results are explained using the dual-route cascaded model of reading aloud.
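On the definition above, a substitution neighbor "supports" every phoneme position it matches, so the least supported phoneme is the position matched by the fewest neighbors. A minimal sketch under that reading, with a hypothetical mini-lexicon of phoneme tuples:

```python
def least_supported_phoneme(word, lexicon):
    """Return (position, count) for the phoneme position overlapped
    by the fewest single-substitution neighbors of `word`."""
    support = {i: 0 for i in range(len(word))}
    for w in lexicon:
        if len(w) != len(word) or w == word:
            continue
        diffs = [i for i, (a, b) in enumerate(zip(word, w)) if a != b]
        if len(diffs) == 1:
            # this neighbor supports every position except its mismatch
            for i in range(len(word)):
                if i != diffs[0]:
                    support[i] += 1
    return min(support.items(), key=lambda kv: kv[1])

lexicon = [("b", "ae", "t"), ("h", "ae", "t"), ("k", "ah", "t")]
print(least_supported_phoneme(("k", "ae", "t"), lexicon))  # (0, 1)
```

Here the initial phoneme is least supported: only one of the three neighbors shares it.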

18.
Infants in the early stages of word learning have difficulty learning lexical neighbors (i.e. word pairs that differ by a single phoneme), despite their ability to discriminate the same contrast in a purely auditory task. While prior work has focused on top‐down explanations for this failure (e.g. task demands, lexical competition), none has examined if bottom‐up acoustic‐phonetic factors play a role. We hypothesized that lexical neighbor learning could be improved by incorporating greater acoustic variability in the words being learned, as this may buttress still‐developing phonetic categories, and help infants identify the relevant contrastive dimension. Infants were exposed to pictures accompanied by labels spoken by either a single or multiple speakers. At test, infants in the single‐speaker condition failed to recognize the difference between the two words, while infants who heard multiple speakers discriminated between them.

19.
Three experiments examined the role of three distinctive perceptual factors in recognition and recall memory. Using a subject-paced presentation rate, the first two experiments (recognition and recall) examined (1) the number of phonological-to-orthographic neighbors, (2) phonological-to-orthographic consistency, and (3) orthographic-to-phonological consistency. The third experiment (recall) reexamined the number of phonological-to-orthographic neighbors, using an experimenter-paced presentation rate of 2 sec per item. In both recognition and recall memory tasks, the number of phonological-to-orthographic neighbors influenced memory performance, whereas the two types of consistency did not. The results indicate that having fewer phonological-to-orthographic neighbors (i.e., having distinct mappings between orthography and phonology, and between phonology and orthography, e.g., pulp) relieves words from interference in episodic memory tests for such words. Furthermore, words that are indistinct in terms of these mappings (e.g., tuck) are subject to interference from words with similar representations (e.g., luck, buck, stuck), and this weakens the memory trace for a particular word.

20.
Cross-modal semantic priming and phoneme monitoring experiments investigated processing of word-final nonreleased stop consonants (e.g., kit may be pronounced /kit/ or /ki/), which are common phonological variants in American English. Both voiced /d/ and voiceless /t/ segments were presented in release and no-release versions. A cross-modal semantic priming task (Experiment 1) showed comparable priming for /d/ and /t/ versions. A second set of stimuli ending in /s/ were presented as intact, missing /s/, or with a mismatching final segment and showed significant but reduced priming for the latter two conditions. Experiment 2 showed that phoneme monitoring reaction time for release and no-release words and onset mismatching stimuli (derived pseudowords) increased as acoustic-phonetic similarity to the intended word decreased. The results suggest that spoken word recognition does not require special mechanisms for processing no-release variants. Rather, the results can be accounted for by means of existing assumptions concerning probabilistic activation that is based on partial activation.


Copyright © Beijing Qinyun Technology Development Co., Ltd.  京ICP备09084417号