Related Articles
1.
Variability in talker identity and speaking rate, commonly referred to as indexical variation, has demonstrable effects on the speed and accuracy of spoken word recognition. The present study examines the time course of indexical specificity effects to evaluate the hypothesis that such effects occur relatively late in the perceptual processing of spoken words. In 3 long-term repetition priming experiments, the authors examined reaction times to targets that were primed by stimuli that matched or mismatched on the indexical variable of interest (either talker identity or speaking rate). Each experiment was designed to manipulate the speed with which participants processed the stimuli. The results demonstrate that indexical variability affects participants' perception of spoken words only when processing is relatively slow and effortful.

2.
A series of experiments was conducted to investigate the effects of stimulus variability on the memory representations for spoken words. A serial recall task was used to study the effects of changes in speaking rate, talker variability, and overall amplitude on the initial encoding, rehearsal, and recall of lists of spoken words. Interstimulus interval (ISI) was manipulated to determine the time course and nature of processing. The results indicated that at short ISIs, variations in both talker and speaking rate imposed a processing cost that was reflected in poorer serial recall for the primacy portion of word lists. At longer ISIs, however, variation in talker characteristics resulted in improved recall in initial list positions, whereas variation in speaking rate had no effect on recall performance. Amplitude variability had no effect on serial recall across all ISIs. Taken together, these results suggest that encoding of stimulus dimensions such as talker characteristics, speaking rate, and overall amplitude may be the result of distinct perceptual operations. The effects of these sources of stimulus variability in speech are discussed with regard to perceptual saliency, processing demands, and memory representation for spoken words.

3.
Variability in talker identity, one type of indexical variation, has demonstrable effects on the speed and accuracy of spoken word recognition. Furthermore, neuropsychological evidence suggests that indexical and linguistic information may be represented and processed differently in the 2 cerebral hemispheres, a pattern consistent with findings from the visual domain. For example, in visual word recognition, changes in font affect processing differently depending on which hemisphere initially processes the input. The present study examined whether such hemispheric differences exist in spoken language as well. In 4 long-term repetition-priming experiments, the authors examined responses to targets that were primed by stimuli that matched or mismatched in talker identity. The results demonstrate that indexical variability can affect participants' perception of spoken words differently in the 2 hemispheres.

4.
When the speech input is presented under suboptimal conditions, spoken word recognition generally incurs processing costs. The current study indicates that some of the processing demands imposed by listening to difficult speech can be mitigated by feedback from semantic knowledge. A set of lexical decision experiments examined how foreign-accented speech and word duration affect access to semantic knowledge during spoken word recognition. Results indicate that when listeners process accented speech, their reliance on semantic information increases. Speech rate did not influence semantic access, except when unusually slow accented speech was presented. These findings support interactive activation models of spoken word recognition in which attention is modulated based on the demands of the speech input.

5.
The influence of coarticulation cues on spoken word recognition is not yet well understood. This acoustic/phonetic variation may be processed early and treated as sensory noise to be stripped away, or it may influence processing at a later prelexical stage. The present study used event-related potentials (ERPs) in a picture/spoken-word matching paradigm to examine the temporal dynamics of processing stimuli that systematically violated expectations at three levels: the entire word (lexical), the initial phoneme (phonemic), or the coarticulation cues contained in the initial phoneme (subphonemic). We found that both coarticulatory and phonemic mismatches resulted in increased negativity in the N280, interpreted as indexing prelexical processing of subphonemic information. Further analyses revealed that the point of uniqueness differentially modulated subsequent early or late negativity, depending on whether the first or the second segment matched expectations, respectively. Finally, word-level but not coarticulatory mismatches modulated the later N400 component, indicating that subphonemic information does not influence word-level selection provided no lexical change has occurred. The results indicate that acoustic/phonetic variation resulting from coarticulation is preserved in, and influences, spoken word recognition as it becomes available, particularly during prelexical processing.

6.
Although the word-frequency effect is one of the most established findings in spoken-word recognition, the precise processing locus of this effect is still a topic of debate. In this study, we used event-related potentials (ERPs) to track the time course of the word-frequency effect. In addition, the neighborhood density effect, which is known to reflect mechanisms involved in word identification, was also examined. The ERP data showed a clear frequency effect as early as 350 ms from word onset on the P350, followed by a later effect at word offset on the late N400. A neighborhood density effect was also found at an early stage of spoken-word processing on the PMN, and at word offset on the late N400. Overall, our ERP differences for word frequency suggest that frequency affects the core processes of word identification, starting from the initial phase of lexical activation and including target word selection. They thus rule out any interpretation of the word-frequency effect that is limited to a purely decisional locus after word identification has been completed.

7.
The time course of spoken word recognition depends largely on the frequencies of a word and its competitors, or neighbors (similar-sounding words). However, variability in natural lexicons makes systematic analysis of frequency and neighbor similarity difficult. Artificial lexicons were used to achieve precise control over word frequency and phonological similarity. Eye tracking provided time course measures of lexical activation and competition (during spoken instructions to perform visually guided tasks) both during and after word learning, as a function of word frequency, neighbor type, and neighbor frequency. Apparent shifts from holistic to incremental competitor effects were observed in adults and neural network simulations, suggesting such shifts reflect general properties of learning rather than changes in the nature of lexical representations.

8.
Previous research indicates that mental representations of word meanings are distributed along both semantic and syntactic dimensions such that nouns and verbs are relatively distinct from one another. Two experiments examined the effect of representational distance between meanings on recognition of ambiguous spoken words by comparing recognition of unambiguous words, noun–verb homonyms, and noun–noun homonyms. In Experiment 1, auditory lexical decision was fastest for unambiguous words, slower for noun–verb homonyms, and slowest for noun–noun homonyms. In Experiment 2, response times for matching spoken words to pictures followed the same pattern and eye fixation time courses revealed converging, gradual time course differences between conditions. These results indicate greater competition between meanings of ambiguous words when the meanings are from the same grammatical class (noun–noun homonyms) than when they are from different grammatical classes (noun–verb homonyms).

9.
Thirty years of research has uncovered the broad principles that characterize spoken word processing across listeners. However, there have been few systematic investigations of individual differences. Such an investigation could help refine models of word recognition by indicating which processing parameters are likely to vary, and could also have important implications for work on language impairment. The present study begins to fill this gap by relating individual differences in overall language ability to variation in online word recognition processes. Using the visual world paradigm, we evaluated online spoken word recognition in adolescents who varied in both basic language abilities and non-verbal cognitive abilities. Eye movements to target, cohort, and rhyme objects were monitored during spoken word recognition as an index of lexical activation. Adolescents with poor language skills showed fewer looks to the target and more fixations to the cohort and rhyme competitors. These results were compared to a number of variants of the TRACE model (McClelland & Elman, 1986) that were constructed to test a range of theoretical approaches to language impairment: impairments at the sensory and phonological levels, reduced vocabulary size, and generalized slowing. None of the existing approaches was strongly supported; variation in lexical decay offered the best fit. Thus, basic word recognition processes like lexical decay may offer a new way to characterize processing differences in language impairment.
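One way to see why a lexical-decay parameter can capture this pattern is with a toy interactive-activation simulation. The sketch below is only an illustration of the general mechanism, not the TRACE variants tested in the study; the update rule, parameter values, and input schedule are all assumptions. With a small decay rate a cohort competitor stays active long after its bottom-up support ends, whereas a larger decay rate lets competitor activation die away, one way of producing the lingering competitor fixations described above.

```python
# Toy interactive-activation loop: a target word and a cohort competitor
# receive bottom-up input, inhibit one another, and decay toward rest.
# Parameter values and the input schedule are illustrative assumptions,
# not the TRACE parameterization used in the study.

def step(activation, net_input, decay, rest=0.0, ceiling=1.0):
    """One update: decay toward rest, then add (bounded) net input."""
    decayed = activation - decay * (activation - rest)
    return max(rest, min(ceiling, decayed + net_input * (ceiling - decayed)))

def simulate(decay, n_steps=40):
    target, cohort = 0.0, 0.0
    for t in range(n_steps):
        target_input = 0.10                      # target matches the input throughout
        cohort_input = 0.10 if t < 15 else 0.0   # cohort matches only the early segments
        inhibition = 0.05                        # simple lateral inhibition
        target, cohort = (
            step(target, target_input - inhibition * cohort, decay),
            step(cohort, cohort_input - inhibition * target, decay),
        )
    return target, cohort

for d in (0.02, 0.05, 0.10):
    t, c = simulate(decay=d)
    print(f"decay={d:.2f}  final target={t:.2f}  residual cohort={c:.2f}")
```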

10.
Computational modeling and eye-tracking were used to investigate how phonological and semantic information interact to influence the time course of spoken word recognition. We extended our recent models (Chen & Mirman, 2012; Mirman, Britt, & Chen, 2013) to account for new evidence that competition among phonological neighbors influences activation of semantically related concepts during spoken word recognition (Apfelbaum, Blumstein, & McMurray, 2011). The model made a novel prediction: Semantic input modulates the effect of phonological neighbors on target word processing, producing an approximately inverted-U-shaped pattern with a high phonological density advantage at an intermediate level of semantic input, in contrast to the typical disadvantage for high phonological density words in spoken word recognition. This prediction was confirmed with a new analysis of the Apfelbaum et al. data and in a visual world paradigm experiment with preview duration serving as a manipulation of strength of semantic input. These results are consistent with our previous claim that strongly active neighbors produce net inhibitory effects and weakly active neighbors produce net facilitative effects.
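The closing claim, that weakly active neighbors help a target while strongly active neighbors hurt it, can be made concrete with a toy function. This is not the Chen and Mirman model; the specific form below (facilitation growing linearly with neighbor activation, lateral inhibition growing quadratically) is an assumption chosen only to show how a neighbor's net contribution can flip from positive to negative as its activation rises.

```python
# Toy illustration of a sign-flipping neighbor effect (an assumption, not
# the published model): facilitation scales linearly with neighbor
# activation, lateral inhibition grows quadratically, so weakly active
# neighbors help the target and strongly active neighbors hurt it.

def net_neighbor_effect(activation, facilitation=0.6, inhibition=1.0):
    return facilitation * activation - inhibition * activation ** 2

for a in (0.1, 0.3, 0.6, 0.9):
    print(f"neighbor activation {a:.1f}: net effect on target {net_neighbor_effect(a):+.3f}")
```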

11.
Understanding the circumstances under which talker (and other types of) variability affects language perception represents an important area of research in the field of spoken word recognition. Previous work has demonstrated that talker effects are more likely when processing is relatively slow (McLennan & Luce, 2005). Given that listeners may take longer to process foreign-accented speech than native-accented speech (Munro & Derwing, 1995, Language and Speech, 38, 289–306), talker effects should be more likely when listeners are presented with words spoken in a foreign accent than when they are presented with those same words spoken in a native accent. The results of two experiments, conducted in two different countries and in two different languages, are consistent with this prediction.

12.
The neighborhood activation model (NAM; P. A. Luce & Pisoni, 1998) of spoken word recognition was applied to the problem of predicting accuracy of visual spoken word identification. One hundred fifty-three spoken consonant-vowel-consonant words were identified by a group of 12 college-educated adults with normal hearing and a group of 12 college-educated deaf adults. In both groups, item identification accuracy was correlated with the computed NAM output values. Analysis of subsets of the stimulus set demonstrated that when stimulus intelligibility was controlled, words with fewer neighbors were easier to identify than words with many neighbors. However, when neighborhood density was controlled, variation in segmental intelligibility was minimally related to identification accuracy. The present study provides evidence of a common spoken word recognition system for both auditory and visual speech that retains sensitivity to the phonetic properties of the input.
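For readers unfamiliar with what a "computed NAM output value" is, the sketch below implements a frequency-weighted neighborhood probability rule in the spirit of Luce and Pisoni (1998): the target's segmental probability, weighted by its frequency, is divided by the summed frequency-weighted values of the target and its neighbors. The toy segment probabilities, the log-frequency weighting, and the example word values are illustrative assumptions; the published model derives segment probabilities from confusion-matrix data.

```python
# A minimal sketch of a frequency-weighted neighborhood probability rule
# in the spirit of the neighborhood activation model (Luce & Pisoni, 1998).
# Probabilities and frequencies below are illustrative assumptions.
import math

def frequency_weighted(prob, freq):
    """Segmental probability of a word weighted by its log frequency."""
    return prob * math.log(freq + 1)

def nam_output(target_prob, target_freq, neighbors):
    """neighbors: list of (segmental_probability, frequency) pairs."""
    target = frequency_weighted(target_prob, target_freq)
    competition = sum(frequency_weighted(p, f) for p, f in neighbors)
    return target / (target + competition)

# A word with few, low-frequency neighbors...
sparse = nam_output(0.30, 200, [(0.10, 50), (0.08, 30)])
# ...versus an equally intelligible word in a dense, high-frequency neighborhood.
dense = nam_output(0.30, 200, [(0.12, 400), (0.10, 900), (0.11, 700), (0.09, 500)])
print(f"sparse neighborhood: {sparse:.2f}   dense neighborhood: {dense:.2f}")
```

Holding target intelligibility constant while varying the neighborhood, as in the subset analyses described above, the sparse-neighborhood word receives the higher predicted identification score.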

13.
Singh, L. (2008). Cognition, 106(2), 833–870.
Although infants begin to encode and track novel words in fluent speech by 7.5 months, their ability to recognize words is somewhat limited at this stage. In particular, when the surface form of a word is altered, by changing the gender or affective prosody of the speaker, infants begin to falter at spoken word recognition. Given that natural speech is replete with variability, only some of which determines the meaning of a word, it remains unclear how infants might ever overcome the effects of surface variability without appealing to meaning. In the current set of experiments, the consequences of high and low variability are examined in preverbal infants. The source of variability, vocal affect, is a common property of infant-directed speech with which young learners have to contend. Across a series of four experiments, infants' abilities to recognize repeated encounters of words, as well as to reject similar-sounding words, are investigated in the context of high and low affective variation. Results point to positive consequences of affective variation, both in creating generalizable memory representations for words and in establishing phonologically precise memories for words. Conversely, low variability appears to degrade word recognition on both fronts, compromising infants' abilities to generalize across different affective forms of a word and to detect similar-sounding items. Findings are discussed in the context of principles of categorization that may potentiate the early growth of a lexicon.

14.
郑志伟, 黄贤军, 张钦 (2013). 心理学报 (Acta Psychologica Sinica), 45(4), 427–437.
Using a prosody/lexical interference paradigm and a delayed matching task, two ERP experiments examined whether, and how, emotional prosody modulates the recognition of emotion words in spoken Mandarin. In Experiment 1, the different types of emotional prosody were presented in separate blocks; the ERP results showed that, compared with emotion words whose valence was congruent with the emotional prosody, emotion words whose valence was incongruent with the prosody elicited more negative-going P200, N300, and N400 components. In Experiment 2, the different types of emotional prosody were presented in random order, and the same valence-congruency effect persisted. The results indicate that emotional prosody can modulate the recognition of emotion words, chiefly by facilitating both the phonological encoding and the semantic processing of those words.

15.
Auditory recognition without identification
When visual recognition test items are unidentifiable (through fragmentation, for example), participants can discriminate between unidentifiable items that were presented recently and those that were not. The present study extends this recognition-without-identification phenomenon to the auditory modality. In several experiments, participants listened to words and were then presented with spoken recognition test items that were embedded in white noise. Participants attempted to identify each spoken word through the white noise, then rated the likelihood that the word was studied. Auditory recognition without identification was found: Participants discriminated between studied and unstudied words in the absence of an ability to identify them through white noise, even when the voice changed from male to female and when the study list was presented visually. The effect was also found when identification was hindered through the isolation of particular phonemes, suggesting that phoneme information may be present in memory traces for recently spoken words.

16.
This study used event-related potentials (ERPs) to examine whether we employ the same normalisation mechanisms when processing words spoken with a regional accent or a foreign accent. Our results showed that the Phonological Mapping Negativity (PMN) following the onset of the final word of sentences spoken with an unfamiliar regional accent was greater than for those produced in the listener's own accent, whilst the PMN for foreign-accented speech was reduced. Foreign accents also resulted in a reduction in N400 amplitude when compared to both unfamiliar regional accents and the listener's own accent, with no significant difference found between the N400 for the regional and home accents. These results suggest that variation related to a regional accent is normalised at the earliest stages of spoken word recognition, requiring less top-down lexical intervention than foreign-accented speech.

17.
The effects of perceptual adjustments to voice information on the perception of isolated spoken words were examined. In two experiments, spoken target words were preceded or followed within a trial by a neutral word spoken in the same voice as the target or in a different voice. Overall, words were reproduced more accurately on trials on which the voice of the neutral word matched the voice of the spoken target word, suggesting that perceptual adjustments to voice interfere with word processing. This result, however, was mediated by selective attention to voice. The results provide further evidence of a close processing relationship between perceptual adjustments to voice and spoken word recognition.

18.
Many theories of spoken word recognition assume that lexical items are stored in memory as abstract representations. However, recent research (e.g., Goldinger, 1996) has suggested that representations of spoken words in memory are veridical exemplars that encode specific information, such as characteristics of the talker's voice. If representations are exemplar based, effects of stimulus variation such as that arising from changes in the identity of the talker may have an effect on identification of and memory for spoken words. This prediction was examined for an implicit and explicit task (lexical decision and recognition, respectively). Comparable amounts of repetition priming in lexical decision were found for repeated words, regardless of whether the repetitions were in the same or in different voices. However, reaction times in the recognition task were faster if the repetition was in the same voice. These results suggest a role for both abstract and specific representations in models of spoken word recognition.

19.
Previous work has demonstrated that talker-specific representations affect spoken word recognition relatively late during processing. However, participants in these studies were listening to unfamiliar talkers. In the present research, we used a long-term repetition-priming paradigm and a speeded-shadowing task and presented listeners with famous talkers. In Experiment 1, half the words were spoken by Barack Obama, and half by Hillary Clinton. Reaction times (RTs) to repeated words were shorter than those to unprimed words only when repeated by the same talker. However, in Experiment 2, using nonfamous talkers, RTs to repeated words were shorter than those to unprimed words both when repeated by the same talker and when repeated by a different talker. Taken together, the results demonstrate that talker-specific details can affect the perception of spoken words relatively early during processing when words are spoken by famous talkers.

20.
In 4 chronometric experiments, influences of spoken word planning on speech recognition were examined. Participants were shown pictures while hearing a tone or a spoken word presented shortly after picture onset. When a spoken word was presented, participants indicated whether it contained a prespecified phoneme. When the tone was presented, they indicated whether the picture name contained the phoneme (Experiment 1) or they named the picture (Experiment 2). Phoneme monitoring latencies for the spoken words were shorter when the picture name contained the prespecified phoneme compared with when it did not. Priming of phoneme monitoring was also obtained when the phoneme was part of spoken nonwords (Experiment 3). However, no priming of phoneme monitoring was obtained when the pictures required no response in the experiment, regardless of monitoring latency (Experiment 4). These results provide evidence that an internal phonological pathway runs from spoken word planning to speech recognition and that active phonological encoding is a precondition for engaging the pathway.
