Similar Literature
20 similar documents found (search time: 15 ms)
1.
Three dual-task experiments investigated the capacity demands of phoneme selection in picture naming. On each trial, participants named a target picture (Task 1) and carried out a tone discrimination task (Task 2). To vary the time required for phoneme selection, the authors combined the targets with phonologically related or unrelated distractor pictures (Experiment 1) or words, which were clearly visible (Experiment 2) or masked (Experiment 3). When pictures or masked words were presented, the tone discrimination and picture naming latencies were shorter in the related condition than in the unrelated condition, which indicates that phoneme selection requires central processing capacity. However, when the distractor words were clearly visible, the facilitatory effect was confined to the picture naming latencies. This pattern arose because the visible related distractor words facilitated phoneme selection but slowed down speech monitoring processes that had to be completed before the response to the tone could be selected.  相似文献   

2.
In this research, we combine a cross-form word–picture visual masked priming procedure with an internal phoneme monitoring task to examine repetition priming effects. In this paradigm, participants have to respond to pictures whose names begin with a prespecified target phoneme. This task unambiguously requires retrieving the word-form of the target picture's name and implicitly orients participants' attention towards a phonological level of representation. The experiments were conducted in Spanish, whose highly transparent orthography presumably promotes fast and automatic phonological recoding of subliminal, masked visual word primes. Experiments 1 and 2 show that repetition primes speed up internal phoneme monitoring in the target, compared to primes beginning with a different phoneme from the target, or sharing only their first phoneme with the target. This suggests that repetition primes preactivate the phonological code of the entire target picture's name, thereby speeding up internal monitoring, which is necessarily based on such a code. To further qualify the nature of the phonological code underlying internal phoneme monitoring, a concurrent articulation task was used in Experiment 3. This task did not affect the repetition priming effect. We propose that internal phoneme monitoring is based on an abstract phonological code, prior to its translation into articulation.

3.
Five experiments were conducted to investigate how subsyllabic, syllabic, and prosodic information is processed in Cantonese monosyllabic word production. A picture-word interference task was used in which a target picture and a distractor word were presented simultaneously or sequentially. In the first 3 experiments with visually presented distractors, null effects on naming latencies were found when the distractor and the picture name shared the onset, the rhyme, the tone, or both the onset and tone. However, significant facilitation effects were obtained when the target and the distractor shared the rhyme + tone (Experiment 2), the segmental syllable (Experiment 3), or the syllable + tone (Experiment 3). Similar results were found in Experiments 4 and 5 with spoken rather than visual distractors. Moreover, a significant facilitation effect was observed in the rhyme-related condition in Experiment 5, and this effect was not affected by the degree of phonological overlap between the target and the distractor. These results are interpreted within an interactive model that allows feedback from the subsyllabic level to the lexical level during the phonological encoding stage in Cantonese word production.

4.
Research into the learning of Second Language (SL) vocabulary by beginning learners has indicated that the simultaneous presentation of the First and Second Language words results in blocking of the learning process by the familiar First Language (FL) word. Previous research also suggests that blocking by the First Language can be eliminated by bringing it in as informative feedback (either as a written or spoken word). Our experiments were designed to further extend this research. The use of a picture, either as feedback or simultaneously presented with its equivalent, along with the aural feedback and the conventional procedures were investigated in Experiment 1. Results revealed that pictures blocked the learning process less than the written FL word when both were presented with their SL referent. When used as feedback, however, pictures were not as good as the spoken FL word. Experiment 2 demonstrated that aural feedback was the best type of feedback when compared with the picture and written FL presentations, and that the picture feedback was better than the written feedback. Taken together, the results of these two studies showed that all forms of feedback overcame the problem of blocking created by simultaneous presentation of the FL and SL words, and that aural feedback was the most effective feedback procedure. It was suggested that the superiority of aural feedback was likely to be a consequence of the use of a different input channel to that of the visually presented written SL word.  相似文献   

5.
The influence of coarticulation cues on spoken word recognition is not yet well understood. This acoustic/phonetic variation may be processed early and recognized as sensory noise to be stripped away, or it may influence processing at a later prelexical stage. The present study used event-related potentials (ERPs) in a picture/spoken word matching paradigm to examine the temporal dynamics of stimuli systematically violating expectations at three levels: the entire word (lexical), the initial phoneme (phonemic), or the coarticulation cues contained in the initial phoneme (subphonemic). We found that both coarticulatory and phonemic mismatches resulted in increased negativity in the N280, interpreted as indexing prelexical processing of subphonemic information. Further analyses revealed that the point of uniqueness differentially modulated subsequent early or late negativity, depending on whether the first or the second segment matched expectations, respectively. Finally, it was found that word-level but not coarticulatory mismatches modulated the later-going N400 component, indicating that subphonemic information does not influence word-level selection provided no lexical change has occurred. The results indicate that acoustic/phonetic variation resulting from coarticulation is preserved in, and influences, spoken word recognition as it becomes available, particularly during prelexical processing.

6.
Four experiments tested whether and how initially planned but then abandoned speech can influence the production of a subsequent resumption. Participants named initial pictures, which were sometimes suddenly replaced by target pictures that were related in meaning or word form or were unrelated. They then had to stop and resume with the name of the target picture. Target picture naming latencies were measured separately for trials in which the initial speech was skipped, interrupted, or completed. Semantically related initial pictures helped the production of the target word, although the effect dissipated once the utterance of the initial picture name had been completed. In contrast, phonologically related initial pictures hindered the production of the target word, but only for trials in which the name of the initial picture had at least partly been uttered. This semantic facilitation and phonological interference did not depend on the time interval between the initial and target picture, which was either varied between 200 ms and 400 ms (Experiments 1-2) or was kept constant at 300 ms (Experiments 3-4). We discuss the implications of these results for models of speech self-monitoring and for models of problem-free word production.  相似文献   

7.
Two experiments investigated participants’ recognition memory for word content, while varying vocal characteristics, and for vocal characteristics alone. In Experiment 1, participants performed an auditory recognition task in which they identified whether a spoken word was “new”, “old” (repeated word, repeated voice), or “similar” (repeated word, new voice). Results showed that word recognition accuracy was lower for similar trials than for old trials. In Experiment 2, participants performed an auditory recognition task in which they identified whether a phrase was spoken in an old or a new voice, with repetitions occurring after a variable number of intervening stimuli. Results showed that recognition accuracy was lower when old voices spoke an alternate message than when they repeated the same message, and accuracy decreased as a function of the number of intervening items. Overall, the results suggest that speech recognition is better for lexical content than for vocal characteristics alone.

8.
We examined phonological priming in illiterate adults, using a cross-modal picture-word interference task. Participants named pictures while hearing distractor words at different Stimulus Onset Asynchronies (SOAs). Ex-illiterates and university students were also tested. We specifically assessed the ability of the three populations to use fine-grained, phonemic units in phonological encoding of spoken words. In the phoneme-related condition, auditory words shared only the first phoneme with the target name. All participants named pictures faster with phoneme-related word distractors than with unrelated word distractors. The results thus show that phonemic representations intervene in phonological output processes independently of literacy. However, the phonemic priming effect was observed at a later SOA in illiterates compared to both ex-illiterates and university students. This may be attributed to differences in speed of picture identification.  相似文献   

9.
Imagery encoding effects on source-monitoring errors were explored using the Deese-Roediger-McDermott paradigm in two experiments. While viewing thematically related lists embedded in mixed picture/word presentations, participants were asked to generate images of objects or words (Experiment 1) or to simply name the items (Experiment 2). An encoding task intended to induce spontaneous images served as a control for the explicit imagery instruction conditions (Experiment 1). On the picture/word source-monitoring tests, participants were much more likely to report "seeing" a picture of an item presented as a word than the converse, particularly when images were induced spontaneously. However, this picture misattribution error was reversed after generating images of words (Experiment 1) and was eliminated after simply labelling the items (Experiment 2). Thus source misattributions were sensitive to the processes giving rise to imagery experiences (spontaneous vs deliberate), the kinds of images generated (object vs word images), and the ways in which materials were presented (as pictures vs words).

10.
Two picture naming experiments, in which an initial picture was occasionally replaced with another (target) picture, were conducted to study the temporal coordination of abandoning one word and resuming with another word in speech production. In Experiment 1, participants abandoned saying the initial name, and resumed with the name of the target picture. This triggered both interrupted (e.g., Mush- …scooter) and completed (mushroom …scooter) productions of the initial name. We found that the time from beginning naming the initial picture to ending it was longer when the target picture was visually degraded than when it was intact. In Experiment 2, participants abandoned saying the initial name, but without resuming. There was no visual degradation effect, and thus the effect did not seem to be driven by detection of the stopping cue. These findings demonstrate that planning a new word can begin before the initial word is abandoned, so that both words can be processed concurrently.  相似文献   

11.
When speakers repair speech errors, they plan the repair in the context of an abandoned word (the error) that is usually similar in meaning or form. Two picture-naming experiments tested whether the error's lexical representations influence repair planning. Context pictures were sometimes replaced with target pictures; the picture names were related in meaning or form or were unrelated. The authors measured target picture-naming latencies separately for trials in which the context name was interrupted or completed. Interrupted trials showed semantic interference and phonological facilitation, whereas completed trials showed semantic facilitation and phonological interference. Thus, errors influence repair production. The authors explain the polarity of these effects in terms of the literature on context effects in word production.  相似文献   

12.
The effects of orthographic and phonological relatedness between distractor word and object name in a picture–word interference task were investigated. In Experiment 1 distractors were presented visually, and consistent with previous findings, priming effects arising from phonological overlap were modulated by the presence or absence of orthographic similarity between distractor and picture name. This pattern is interpreted as providing evidence for cascaded processing in visual word recognition. In Experiment 2 distractors were presented auditorily, and here priming was not affected by orthographic match or mismatch. These findings provide no evidence for orthographic effects in speech perception and production, contrary to a number of previous reports.  相似文献   

13.
Four experiments investigated acoustic-phonetic similarity in the mapping process between the speech signal and lexical representations (vertical similarity). Auditory stimuli were used where ambiguous initial phonemes rendered a phoneme sequence lexically ambiguous (perceptual-lexical ambiguities). A cross-modal priming paradigm (Experiments 1, 2, and 3) showed facilitation for targets related to both interpretations of the ambiguities, indicating multiple activation. Experiment 4 investigated individual differences and the role of sentence context in vertical similarity mapping. The results support a model where spoken word recognition proceeds via goodness-of-fit mapping between speech and lexical representations that is not influenced by sentence context.  相似文献   

14.
Manipulating inattentional blindness within and across sensory modalities   (Total citations: 1; self-citations: 0; citations by others: 1)
People often fail to consciously perceive visual events that are outside the focus of attention, a phenomenon referred to as inattentional blindness or IB (e.g., Mack & Rock, 1998). Here, we investigated IB for words within and across sensory modalities (visually and auditorily) in order to assess whether dividing attention across different senses has the same consequences as dividing attention within an individual sensory modality. Participants were asked to monitor a rapid stream of pictures or sounds presented concurrently with task-irrelevant words (spoken or written). A word recognition test was used to measure the processing of unattended words compared to word recognition levels after explicitly monitoring the word stream. We were able to produce high levels of IB for visually and auditorily presented words under unimodal conditions (Experiment 1) as well as under crossmodal conditions (Experiment 2). A further manipulation revealed, however, that IB is less prevalent when attention is divided across modalities than within the same modality (Experiment 3). These findings are explained in terms of the attentional load hypothesis and suggest that, contrary to some claims, attentional resources are to a certain extent shared across sensory modalities.

15.
In two experiments, eye movements were monitored as participants followed spoken instructions to click on and move pictures with a computer mouse. In Experiment 1, a referent picture (e.g., the picture of a bench) was presented along with three pictures, two of which had names that shared the same initial phonemes as the name of the referent (e.g., bed and bell). Participants were more likely to fixate the picture with the higher frequency name (bed) than the picture with the lower frequency name (bell). In Experiment 2, referent pictures were presented with three unrelated distractors. Fixation latencies to referents with high-frequency names were shorter than those to referents with low-frequency names. The proportions of fixations to the referents and distractors were analyzed in 33-ms time slices to provide fine-grained information about the time course of frequency effects. These analyses established that frequency affects the earliest moments of lexical access and ruled out a late-acting, decision-bias locus for frequency. Simulations using models in which frequency operates on resting-activation levels, on connection strengths, and as a postactivation decision bias provided further constraints on the locus of frequency effects.
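To make the three candidate loci compared in those simulations concrete, the sketch below is a toy localist model, not the authors' implementation: the lexicon, frequencies, and weight values are invented for illustration. It shows word frequency entering either as a resting-activation offset, as a scaling of phoneme-to-word connection strengths, or as a post-activation decision bias.

# Toy illustration only: words, frequencies, and weights are made up,
# not taken from the study above.

LEXICON = {
    # word: (phoneme sequence, illustrative log frequency)
    "bed": (("b", "e", "d"), 4.0),
    "bell": (("b", "e", "l"), 2.0),
    "bench": (("b", "e", "n", "ch"), 1.5),
}

def activation(word, heard, locus):
    """Toy activation for `word` given the phonemes heard so far.

    locus = "resting"    : frequency raises the baseline before any input
    locus = "connection" : frequency scales each phoneme-to-word connection
    locus = "bias"       : frequency is ignored here and applied at decision time
    """
    phonemes, log_freq = LEXICON[word]
    evidence = sum(1.0 for a, b in zip(phonemes, heard) if a == b)
    if locus == "resting":
        return 0.1 * log_freq + evidence
    if locus == "connection":
        return evidence * (1.0 + 0.1 * log_freq)
    return evidence

def recognize(heard, locus):
    """Score every word; for the decision-bias locus, add frequency only at the end."""
    scores = {}
    for word, (_, log_freq) in LEXICON.items():
        score = activation(word, heard, locus)
        if locus == "bias":
            score += 0.1 * log_freq  # post-activation decision bias
        scores[word] = score
    best = max(scores, key=scores.get)
    return best, scores

if __name__ == "__main__":
    # After hearing only /b e/, every locus favours the higher-frequency "bed",
    # but the loci differ in *when* frequency acts -- the property that the
    # fine-grained fixation time slices were used to tease apart.
    for locus in ("resting", "connection", "bias"):
        best, scores = recognize(("b", "e"), locus)
        print(locus, "->", best, {w: round(s, 2) for w, s in scores.items()})

At this single snapshot all three schemes yield the same preference; distinguishing them empirically requires the kind of fine-grained time-course data reported in the abstract above.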

16.
Two experiments studied perceptual comparisons with cues that varied in one of four ways (picture, sound, spoken word, or printed word) and with targets that were either pictures or environmental sounds. The basic question was whether modality or differences in format were factors that would influence picture and sound perception. Also of interest were cue effect differences when targets were presented on either the right or the left side. Students responded to a same-different reaction time task that entailed matching cue-target pairs to determine whether the successive stimulus events represented features drawn from the same basic item. Cue type influenced reaction times to pictures and environmental sounds, but the effects were qualified by response type and, with picture targets, by presentation side. These results provide some additional evidence of processing asymmetry when pictures are directed to either the right or the left hemisphere, as well as of some asymmetries in cross-modality cuing. Implications of these findings for theories of multisensory processing and models of object recognition are discussed.

17.
Emotional tone of voice (ETV) is essential for optimal verbal communication. Research has found that the impact of variation in nonlinguistic features of speech on spoken word recognition differs according to a time course. In the current study, we investigated whether intratalker variation in ETV follows the same time course in two long-term repetition priming experiments. We found that intratalker variability in ETVs affected reaction times to spoken words only when processing was relatively slow and difficult, not when processing was relatively fast and easy. These results provide evidence for the use of both abstract and episodic lexical representations for processing within-talker variability in ETV, depending on the time course of spoken word recognition.  相似文献   

18.
杨闰荣  韩玉昌  曹洪霞 《心理科学》2006,29(6):1444-1447
Event-related potentials (ERPs) were used to examine the activation of phonological and semantic information during speech production. Distractor words superimposed on the pictures stood in one of three relations to the target picture name: semantically related, phonologically identical, or unrelated in meaning, sound, and orthography. When participants performed a delayed naming task (Experiment 1), the waveforms for the semantically related and control conditions were more negative-going than those for the phonologically related condition, indicating that phonology clearly facilitates picture naming. When participants performed a delayed semantic judgment task on the same pictures (Experiment 2), there were no reliable differences among the waveforms for the semantically related, phonologically related, and control conditions, indicating no phonological facilitation during semantic retrieval. Taken together, the results of Experiments 1 and 2 are more consistent with the independent two-stage model.

19.
Evidence from dual-task performance indicates that speakers prefer not to select simultaneous responses in picture naming and another unrelated task, suggesting a response selection bottleneck in naming. In particular, when participants respond to tones with a manual response and name pictures with superimposed semantically related or unrelated distractor words, semantic interference in naming tends to be constant across stimulus onset asynchronies (SOAs) between the tone stimulus and the picture–word stimulus. In the present study, we examine whether semantic interference in picture naming depends on SOA in the case of a task choice (naming the picture vs reading the word of a picture–word stimulus) based on tones. This situation requires concurrent processing of the tone stimulus and the picture–word stimulus, but not a manual response to the tones. On each trial, participants either named a picture or read aloud a word depending on the pitch of a tone, which was presented simultaneously with picture–word onset or 350 ms or 1000 ms before picture–word onset. Semantic interference was present with tone pre-exposure, but absent when the tone and the picture–word stimulus were presented simultaneously. Against the background of the available studies, these results support an account according to which speakers tend to avoid concurrent response selection, but can engage in other types of concurrent processing, such as task choices.

20.
Four experiments were performed to evaluate the effect of semantic and nonsemantic verbal elaboration of the names of pictures on free recall, picture-name recognition, and picture recognition. Elaboration was manipulated by having subjects decide if the names of pictures contained two letters, rhymed with another word, or were appropriate in a sentence frame. Semantic elaboration of the names of pictures in sentence contexts requiring positive responses resulted in better name recall (Experiment 1) and name recognition (Experiments 1 and 2) than did nonsemantic elaborations (rhyme- and letter-identification tasks). However, the effects of elaborating picture names were greatly reduced for picture recognition (Experiments 3 and 4). The results are described in terms of elaborative processing after semantic access. Following initial semantic access, the names of pictures may be further elaborated. Semantic elaboration of the names of pictures typically leads to better retention than does nonsemantic elaboration. However, perceptual records about the appearance of objects may be relatively independent of orienting tasks that elaborate picture names.

