Similar Articles
1.
The perception of consonant clusters that are phonotactically illegal word-initially in English (e.g., /tl/, /sr/) was investigated to determine whether listeners’ phonological knowledge of the language influences speech processing. Experiment 1 examined whether the phonotactic context effect (Massaro & Cohen, 1983), a bias toward hearing illegal sequences (e.g., /tl/) as legal (e.g., /tr/), is more likely due to knowledge of the legal phoneme combinations in English or to a frequency effect. In Experiment 2, Experiment 1 was repeated with the clusters occurring word-medially to assess whether phonotactic rules of syllabification modulate the phonotactic effect. Experiment 3 examined whether vowel epenthesis, another phonological process, might also affect listeners’ perception of illegal sequences as legal by biasing them to hear a vowel between the consonants of the cluster (e.g., /talee/). Results suggest that knowledge of the phonotactically permissible sequences in English can affect phoneme processing in multiple ways.

2.
This paper examined conceptual versus perceptual priming in the identification of incomplete pictures using a short-term priming paradigm, in which information that may be useful in identifying a fragmented target is presented just prior to the target’s presentation. The target was a picture that slowly and continuously became complete, and the participants were required to press a key as soon as they knew what it was. Each target was preceded by a visual prime. The nature of this prime varied from very conceptual (e.g., the name of the picture’s category) to very perceptual (e.g., a similar-shaped pictorial prime from a different category). Primes also included those that combined perceptual and conceptual information (e.g., names or images of the target picture). Across three experiments, conceptual primes were effective while the purely perceptual primes were not. Accordingly, we conclude that pictures in this type of task are identified primarily by conceptual processing, with perceptual processing contributing relatively little.

3.
We propose that speech comprehension involves the activation of token representations of the phonological forms of current lexical hypotheses, separately from the ongoing construction of a conceptual interpretation of the current utterance. In a series of cross-modal priming experiments, facilitation of lexical decision responses to visual target words (e.g., time) was found for targets that were semantic associates of auditory prime words (e.g., date) when the primes were isolated words, but not when the same primes appeared in sentence contexts. Identity priming (e.g., faster lexical decisions to visual date after spoken date than after an unrelated prime) appeared, however, both with isolated primes and with primes in prosodically neutral sentences. Associative priming in sentence contexts only emerged when sentence prosody involved contrastive accents, or when sentences were terminated immediately after the prime. Associative priming is therefore not an automatic consequence of speech processing. In no experiment was there associative priming from embedded words (e.g., sedate-time), but there was inhibitory identity priming (e.g., sedate-date) from embedded primes in sentence contexts. Speech comprehension therefore appears to involve separate distinct activation both of token phonological word representations and of conceptual word representations. Furthermore, both of these types of representation are distinct from the long-term memory representations of word form and meaning.

4.
In a cross-modal priming experiment, visual targets (e.g., RENARD, 'fox') were auditorily primed by either an intact form [see text] 'the fox' or a reduced form [see text] 'the fox' of the word. When schwa deletion gave rise to an initial cluster that respected the phonotactic constraints of French (e.g., [lapluz] 'the lawn', in which /pl/ is a legal word beginning in French), there was a processing cost for the targets primed by the reduced form of the word compared to intact primes (e.g., [see text] 'the lawn'). However, when schwa deletion produced an initial cluster that violated the phonotactic constraints of French (e.g., [see text], where /Rn/ is not allowed as a word beginning), there was no penalty for targets primed by reduced compared to intact forms of the word. Assuming that listeners change their phonemic percept when confronted with phonotactically illegal sequences (Segui, Frauenfelder, & Hallé, 2001), phonotactic constraints may help to restore the deleted schwa in sequences like le renard [see text] in French.

5.
We assessed the early encoding of consonant and vowel information in the reading of English, using the fast priming paradigm. With 30-msec prime durations, gaze durations on target words were shorter when preceded by high-frequency consonant-same primes (which shared consonant information with the target word; e.g., lake-like) than when preceded by vowel-same primes (which shared vowel information with the target word; e.g., line-like), but there were no priming effects for low-frequency primes. With 45-msec prime durations, however, there was no effect of prime frequency, and gaze durations on target words were shortened equally when they were preceded by consonant-same primes and vowel-same primes, as compared with control primes (e.g., late-like). The results suggest that the processing of consonants is more rapid than that of vowels, providing further evidence for the distinction between consonant and vowel processing in the reading of English.

6.
This event-related potential (ERP) study examined the impact of phonological variation resulting from a vowel merger on phoneme perception. The perception of the /e/-/ε/ contrast, which does not exist in Southern French-speaking regions and is in the process of merging in Northern French-speaking regions, was compared to the /ø/-/y/ contrast, which is stable in all French-speaking regions. French-speaking participants from Switzerland, for whom the /e/-/ε/ contrast is preserved but who are exposed to different regional variants, had to perform a same-different task. They first heard four phonemically identical but acoustically different syllables (e.g., /be/-/be/-/be/-/be/), and then heard the test syllable, which was either phonemically identical to (/be/) or phonemically different from (/bε/) the preceding context stimuli. The results showed that the unstable /e/-/ε/ contrast only induced a mismatch negativity (MMN), whereas the /ø/-/y/ contrast elicited both an MMN and electrophysiological differences on the P200. These findings were in line with the behavioral results, in which responses were slower and more error-prone in the /e/-/ε/ deviant condition than in the /ø/-/y/ deviant condition. Together these findings suggest that the regional variability in the speech input to which listeners are exposed affects the perception of speech sounds in their own accent.

7.
In five experiments, we examined lexical competition effects using the phonological priming paradigm in a shadowing task. Experiments 1A and 1B replicate and extend Slowiaczek and Hamburger's (1992) observation that inhibitory effects occur when the prime and the target share the first three phonemes (e.g., /bRiz/-/bRik/) but not when they share the first two phonemes (e.g., /bRɛz/-/bRik/). This observation suggests that lexical competition depends on the length of the phonological match between the prime and the target. However, Experiment 2 revealed that an overlap of two phonemes is sufficient to cause an inhibitory effect provided that the primes mismatched the targets only on the last phoneme (e.g., /b[symbol: see text]l/-/b[symbol: see text]t/). Conversely, with a three-phoneme overlap, no inhibition was observed in Experiment 3 when the primes mismatched the targets on the last two phonemes (e.g., /bagɛt/-/bagaʒ/). In Experiment 4, an inhibitory effect was again observed when the primes mismatched the targets on the last phoneme but not when they mismatched the targets on the last two phonemes, when the time between the offset of overlapping segments in the primes and the onset of overlapping segments in the targets was controlled for. The data thus indicate that what essentially determines prime-target competition effects in word-form priming is the number of mismatching phonemes.

8.
In three experiments, we examined priming effects where primes were formed by transposing the first and last phoneme of tri‐phonemic target words (e.g., /byt/ as a prime for /tyb/). Auditory lexical decisions were found not to be sensitive to this transposed‐phoneme priming manipulation in long‐term priming (Experiment 1), with primes and targets presented in two separate blocks of stimuli and with unrelated primes used as the control condition (/mul/‐/tyb/), while a long‐term repetition priming effect was observed (/tyb/‐/tyb/). However, a clear transposed‐phoneme priming effect was found in two short‐term priming experiments (Experiments 2 and 3), with primes and targets presented in close temporal succession. The transposed‐phoneme priming effect was found when unrelated prime‐target pairs (/mul/‐/tyb/) were used as the control and, more importantly, when prime‐target pairs sharing the medial vowel (/pys/‐/tyb/) served as the control condition, thus indicating that the effect is not due to vocalic overlap. Finally, in Experiment 3, a transposed‐phoneme priming effect was found when primes sharing the medial vowel plus one consonant in an incorrect position with the targets (/byl/‐/tyb/) served as the control condition, and this condition did not differ significantly from the vowel‐only condition. Altogether, these results provide further evidence for a role for position‐independent phonemes in spoken word recognition, such that a phoneme at a given position in a word also provides evidence for the presence of words that contain that phoneme at a different position.

9.
We report four picture-naming experiments in which the pictures were preceded by visually presented word primes. The primes could either be semantically related to the picture (e.g., "boat" - TRAIN: co-ordinate pairs) or associatively related (e.g., "nest" - BIRD: associated pairs). Performance under these conditions was always compared to performance under unrelated conditions (e.g., "flower" - CAT). In order to distinguish clearly the first two kinds of prime, we chose our materials so that (a) the words in the co-ordinate pairs were not verbally associated, and (b) the associate pairs were not co-ordinates. Results show that the two related conditions behaved in different ways depending on the stimulus-onset asynchrony (SOA) separating word and picture appearance, but not on how long the primes were presented. When presented with a brief SOA (114 ms, Experiment 1), the co-ordinate primes produced an interference effect, but the associated primes did not differ significantly from the unrelated primes. Conversely, with a longer SOA (234 ms, Experiment 2) the co-ordinate primes produced no effect, whereas a significant facilitation effect was observed for associated primes, independent of the duration of presentation of the primes. This difference is interpreted in the context of current models of speech production as an argument for the existence, at an automatic processing level, of two distinguishable kinds of meaning relatedness.

10.
We examined whether processing fluency contributes to associative recognition of unitized pre-experimental associations. In Experiments 1A and 1B, we minimized perceptual fluency by presenting each word of a pair on separate screens at both study and test, yet the compound word (CW) effect (i.e., hit and false-alarm rates greater for CW pairs, with no difference in discrimination) was not reduced. In Experiments 2A and 2B, conceptual fluency was examined by comparing transparent (e.g., hand bag) and opaque (e.g., rag time) CW pairs in lexical decision and associative recognition tasks. Lexical decision was faster for transparent CWs (Experiment 2A), but in associative recognition the CW effect did not differ by CW pair type (Experiment 2B). In Experiments 3A and 3B, we examined whether priming that increases processing fluency would influence the CW effect. In Experiment 3A, CW and non-compound word pairs were preceded by matched and mismatched primes at test in an associative recognition task. In Experiment 3B, only transparent and opaque CW pairs were presented. Results showed that presenting matched versus mismatched primes at test did not influence the CW effect. The CW effect in yes-no associative recognition is due to reliance on the enhanced familiarity of unitized CW pairs.

11.
The present study used event-related potentials (ERPs) to examine the time course of orthographic and phonological priming in the masked priming paradigm. Participants monitored visual target words for occasional animal names, and ERPs to nonanimal critical items were recorded. These critical items were preceded by different types of primes: Orthographic priming was examined using transposed-letter (TL) primes (e.g., barin-BRAIN) and their controls (e.g., bosin-BRAIN); phonological priming was examined using pseudohomophone primes (e.g., brane-BRAIN) and their controls (e.g., brant-BRAIN). Both manipulations modulated the N250 ERP component, which is hypothesized to reflect sublexical processing during visual word recognition. Orthographic (TL) priming and phonological (pseudohomophone) priming were found to have distinct topographical distributions and different timing, with orthographic effects arising earlier than phonological effects.

12.
Recent work on perceptual learning shows that listeners' phonemic representations dynamically adjust to reflect the speech they hear (Norris, McQueen, & Cutler, 2003). We investigate how the perceptual system makes such adjustments, and what (if anything) causes the representations to return to their pre-perceptual learning settings. Listeners are exposed to a speaker whose pronunciation of a particular sound (either /s/ or /ʃ/) is ambiguous (e.g., halfway between /s/ and /ʃ/). After exposure, participants are tested for perceptual learning on two continua that range from /s/ to /ʃ/, one in the Same voice they heard during exposure, and one in a Different voice. To assess how representations revert to their prior settings, half of Experiment 1's participants were tested immediately after exposure; the other half performed a 25-min silent intervening task. The perceptual learning effect was actually larger after such a delay, indicating that simply allowing time to pass does not cause learning to fade. The remaining experiments investigate different ways that the system might unlearn a person's pronunciations: listeners hear the Same or a Different speaker for 25 min with either: no relevant (i.e., 'good') /s/ or /ʃ/ input (Experiment 2), one of the relevant inputs (Experiment 3), or both relevant inputs (Experiment 4). The results support a view of phonemic representations as dynamic and flexible, and suggest that they interact with both higher- (e.g., lexical) and lower-level (e.g., acoustic) information in important ways.

13.
Articulatory and acoustic studies of speech production have shown that the effects of anticipatory coarticulation may extend across several segments of an utterance. The present experiments show that such effects have perceptual significance. In two experiments, a talker produced consonant (C) and vowel (V) sequences in a sentence frame (e.g., "I say pookee") of the form "I say /C V1 C V2/" in which V1 was /u, æ/ and V2 was /i, a/. Each /i, a/ sentence pair was cross-spliced by exchanging the final syllable /C V2/ so that coarticulatory information prior to the crosspoint was inappropriate for the final vowel (V2) in crossed sentences. Recognition time (RT) for V2 in crossed and intact (as spoken) sentences was obtained from practiced listeners. In both experiments RT was slower in crossed sentences; crossed sentences also attracted more false alarms. The pattern of perceptual results was mirrored in the pattern of precross acoustic differences in experimental sentences (e.g., formants F2 and F3 were higher preceding /i/ than preceding /a/). Pretarget variation in the formants jointly predicted the amount of RT interference in crossed sentences. A third experiment found interference (slower RT) and also facilitation (faster RT) from exchanges of pretarget coarticulatory information in sentences. Two final experiments showed that the previous results were not dependent on the use of practiced listeners.

14.
Schiller, N. O. (2008). Cognition, 106(2), 952–962.
Reading aloud is faster when targets (e.g., PAIR) are preceded by visually masked primes sharing just the onset (e.g., pole) compared to all-different primes (e.g., take). This effect is known as the masked onset priming effect (MOPE). One crucial feature of this effect is its presumed non-lexical basis. This aspect of the MOPE is tested in the current study. Dutch participants named pictures having bisyllabic names, which were preceded by visually masked primes. Picture naming was facilitated by first-segment but not last-segment primes, and by first-syllable as well as last-syllable primes. Whole-word primes with first or last segment overlap slowed down picture naming latencies significantly. The first-segment priming effect (i.e., MOPE) cannot be accounted for by non-lexical response competition since pictures cannot be named via the non-lexical route. Instead, the effects obtained in this study can be accommodated by a speech-planning account of the MOPE.

15.
Three cross-modal priming experiments examined the role of suprasegmental information in the processing of spoken words. All primes consisted of truncated spoken Dutch words. Recognition of visually presented word targets was facilitated by prior auditory presentation of the first two syllables of the same words as primes, but only if they were appropriately stressed (e.g., OKTOBER preceded by okTO-); inappropriate stress, compatible with another word (e.g., OKTOBER preceded by OCto-, the beginning of octopus), produced inhibition. Monosyllabic fragments (e.g., OC-) also produced facilitation when appropriately stressed; if inappropriately stressed, they produced neither facilitation nor inhibition. The bisyllabic fragments that were compatible with only one word produced facilitation to semantically associated words, but inappropriate stress caused no inhibition of associates. The results are explained within a model of spoken-word recognition involving competition between simultaneously activated phonological representations followed by activation of separate conceptual representations for strongly supported lexical candidates; at the level of the phonological representations, activation is modulated by both segmental and suprasegmental information.

16.
This study examined the impact of self-referential information at early stages of emotional word processing using an affective masked-priming paradigm in which positive (e.g., espetacular [awesome]) and negative (e.g., horrível [awful]) trait-adjectives were preceded by briefly presented primes that could be self-related (Eu sou [I am]), other-related (Ela é [She is]), or a control (%%%%%). Trait-adjectives were selected from female norms, and only female participants were tested to control for sex differences. Results showed that positive words were categorised faster when preceded by self-related primes than by other-related primes, though not faster than when preceded by control primes. Negative trait-adjectives were not modulated by the type of prime, even though participants were slower when these words were preceded by other-related than by control primes. These findings demonstrate that taking the other-perspective entails a cost, and that the amount of priming produced by self-related and control primes was virtually the same, thus suggesting that assuming the self-perspective is a cognitively effortless process.

17.
Five word-spotting experiments explored the role of consonantal and vocalic phonotactic cues in the segmentation of spoken Italian. The first set of experiments tested listeners' sensitivity to phonotactic constraints cueing syllable boundaries. Participants were slower in spotting words in nonsense strings when target onsets were misaligned (e.g., lago in ri.blago) than when they were aligned (e.g., lago in rin.lago) with phonotactically determined syllabic boundaries. This effect held also for sequences that occur only word-medially (e.g., /tl/ in ri.tlago), and competition effects could not account for the disadvantage in the misaligned condition. Similarly, target detections were slower when their offsets were misaligned (e.g., città in cittàu.ba) than when they were aligned (e.g., città in città.oba) with a phonotactic syllabic boundary. The second set of experiments tested listeners' sensitivity to phonotactic cues that specifically signal lexical (and not just syllable) boundaries. Results corroborate the role of syllabic information in speech segmentation and suggest that Italian listeners make little use of additional phonotactic information that specifically cues word boundaries.

18.
Phonological priming of spoken words refers to improved recognition of targets preceded by primes that share at least one of their constituent phonemes (e.g., BULL-BEER). Phonetic priming refers to reduced recognition of targets preceded by primes that share no phonemes with targets but are phonetically similar to targets (e.g., BULL-VEER). Five experiments were conducted to investigate the role of bias in phonological priming. Performance was compared across conditions of phonological and phonetic priming under a variety of procedural manipulations. Ss in phonological priming conditions systematically modified their responses on unrelated priming trials in perceptual identification, and they were slower and more errorful on unrelated trials in lexical decision than were Ss in phonetic priming conditions. Phonetic and phonological priming effects display different time courses and also different interactions with changes in proportion of related priming trials. Phonological priming involves bias; phonetic priming appears to reflect basic properties of activation and competition in spoken word recognition.

19.
The ability of English speakers to monitor internally and externally generated words for syllables was investigated in this paper. An internal speech monitoring task required participants to silently generate a carrier word on hearing a semantically related prompt word (e.g., reveal—divulge). These productions were monitored for prespecified target strings that were either a syllable match (e.g., /dai/), a syllable mismatch (e.g., /daiv/), or unrelated (e.g., /hju:/) to the initial syllable of the word. In all three experiments, monitoring was faster for the longer target sequence. However, this tendency reached significance only when the longer string also matched a syllable in the carrier word. External speech versions of each experiment were run and yielded a similar influence of syllabicity, but only when the syllable-match string also had a closed structure. It was concluded that any influence of syllabicity found using either task reflected the properties of a shared perception-based monitoring system.

20.
Three lexical decision experiments examined the conditions in which nonwords activate semantics. Lexical decisions to targets (e.g., CAT) were faster when preceded by semantically related nonword primes (e.g., DEG derived from DOG) when the prime was brief and masked; this nonword priming effect was eliminated when the prime was presented for a longer duration. These results are discussed in the context of both parallel distributed processing models and the idea that the occurrence of nonword priming depends upon subjects being unable to verify the identity of the prime.
