11.
James M. McQueen, Cognitive Science, 2003, 27(5): 795-799
Magnuson, McMurray, Tanenhaus, and Aslin [Cogn. Sci. 27 (2003) 285] suggest that they have evidence of lexical feedback in speech perception, and that this evidence thus challenges the purely feedforward Merge model [Behav. Brain Sci. 23 (2000) 299]. This evidence is open to an alternative explanation, however, one which preserves the assumption in Merge that there is no lexical–prelexical feedback during on-line speech processing. This explanation invokes the distinction between perceptual processing that occurs in the short term, as an utterance is heard, and processing that occurs over the longer term, for perceptual learning.
12.
Richard Kammann, Rosemary Smith, Carey Martin, Malcolm McQueen, Journal of Personality, 1984, 52(2): 107-122
The generally low degree of agreement between self-ratings on personality traits and ratings by others may be interpreted from the viewpoint that self-reports reflect people's experience of themselves but not necessarily their behaviors. A detailed analysis of self and other ratings on subjective well-being as a central dimension of human experience is consistent with this phenomenological view. Ratings of well-being were not significantly correlated with rated behaviors either in self-ratings or in ratings by others. Screening subjects in terms of avowed consistency and observability on a trait did not improve self-other agreement for well-being, nor did it replicate the individual trait effects reported by Kenrick and Stringfield (1980). Judgments by others were found to have poor interjudge reliability and to reflect biases associated with projection of own well-being and a halo effect organized around the subject's perceived friendliness or likability. It was demonstrated that pooling the judgments of several observers should not and does not lead to accurate prediction of the phenomenal personality, and that accuracy may generally depend on the level of self-disclosure.
13.
A perceptual learning experiment provides evidence that the mental lexicon cannot consist solely of detailed acoustic traces of recognition episodes. In a training lexical decision phase, listeners heard an ambiguous [f–s] fricative sound, replacing either [f] or [s] in words. In a test phase, listeners then made lexical decisions to visual targets following auditory primes. Critical materials were minimal pairs that could be a word with either [f] or [s] (cf. English knife–nice), none of which had been heard in training. Listeners interpreted the minimal pair words differently in the second phase according to the training received in the first phase. Therefore, lexically mediated retuning of phoneme perception not only influences categorical decisions about fricatives (Norris, McQueen, & Cutler, 2003), but also benefits recognition of words outside the training set. The observed generalization across words suggests that this retuning occurs prelexically. Therefore, lexical processing involves sublexical phonological abstraction, not only accumulation of acoustic episodes.
14.
D. Norris, S. Butterfield, J. M. McQueen, A. Cutler, Quarterly Journal of Experimental Psychology, 2006, 59(9): 1505-1515
Participants made visual lexical decisions to upper-case words and nonwords, and then categorized an ambiguous N-H letter continuum. The lexical decision phase included different exposure conditions: Some participants saw an ambiguous letter "?", midway between N and H, in N-biased lexical contexts (e.g., REIG?), plus words with unambiguous H (e.g., WEIGH); others saw the reverse (e.g., WEIG?, REIGN). The first group categorized more of the test continuum as N than did the second group. Control groups, who saw "?" in nonword contexts (e.g., SMIG?), plus either of the unambiguous word sets (e.g., WEIGH or REIGN), showed no such subsequent effects. Perceptual learning about ambiguous letters therefore appears to be based on lexical knowledge, just as in an analogous speech experiment (Norris, McQueen, & Cutler, 2003) which showed similar lexical influence in learning about ambiguous phonemes. We argue that lexically guided learning is an efficient general strategy available for exploitation by different specific perceptual tasks.
15.
The role of prosodic boundaries in the resolution of lexical embedding in speech comprehension (total citations: 6; self-citations: 0; citations by others: 6)
Participants' eye movements were monitored as they heard sentences and saw four pictured objects on a computer screen. Participants were instructed to click on the object mentioned in the sentence. There were more transitory fixations to pictures representing monosyllabic words (e.g. ham) when the first syllable of the target word (e.g. hamster) had been replaced by a recording of the monosyllabic word than when it came from a different recording of the target word. This demonstrates that a phonemically identical sequence can contain cues that modulate its lexical interpretation. This effect was governed by the duration of the sequence, rather than by its origin (i.e. which type of word it came from). The longer the sequence, the more monosyllabic-word interpretations it generated. We argue that cues to lexical-embedding disambiguation, such as segmental lengthening, result from the realization of a prosodic boundary that often but not always follows monosyllabic words, and that lexical candidates whose word boundaries are aligned with prosodic boundaries are favored in the word-recognition process.
16.
This study demonstrates that listeners use lexical knowledge in perceptual learning of speech sounds. Dutch listeners first made lexical decisions on Dutch words and nonwords. The final fricative of 20 critical words had been replaced by an ambiguous sound, between [f] and [s]. One group of listeners heard ambiguous [f]-final words (e.g., [WItlo?], from witlof, chicory) and unambiguous [s]-final words (e.g., naaldbos, pine forest). Another group heard the reverse (e.g., ambiguous [na:ldbo?], unambiguous witlof). Listeners who had heard [?] in [f]-final words were subsequently more likely to categorize ambiguous sounds on an [f]-[s] continuum as [f] than those who heard [?] in [s]-final words. Control conditions ruled out alternative explanations based on selective adaptation and contrast. Lexical information can thus be used to train categorization of speech. This use of lexical information differs from the on-line lexical feedback embodied in interactive models of speech perception. In contrast to on-line feedback, lexical feedback for learning is of benefit to spoken word recognition (e.g., in adapting to a newly encountered dialect).
17.
The influence of the lexicon on phonetic categorization: stimulus quality in word-final ambiguity (total citations: 2; self-citations: 0; citations by others: 2)
J. M. McQueen, Journal of Experimental Psychology: Human Perception and Performance, 1991, 17(2): 433-443
The categorization of word-final phonemes provides a test to distinguish between an interactive and an autonomous model of speech recognition. Word-final lexical effects ought to be stronger than word-initial lexical effects, and the models make different reaction time (RT) predictions only for word-final decisions. A first experiment found no lexical shifts between the categorization functions of word-final fricatives in pairs such as fish-fiss and kish-kiss. In a second experiment, with stimuli degraded by low-pass filtering, reliable lexical shifts did emerge. Both models need revision to account for this stimulus-quality effect. Stimulus quality rather than stimulus ambiguity per se determines the extent of lexical involvement in phonetic categorization. Furthermore, the lexical shifts were limited to fast RT ranges, contrary to the interactive model's predictions. These data therefore favor an autonomous bottom-up model of speech recognition.
18.
Recognizing spoken language involves automatic activation of multiple candidate words. The process of selection between candidates is made more efficient by inhibition of embedded words (like egg in beg) that leave a portion of the input stranded (here, b). Results from European languages suggest that this inhibition occurs when consonants are stranded but not when syllables are stranded. The reason why leftover syllables do not lead to inhibition could be that in principle they might themselves be words; in European languages, a syllable can be a word. In Sesotho (a Bantu language), however, a single syllable cannot be a word. We report that in Sesotho, word recognition is inhibited by stranded consonants, but stranded monosyllables produce no more difficulty than stranded bisyllables (which could be Sesotho words). This finding suggests that the viability constraint which inhibits spurious embedded word candidates is not sensitive to language-specific word structure, but is universal.
19.
Are listeners able to adapt to a foreign-accented speaker who has, as is often the case, an inconsistent accent? Two groups of native Dutch listeners participated in a cross-modal priming experiment, either in a consistent-accent condition (German-accented items only) or in an inconsistent-accent condition (German-accented and nativelike pronunciations intermixed). The experimental words were identical for both groups (words with vowel substitutions characteristic of German-accented speech); additional contextual words differed in accentedness (German-accented or nativelike words). All items were spoken by the same speaker: a German native who could produce the accented forms but could also pass for a Dutch native speaker. Listeners in the consistent-accent group were able to adapt quickly to the speaker (i.e., showed facilitatory priming for words with vocalic substitutions). Listeners in the inconsistent-accent condition showed adaptation to words with vocalic substitutions only in the second half of the experiment. These results indicate that adaptation to foreign-accented speech is rapid. Accent inconsistency slows listeners down initially, but a short period of additional exposure is enough for them to adapt to the speaker. Listeners can therefore tolerate inconsistency in foreign-accented speech.