Similar Documents
20 similar documents found (search time: 46 ms)
1.
Rosenblum, Miller, and Sanchez (Psychological Science, 18, 392-396, 2007) found that subjects first trained to lip-read a particular talker were then better able to perceive the auditory speech of that same talker, as compared with that of a novel talker. This suggests that the talker experience a perceiver gains in one sensory modality can be transferred to another modality to make that speech easier to perceive. An experiment was conducted to examine whether this cross-sensory transfer of talker experience could occur (1) from auditory to lip-read speech, (2) with subjects not screened for adequate lipreading skill, (3) when both a familiar and an unfamiliar talker are presented during lipreading, and (4) for both old (presentation set) and new words. Subjects were first asked to identify a set of words from a talker. They were then asked to perform a lipreading task from two faces, one of which was of the same talker they heard in the first phase of the experiment. Results revealed that subjects who lip-read from the same talker they had heard performed better than those who lip-read a different talker, regardless of whether the words were old or new. These results add further evidence that learning of amodal talker information can facilitate speech perception across modalities and also suggest that this information is not restricted to previously heard words.

2.
Two experiments investigated the nature of the code in which lip-read speech is processed. In Experiment 1 subjects repeated words, presented with lip-read and masked auditory components out of synchrony by 600 ms. In one condition the lip-read input preceded the auditory input, and in the second condition the auditory input preceded the lip-read input. Direction of the modality lead did not affect the accuracy of report. Unlike auditory/graphic letter matching (Wood, 1974), the processing code used to match lip-read and auditory stimuli is insensitive to the temporal ordering of the input modalities. In Experiment 2, subjects were presented with two types of lists of colour names: in one list some words were heard, and some read; the other list consisted of heard and lip-read words. When asked to recall words from only one type of input presentation, subjects confused lip-read and heard words more frequently than they confused heard and read words. The results indicate that lip-read and heard speech share a common, non-modality specific, processing stage that excludes graphically presented phonological information.

3.
Recognition of and memory for a spoken word can be facilitated by a prior presentation of that word spoken by the same talker. However, it is less clear whether this speaker congruency advantage generalizes to facilitate recognition of unheard related words. The present investigation employed a false memory paradigm to examine whether information about a speaker’s identity in items heard by listeners could influence the recognition of novel items (critical intruders) phonologically or semantically related to the studied items. In Experiment 1, false recognition of semantically associated critical intruders was sensitive to speaker information, though only when subjects attended to talker identity during encoding. Results from Experiment 2 also provide some evidence that talker information affects the false recognition of critical intruders. Taken together, the present findings indicate that indexical information is able to contact the lexical-semantic network to affect the processing of unheard words.

4.
A mixed-modality (visual and auditory) continuous recognition task, followed immediately by a final recognition test, was administered to young (18-23 years), mid-life (38-50 years), and older (60-74 years) women. Subjects gave recognition responses for both the words and their presentation modality. Although older adults remembered less information about input mode than did the two younger groups, the age decrement was not the result of faster forgetting of such information by the elderly. When a ceiling effect at the initial lag was taken into account, forgetting rates for both words and input mode were comparable across the adult life span.

5.
6.
The context effect in implicit memory is the finding that presentation of words in meaningful context reduces or eliminates repetition priming compared to words presented in isolation. Virtually all of the research on the context effect has been conducted in the visual modality, but preliminary results raise the question of whether context effects are less likely in auditory priming. Context effects in the auditory modality were systematically examined in five experiments using the auditory implicit tests of word-fragment and word-stem completion. The first three experiments revealed the classical context effect in auditory priming: Words heard in isolation produced substantial priming, whereas there was little priming for the words heard in meaningful passages. Experiments 4 and 5 revealed that a meaningful context is not required for the context effect to be obtained: Words presented in an unrelated audio stream produced less priming than words presented individually and no more priming than words presented in meaningful passages. Although context effects are often explained in terms of the transfer-appropriate processing (TAP) framework, the present results are better explained by Masson and MacLeod's (2000) reduced-individuation hypothesis.
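Priming in completion tests of this kind is conventionally scored as the completion-rate advantage for studied over unstudied words. The following is a minimal, hypothetical Python sketch of that scoring; the condition labels and counts are invented and are not data from the study.

```python
# Hypothetical scoring sketch: repetition priming in word-stem/fragment
# completion is the completion rate for studied words minus the baseline
# completion rate for unstudied words. All counts below are invented.
def priming_score(completed_studied, n_studied, completed_unstudied, n_unstudied):
    return completed_studied / n_studied - completed_unstudied / n_unstudied

# Invented counts echoing the reported pattern: substantial priming for
# words heard in isolation, little priming for words heard in passages.
print("isolated words:", priming_score(18, 40, 8, 40))  # 0.25
print("passage words: ", priming_score(10, 40, 8, 40))  # 0.05
```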

7.
High-frequency words are recalled better than are low-frequency words, but low-frequency words produce higher hit rates in a recognition test than do high-frequency words. Two experiments provided new data on the phenomenon and also evidence relevant to the dual process model of recognition, which postulates that recognition judgments are a function of increments in item familiarity and of item retrievability. First, recall and recognition by subjects who initially performed a single lexical decision task were compared with those of subjects who also gave definitions of high-, low-, and very low-frequency target words. In the second experiment, subjects initially performed either a semantic, elaborative task or an integrative task that focused attention on the physical, perceptual features of the same words. Both experiments showed that extensive elaborative processing results in higher recall and hit rates but lower false alarm rates, whereas word frequency has a monotonic, linear effect on recall and false alarm rates, but a paradoxical, curvilinear effect on hit rates. Elaboration is apparently more effective when the potential availability of meaningful connections with other structures is greater (as for high-frequency words). The results are consistent with the dual process model.
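Hit and false-alarm rates of the kind analyzed here are standard signal-detection quantities. Below is a minimal Python sketch of how they, and a d' sensitivity estimate, are computed; the response data are invented, and the pattern shown (lower hit rates and higher false alarms for high-frequency words) merely echoes the abstract.

```python
# Sketch of recognition-memory scoring with invented responses.
from statistics import NormalDist

def rates(old_responses, new_responses):
    """old_responses: 1 if a studied item was called 'old', else 0.
    new_responses: 1 if an unstudied item was called 'old', else 0."""
    hit_rate = sum(old_responses) / len(old_responses)
    fa_rate = sum(new_responses) / len(new_responses)
    return hit_rate, fa_rate

def d_prime(hit_rate, fa_rate):
    # Standard signal-detection sensitivity: z(hits) - z(false alarms).
    z = NormalDist().inv_cdf
    return z(hit_rate) - z(fa_rate)

# Invented data: high-frequency (HF) vs. low-frequency (LF) test items.
hf_hit, hf_fa = rates([1, 1, 0, 1, 0, 1, 1, 0], [1, 0, 0, 1, 0, 0, 0, 0])
lf_hit, lf_fa = rates([1, 1, 1, 1, 0, 1, 1, 1], [0, 0, 0, 1, 0, 0, 0, 0])
print(f"HF: hits={hf_hit:.2f} FAs={hf_fa:.2f} d'={d_prime(hf_hit, hf_fa):.2f}")
print(f"LF: hits={lf_hit:.2f} FAs={lf_fa:.2f} d'={d_prime(lf_hit, lf_fa):.2f}")
```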

8.
The present study compared the effects of two classes of experimental manipulations on the recognition and cued recall of target words learned in the presence of list cues. One class of manipulations, learning instructions (repetition vs. meaningful rehearsal), had similar effects on recall and recognition, whereas the other, preexperimental association between target and cue words, had separable effects: Cue-to-target association affected only recall, while target-to-cue association affected only recognition performance. Recall and recognition were thus viewed as fundamentally similar processes, both of which require retrieval operations. Differences between the two were attributed to the differential abilities of the recall and recognition retrieval cues to access the original episodic event.

9.
Two experiments demonstrated the way in which musicians and nonmusicians process realistic music encountered for the first time. A set of tunes whose members were related to each other by a number of specific musical relationships was constructed. In Experiment 1, subjects gave similarity judgments of all pairs of tunes, which were analyzed by the ADDTREE clustering program. Musicians and nonmusicians gave essentially equivalent results: Tunes with different rhythms were rated as being very dissimilar, whereas tunes identical except for being in a major versus a minor mode were rated as being highly similar. In Experiment 2, subjects learned to identify the tunes, and their errors formed a confusion matrix. The matrix was submitted to a clustering analysis. Results from the two experiments corresponded better for the nonmusicians than for the musicians. Musicians presumably exceed nonmusicians in the ability to categorize music in multiple ways, but even nonmusicians extract considerable information from newly heard music.
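ADDTREE fits additive trees to pairwise proximities; as a rough stand-in, the sketch below runs a standard agglomerative clustering from SciPy on an invented dissimilarity matrix to show the general shape of such an analysis. The tune names and all numbers are hypothetical, not the study's data.

```python
# Hedged sketch: clustering tunes from pairwise dissimilarity judgments.
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import squareform

tunes = ["A-major", "A-minor", "B-major", "B-minor"]
# Invented dissimilarities: major/minor versions of the same tune are
# close, tunes with different rhythms (A vs. B) are far apart.
dissim = np.array([
    [0.0, 0.1, 0.9, 0.9],
    [0.1, 0.0, 0.9, 0.9],
    [0.9, 0.9, 0.0, 0.1],
    [0.9, 0.9, 0.1, 0.0],
])
# linkage() expects a condensed (upper-triangular) distance vector.
Z = linkage(squareform(dissim), method="average")
labels = fcluster(Z, t=2, criterion="maxclust")
for tune, lab in zip(tunes, labels):
    print(tune, "-> cluster", lab)
```

The same routine would apply to a confusion matrix from the identification task, after converting confusion counts to dissimilarities.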

10.
Although there is evidence for preferential perceptual processing of written emotional information, the effects of attentional manipulations and the time course of affective processing require further clarification. In this study, we attempted to investigate how the emotional content of words modulates cerebral functioning (event-related potentials, ERPs) and behavior (reaction times, RTs) when the content is task-irrelevant (emotional Stroop Task, EST) or task-relevant (emotional categorization task, ECT), in a sample of healthy middle-aged women. In the EST, the RTs were longer for emotional words than for neutral words, and in the ECT, they were longer for neutral and negative words than for positive words. A principal components analysis of the ERPs identified various temporospatial factors that were differentially modified by emotional content. P2 was the first emotion-sensitive component, with enhanced factor scores for negative nouns across tasks. The N2 and late positive complex had enhanced factor scores for emotional relative to neutral information only in the ECT. The results reinforce the idea that written emotional information has a preferential processing route, both when it is task-irrelevant (producing behavioral interference) and when it is task-relevant (facilitating the categorization). After early automatic processing of the emotional content, late ERPs become more emotionally modulated as the level of attention to the valence increases.
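A temporospatial PCA of ERPs typically unfolds each trial's time-by-electrode grid into one feature vector before extracting components whose scores can then be compared across conditions. The sketch below shows that reshaping step with scikit-learn's PCA on synthetic data; the shapes and data are invented, not the study's.

```python
# Hedged sketch of a temporospatial PCA over ERP waveforms: rows are
# trials (or subject/condition cells), columns are time x electrode.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
n_trials, n_times, n_electrodes = 40, 100, 8
erps = rng.normal(size=(n_trials, n_times, n_electrodes))  # synthetic

# Flatten each trial's time x electrode grid into one feature vector.
X = erps.reshape(n_trials, n_times * n_electrodes)

pca = PCA(n_components=5)
scores = pca.fit_transform(X)  # one row of factor scores per trial

# Factor scores per component could then be compared across conditions
# (e.g., emotional vs. neutral words) with a standard ANOVA.
print(scores.shape)                   # (40, 5)
print(pca.explained_variance_ratio_)  # variance captured per component
```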

11.
Phonology and orthography are closely related in some languages, such as English, and they are nearly unrelated in others, such as Chinese. The effects of these differences were assessed in a study of the roles of phonemic, graphemic, and semantic information on lexical coding and memory for Chinese logographs and English words. Some of the stimuli in the two languages were selected such that the natural confounding between phonemic and graphemic information in English was matched in the set of Chinese words used. An initial scaling study indicated that this attempt to equate degree of phonemic-graphemic confounding was successful. A second experiment used a recognition memory task for English and Chinese words with separate subject groups of native speakers of the two languages. Subjects were to select one of a pair of test words that was phonemically, graphemically, or semantically similar to a word on a previously studied list. Differences in the dimensions of lexical coding in memory were demonstrated in significant Stimulus Type by Decision Type interactions in the recognition data. Chinese-speaking subjects responded most rapidly and accurately in the graphemic recognition task, whereas performance was generally equivalent in all three tasks for the English-speaking subjects. Alphabetic and logographic writing systems apparently activate different coding and memory mechanisms such that logographic characters produce significantly more visual information in memory, whereas alphabetic words result in a more integrated code involving visual, phonological, and semantic information.

12.
Several experiments examined repetition priming among morphologically related words as a tool to study lexical organization. The first experiment replicated a finding by Stanners, Neiser, Hernon, and Hall (Journal of Verbal Learning and Verbal Behavior, 1979, 18, 399-412), that whereas inflected words prime their unaffixed morphological relatives as effectively as do the unaffixed forms themselves, derived words are effective, but weaker, primes. The experiment also suggested, however, that this difference in priming may have an episodic origin relating to the less formal similarity of derived than of inflected words to unaffixed morphological relatives. A second experiment reduced episodic contributions to priming and found equally effective priming of unaffixed words by themselves, by inflected relatives, and by derived relatives. Two additional experiments found strong priming among relatives sharing the spelling and pronunciation of the unaffixed stem morpheme, sharing spelling alone, or sharing neither formal property exactly. Overall, results with auditory and visual presentations were similar. Interpretations that repetition priming reflects either repeated access to a common lexical entry or associative semantic priming are both rejected in favor of a lexical organization in which components of a word (e.g., a stem morpheme) may be shared among distinct words without the words themselves, in any sense, sharing a “lexical entry.”

13.
The effects of word frequency, word length, and practice were examined in oral productions of subjects reading lists of 25 rare or common monosyllabic words. Articulation and pause durations, their ratio, and total reading durations were derived from recordings of subjects’ speech. Recorded speech was sampled at 10 kHz, and a criterion of eight times the mean noise level was used to classify productions as articulation or pause. Lists of high-frequency words were read more quickly than lists of low-frequency words. No differences were observed in the articulation component. Pause duration was greater for rare than for common words. The ratio of pause to articulation varied with length and word type. No differences were found for high-frequency words, but the ratio of five-letter words was significantly greater than that of three- or four-letter rare words. Results were discussed in relation to the nature and locus of the word-frequency effect. Criteria for defining and measuring speech productions were also raised.
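The articulation/pause criterion described above (10 kHz sampling; amplitude above eight times the mean noise level counts as articulation) maps directly onto a simple thresholding routine. The following is a hedged Python sketch of that logic on synthetic signals, not the study's recordings; all signal parameters are invented.

```python
# Sketch of amplitude-threshold segmentation of speech into
# articulation and pause, following the stated 8x-mean-noise criterion.
import numpy as np

FS = 10_000  # sampling rate in Hz (10 kHz, as in the study)

def articulation_pause_durations(signal, noise):
    """Return (articulation_s, pause_s) for a 1-D amplitude signal."""
    threshold = 8 * np.mean(np.abs(noise))  # 8 x mean noise level
    speaking = np.abs(signal) > threshold   # per-sample classification
    articulation = speaking.sum() / FS
    pause = (~speaking).sum() / FS
    return articulation, pause

# Synthetic example: 0.5 s of "speech" between two 0.25 s silences.
rng = np.random.default_rng(1)
noise = rng.normal(0, 0.01, FS)  # 1 s recording of the noise floor
sig = np.concatenate([
    rng.normal(0, 0.01, FS // 4),  # pause
    rng.normal(0, 0.5, FS // 2),   # articulation
    rng.normal(0, 0.01, FS // 4),  # pause
])
art, pause = articulation_pause_durations(sig, noise)
print(f"articulation: {art:.2f} s, pause: {pause:.2f} s, ratio: {pause/art:.2f}")
```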

14.
People often fail to consciously perceive visual events that are outside the focus of attention, a phenomenon referred to as inattentional blindness or IB (Mack & Rock, 1998). Here, we investigated IB for words within and across sensory modalities (visually and auditorily) in order to assess whether dividing attention across different senses has the same consequences as dividing attention within an individual sensory modality. Participants were asked to monitor a rapid stream of pictures or sounds presented concurrently with task-irrelevant words (spoken or written). A word recognition test was used to measure the processing of unattended words compared to word recognition levels after explicitly monitoring the word stream. We were able to produce high levels of IB for visually and auditorily presented words under unimodal conditions (Experiment 1) as well as under crossmodal conditions (Experiment 2). A further manipulation revealed, however, that IB is less prevalent when attention is divided across modalities than within the same modality (Experiment 3). These findings are explained in terms of the attentional load hypothesis and suggest that, contrary to some claims, attention resources are to a certain extent shared across sensory modalities.

15.
During presentation of auditory and visual lists of words, different groups of subjects generated words that either rhymed with the presented words or that were associates. Immediately after list presentation, subjects recalled either the presented or the generated words. After presentation and test of all lists, a final free recall test and a recognition test were given. Visual presentation generally produced higher recall and recognition than did auditory presentation for both encoding conditions. The results are not consistent with explanations of modality effects in terms of echoic memory or greater temporal distinctiveness of auditory items. The results are more in line with the separate-streams hypothesis, which argues for different kinds of input processing for auditory and visual items.

16.
Temporal order recognition memory has been examined previously with tasks involving a recency judgment between a pair of items in a preceding string. Recency judgments are impaired when the earlier item is repeated. The present study employed the comparative recency judgment paradigm, with the lists composed of words. The effect of the inclusion in the list of a high associate of the earlier test item was examined and compared to the effect of repetition. Associative interference was observed, but not in all conditions. Direction of association was a significant factor. The results were interpreted in terms of a model of word recognition proposed by Morton.

17.
These investigations examined subjects’ serial recall of lipread digit lists accompanied by an auditory pulse train. The pulse train indicated the pitch of voiced speech (buzz-speech) of the seen speaker as she was speaking. As a purely auditory signal, it could not support item identification. Such buzz-speech recall was compared with silent lipread list recall and with the recall of buzz-speech lists to which a pure tone had been added (buzz-and-beep lists). No significant difference in overall accuracy of recall emerged for the three types of lipread list; however, there were significant differences in the shape of the serial recall function for the three list types. Recency characterized the silent and the buzz-speech lists, and these lists differed in their varying susceptibilities to a range of speechlike suffixes. By contrast, adding a pure tone to a buzz-speech list (buzz-and-beep) produced little recency and no further recall loss as a function of suffix type. We discuss these effects with reference to the contrast between sensory-similarity and speechlikeness accounts of auditory recency and suffix effects. Sensory-similarity accounts cannot capture the effects reported here, but processing in a speech mode (buzz-and-beep) need not always lead to recency effects like those resulting from clearly heard or lipread lists.

18.
Research with antisocial individuals suggests that callous-unemotional (CU) traits, a dimension of psychopathy, consistently predict severe antisocial behaviours and correlate with deficits in recognizing negative emotions, especially fearful facial expressions. However, the generalizability of these findings to non-antisocial populations remains uncertain and largely unexplored. This small, exploratory study aimed to extend this research by measuring CU traits and facial emotion recognition in university students with Attention-Deficit Hyperactivity Disorder (ADHD), learning disabilities, other psychiatric disorders, and comparison participants with physical/sensory disabilities. As the clinical groups can exhibit deficits in emotion recognition, this study sought to shed light on the candidacy of CU traits as a factor in emotion recognition. Results suggested that individuals in the diagnostic groups possess similar levels of CU traits to the comparison group and that the relationship between CU traits and emotion recognition deficits previously seen in antisocial populations is not present in this sample. Contrary to the hypothesis, those in the diagnostic groups displayed similar levels of accuracy on an emotion recognition task as the comparison group. Recommendations are made for future research to use more specific and representative diagnostic populations to further assess the relationships between CU traits and emotion recognition in non-antisocial populations.

19.
Two studies used a new paradigm to examine preschool children's (i.e., 2-, 3-, 4-, and 5-year-olds) word learning across multiple sense modalities. In Study 1 (n = 60), children heard a word for an object that they touched but did not see, while word learning was examined using objects that were seen but not touched. In Study 2 (n = 60), children heard a word for an object that they saw but did not touch, while word learning was examined using objects that were touched but not seen. Findings from both studies revealed that children were able to learn words by coordinating information across multiple sense modalities and that word learning improved with age. These findings are discussed in terms of E. J. Gibson's differentiation theory (1969, 1988).

20.
The effects of context on item-based directed forgetting were assessed. Study words were presented against different background pictures and were followed by a cue to remember (R) or forget (F) the target item. The effects of incidental and intentional encoding of context on recognition of the study words were examined in Experiments 1 and 2. Recognition memory for the picture contexts was assessed in Experiments 3a and 3b. Recognition was greater for R-cued compared to F-cued targets, demonstrating an effect of directed forgetting. In contrast, no directed forgetting effect was seen for the background pictures. An effect of context-dependent recognition was seen in Experiments 1 and 2, such that the hit rate and the false-alarm rate were greater for items tested in an old compared to a novel context. An effect of context-dependent discrimination was also observed in Experiment 2 as the hit rate was greater for targets shown in their same old study context compared to a different old context. The effects of context and directed forgetting did not interact. The results are consistent with Malmberg and Shiffrin’s (Journal of Experimental Psychology: Learning, Memory, and Cognition, 31, 322–336, 2005) “one-shot” context storage hypothesis that assumes that a fixed amount of context is stored in the first 1 to 2 s of the presentation of the study item. The effects of context are independent of item-based directed forgetting because context is encoded prior to the R or F cue, and the differential processing of target information that gives rise to the directed forgetting effect occurs after the cue.
