Similar Documents
A total of 20 similar documents were found.
1.
Learning lyrics: To sing or not to sing?
According to common practice and oral tradition, learning verbal materials through song should facilitate word recall. In the present study, we provide evidence against this belief. In Experiment 1, 36 university students, half of them musicians, learned an unfamiliar song in three conditions. In the sung-sung condition, the song to be learned was sung, and the response was sung too. In the sung-spoken condition, the response was spoken. In the divided-spoken condition, the presented lyrics (accompanied by music) and the response were both spoken. Superior word recall in the sung-sung condition was predicted. However, fewer words were recalled when singing than when speaking. Furthermore, the mode of presentation, whether sung or spoken, had no influence on lyric recall, in either short- or long-term recall. In Experiment 2, singing was assessed with and without words. Altogether, the results indicate that the text and the melody of a song have separate representations in memory, making singing a dual task to perform, at least in the first steps of learning. Interestingly, musical training had little impact on performance, suggesting that vocal learning is a basic and widespread skill.

2.
The role of left and right temporal lobes in memory for songs (words sung to a tune) was investigated. Patients who had undergone focal cerebral excision for the relief of intractable epilepsy, along with normal control subjects, were tested in 2 recognition memory tasks. The goal of Experiment 1 was to examine recognition of words and of tunes when they were presented together in an unfamiliar song. In Experiment 2, memory for spoken words and tunes sung without words was independently tested in 2 separate recognition tasks. The results clearly showed (a) a deficit after left temporal lobectomy in recognition of text whether sung to a tune or spoken without musical accompaniment, (b) impaired melody recognition when the tune was sung with new words following left or right temporal lobectomy, and (c) impaired melody recognition in the absence of lyrics following right but not left temporal lobectomy. The different role of each temporal lobe in memorizing songs provides evidence for the use of dual memory codes. The verbal code is consistently related to left temporal lobe structures, whereas the melodic code may depend on either or both temporal lobe mechanisms, according to the type of encoding involved.

3.
A priming technique was employed to study the relations between melody and lyrics in song memory. The procedure involved the auditory presentation of a prime and a target taken from the same song, or from unrelated but equally familiar songs. To promote access to memory representations of songs, we varied the format of primes and targets, which were either spoken or sung, using the syllable /la/. In each of the four experiments, a prime taken from the same song as the target facilitated target recognition, independently of the format in which it occurred. The facilitation effects were also found in conditions close to masked priming because prime recognizability was very low, as assessed in Experiment 1 by d' measures. Above all, backward priming effects were observed in Experiments 2, 3, and 4, where song order was reversed in the prime-target sequence, suggesting that words and tones of songs are not connected by strict temporal contingencies. Rather, the results indicate that, in song memory, text and tune are related by tight connections that are bidirectional and automatically activated by relatively abstract information. Rhythmic similarity between linguistic stress pattern and musical meter might account for these priming effects.
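(Note on the d' measure cited above: the abstract does not give the formula, but d' is the standard signal-detection sensitivity index, conventionally computed from the hit rate $H$ and false-alarm rate $F$ as
$$ d' = \Phi^{-1}(H) - \Phi^{-1}(F), $$
where $\Phi^{-1}$ is the inverse of the standard normal cumulative distribution function; a d' near zero indicates that the primes were barely recognizable.)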

4.
Recent findings suggest that infants can remember words from stories over 2 week delays (Jusczyk, P. W., & Hohne, E. A. (1997). Infants' memory for spoken words. Science, 277, 1984-1986). Because music, like language, presents infants with a massively complex auditory learning task, it is possible that infant memory for musical stimuli is equally powerful. Seven-month-old infants heard two Mozart sonata movements daily for 2 weeks. Following a 2 week retention interval, the infants were tested on passages of the familiarized music, and passages taken from similar but novel music. Results from two experiments suggest that the infants retained the familiarized music in long-term memory, and that their listening preferences were affected by the extent to which familiar passages were removed from the musical contexts within which they were originally learned.

5.
Episodic recognition of novel and familiar melodies was examined by asking participants to make judgments about the recency and frequency of presentation of melodies over the course of two days of testing. For novel melodies, recency judgments were poor and participants often confused the number of presentations of a melody with its day of presentation; melodies heard frequently were judged as having been heard more recently than they actually were. For familiar melodies, recency judgments were much more accurate and the number of presentations of a melody helped rather than hindered performance. Frequency judgments were generally more accurate than recency judgments and did not demonstrate the same interaction with musical familiarity. Overall, these findings suggest that (1) episodic recognition of novel melodies is based more on a generalized "feeling of familiarity" than on a specific episodic memory, (2) frequency information contributes more strongly to this generalized memory than recency information, and (3) the formation of an episodic memory for a melody depends either on the overall familiarity of the stimulus or on the availability of a verbal label.

6.
The present research addresses whether music training acts as a mediator of the recall of spoken and sung lyrics and whether presentation rate is the essential variable, rather than the inclusion of melody. In Experiment 1, 78 undergraduates, half with music training and half without, heard spoken or sung lyrics. Recall for sung lyrics was superior to that for spoken lyrics for both groups. In Experiments 2 and 3, presentation rate was manipulated so that the durations of the spoken and the sung materials were equal. With presentation rate equated, there was no advantage for sung over spoken lyrics. In all the experiments, those participants with music training outperformed those without training in all the conditions. The results suggest that music training leads to enhanced memory for verbal material. Previous findings of melody's aiding text recall may be attributed to presentation rate.

7.
In this research, we investigated the effects of voice and face information on the perceptual learning of talkers and on long-term memory for spoken words. In the first phase, listeners were trained over several days to identify voices from words presented auditorily or audiovisually. The training data showed that visual information about speakers enhanced voice learning, revealing cross-modal connections in talker processing akin to those observed in speech processing. In the second phase, the listeners completed an auditory or audiovisual word recognition memory test in which equal numbers of words were spoken by familiar and unfamiliar talkers. The data showed that words presented by familiar talkers were more likely to be retrieved from episodic memory, regardless of modality. Together, these findings provide new information about the representational code underlying familiar talker recognition and the role of stimulus familiarity in episodic word recognition.

8.
This study sought to determine if the retention capacities of learning disabled children with strong visual-spatial skills and weak verbal skills would be improved if verbal material was presented within a musical context. It also sought to determine if children with the reverse pattern of abilities would retain less verbal information if it was presented in a context of music. Twelve verbally oriented and 12 visual-spatially oriented learning disabled children between the ages of 9 and 11, matched on IQ, sex, race, and age, participated. Subjects listened in a counter-balanced fashion to: (a) lyrics sung with instrumental musical accompaniment; (b) lyrics sung without instrumental accompaniment; (c) lyrics spoken with instrumental musical accompaniment; and (d) lyrics spoken without musical accompaniment. Immediately following the presentation of each condition they were tested for recall and recognition of the verbal material contained in the lyrics. The same tests were administered three days later. Subjects in the visual-spatial group obtained significantly higher recognition scores when the lyrics were sung than when they were spoken, whether instrumental musical accompaniment was or was not present. Scores of the verbally oriented group did not differ across conditions. Theoretical and practical implications of the results are discussed.

9.
This study examines musical memory in 12 patients with moderate or severe Alzheimer's disease (AD) and 12 healthy, older adult controls. Participants were asked to distinguish familiar from novel tunes, to identify distortions in melodies, and to sing familiar tunes. Comparison of the AD and control groups showed significant impairment of the AD participants. However, a more complex picture emerged as we compared each individual case to the control group. Five of the AD group performed within the control group range on most tasks. An additional four participants showed partial sparing in that they performed below the range of control participants, but their scores were above the level of chance. The final three participants showed near complete loss of musical memory, as their performance was consistently at or near the level of chance. These results are discussed in terms of the literature on the heterogeneity of cognitive presentation in AD.

10.
Two experiments examined whether the memory representation for songs consists of independent or integrated components (melody and text). Subjects heard a serial presentation of excerpts from largely unfamiliar folksongs, followed by a recognition test. The test required subjects to recognize songs, melodies, or texts and consisted of five types of items: (a) exact songs heard in the presentation; (b) new songs; (c) old tunes with new words; (d) new tunes with old words; and (e) old tunes with old words of a different song from the same presentation (‘mismatch songs’). Experiment 1 supported the integration hypothesis: Subjects' recognition of components was higher in exact songs (a) than in songs with familiar but mismatched components (e). Melody recognition, in particular, was near chance unless the original words were present. Experiment 2 showed that this integration of melody and text occurred also across different performance renditions of a song and that it could not be eliminated by voluntary attention to the melody.

11.
In spite of a large body of empirical research demonstrating the importance of multisensory integration in cognition, there is still little research about multimodal encoding and maintenance effects in working memory. In this study we investigated multimodal encoding in working memory by means of an immediate serial recall task with different modality and format conditions. In a first non-verbal condition participants were presented with sequences of non-verbal inputs representing familiar (concrete) objects, either in visual, auditory or audio-visual formats. In a second verbal condition participants were presented with written, spoken, or bimodally presented words denoting the same objects represented by pictures or sounds in the non-verbal condition. The effects of articulatory suppression were assessed in both conditions. We found a bimodal superiority effect on memory span with non-verbal material, and a larger span with auditory (or bimodal) versus visual presentation with verbal material, with a significant effect of articulatory suppression in the two conditions.

12.
Contrasting results in visual and auditory spatial memory stimulate the debate over the role of sensory modality and attention in identity-to-location binding. We investigated the role of sensory modality in the incidental/deliberate encoding of the location of a sequence of items. In 4 separate blocks, 88 participants memorised sequences of environmental sounds, spoken words, pictures and written words, respectively. After memorisation, participants were asked to recognise old from new items in a new sequence of stimuli. They were also asked to indicate from which side of the screen (visual stimuli) or headphone channel (sounds) the old stimuli were presented in encoding. In the first block, participants were not aware of the spatial requirement, while in blocks 2, 3 and 4 they knew that their memory for item location was going to be tested. Results show significantly lower accuracy of object location memory for the auditory stimuli (environmental sounds and spoken words) than for images (pictures and written words). Awareness of the spatial requirement did not influence localisation accuracy. We conclude that: (a) object location memory is more effective for visual objects; (b) object location is implicitly associated with item identity during encoding; and (c) visual supremacy in spatial memory does not depend on the automaticity of object location binding.

13.
The ability to create temporary binding representations of information from different sources in working memory has recently been found to relate to the development of monolingual word recognition in children. The current study explored this possible relationship in an adult word-learning context. We assessed whether the relationship between cross-modal working memory binding and lexical development would be observed in the learning of associations between unfamiliar spoken words and their semantic referents, and whether it would vary across experimental conditions in first- and second-language word learning. A group of English monolinguals were recruited to learn 24 spoken disyllabic Mandarin Chinese words in association with either familiar or novel objects as semantic referents. They also completed a working memory task in which their ability to temporarily bind auditory-verbal and visual information was measured. Participants’ performance on this task was uniquely linked to their learning and retention of words for both novel objects and for familiar objects. This suggests that, at least for spoken language, cross-modal working memory binding might play a similar role in second language-like (i.e., learning new words for familiar objects) and in more native-like situations (i.e., learning new words for novel objects). Our findings provide new evidence for the role of cross-modal working memory binding in L1 word learning and further indicate that early stages of picture-based word learning in L2 might rely on similar cognitive processes as in L1.

14.
The role of stimulus similarity as an organising principle in short-term memory was explored in a series of seven experiments. Each experiment involved the presentation of a short sequence of items that were drawn from two distinct physical classes and arranged such that item class changed after every second item. Following presentation, one item was re-presented as a probe for the ‘target’ item that had directly followed it in the sequence. Memory for the sequence was considered organised by class if probability of recall was higher when the probe and target were from the same class than when they were from different classes. Such organisation was found when one class was auditory and the other was visual (spoken vs. written words, and sounds vs. pictures). It was also found when both classes were auditory (words spoken in a male voice vs. words spoken in a female voice) and when both classes were visual (digits shown in one location vs. digits shown in another). It is concluded that short-term memory can be organised on the basis of sensory modality and on the basis of certain features within both the auditory and visual modalities.

15.
Previous research has suggested that the use of song can facilitate recall of text. This study examined the effect of repetition of a melody across verses, familiarity with the melody, rhythm, and other structural processing hypotheses to explain this phenomenon. Two experiments were conducted, each with 100 participants recruited from undergraduate Psychology programs (44 men, 156 women, M age = 28.5 yr., SD = 9.4). In Exp. 1, participants learned a four-verse ballad in one of five encoding conditions (familiar melody, unfamiliar melody, unknown rhythm, known rhythm, and spoken). Exp. 2 assessed the effect of familiarity in rhythm-only conditions and of pre-exposure with a previously unfamiliar melody. Measures taken were number of verbatim words recalled and number of lines produced with correct syllabic structure. Analysis indicated that rhythm, with or without musical accompaniment, can facilitate recall of text, suggesting that rhythm may provide a schematic frame to which text can be attached. Similarly, familiarity with the rhythm or melody facilitated recall. Findings are discussed in terms of integration and dual-processing theories.

16.
The relative influences of language-related and memory-related constraints on the learning of novel words and sequences were examined by comparing individual differences in performance of children with and without specific deficits in either language or working memory. Children recalled lists of words in a Hebbian learning protocol in which occasional lists repeated, yielding improved recall over the course of the task on the repeated lists. The task involved presentation of pictures of common nouns followed immediately by equivalent presentations of the spoken names. The same participants also completed a paired-associate learning task involving word–picture and nonword–picture pairs. Hebbian learning was observed for all groups. Domain-general working memory constrained immediate recall, whereas language abilities impacted recall in the auditory modality only. In addition, working memory constrained paired-associate learning generally, whereas language abilities disproportionately impacted novel word learning. Overall, all of the learning tasks were highly correlated with domain-general working memory. The learning of nonwords was additionally related to general intelligence, phonological short-term memory, language abilities, and implicit learning. The results suggest that distinct associations between language- and memory-related mechanisms support learning of familiar and unfamiliar phonological forms and sequences.

17.
We sought to establish whether novel words can become integrated into existing semantic networks by teaching participants new meaningful words and then using these new words as primes in two semantic priming experiments, in which participants carried out a lexical decision task to familiar words. Importantly, at no point in training did the novel words co-occur with the familiar words that served as targets in the primed lexical decision task, allowing us to evaluate semantic priming in the absence of direct association. We found that familiar words were primed by the newly related novel words, both when the novel word prime was unmasked (Experiment 1) and when it was masked (Experiment 2), suggesting that the new words had been integrated into semantic memory. Furthermore, this integration was strongest after a 1-week delay and was independent of explicit recall of the novel word meanings: Forgetting of meanings did not attenuate priming. We argue that even after brief training, newly learned words become an integrated part of the adult mental lexicon rather than being episodically represented separately from the lexicon.

18.
The present study was designed to examine age differences in the ability to use voice information acquired intentionally (Experiment 1) or incidentally (Experiment 2) as an aid to spoken word identification. Following both implicit and explicit voice learning, participants were asked to identify novel words spoken either by familiar talkers (ones they had been exposed to in the training phase) or by 4 unfamiliar voices. In both experiments, explicit memory for talkers' voices was significantly lower in older than in young listeners. Despite this age-related decline in voice recognition, however, older adults exhibited equivalent, and in some cases greater, benefit than young listeners from having words spoken by familiar talkers. Implications of the findings for age-related changes in explicit versus implicit memory systems are discussed.

19.
A hotly debated question in word learning concerns the conditions under which newly learned words compete or interfere with familiar words during spoken word recognition. This has recently been described as a key marker of the integration of a new word into the lexicon and was thought to require consolidation (Dumay & Gaskell, Psychological Science, 18, 35–39, 2007; Gaskell & Dumay, Cognition, 89, 105–132, 2003). Recently, however, Kapnoula, Packard, Gupta, and McMurray (Cognition, 134, 85–99, 2015) showed that interference can be observed immediately after a word is first learned, implying very rapid integration of new words into the lexicon. It is an open question whether these kinds of effects derive from episodic traces of novel words or from more abstract and lexicalized representations. Here we addressed this question by testing inhibition for newly learned words using training and test stimuli presented in different talker voices. During training, participants were exposed to a set of nonwords spoken by a female speaker. Immediately after training, we assessed the ability of the novel word forms to inhibit familiar words, using a variant of the visual world paradigm. Crucially, the test items were produced by a male speaker. An analysis of fixations showed that even with a change in voice, newly learned words interfered with the recognition of similar known words. These findings show that lexical competition effects from newly learned words spread across different talker voices, which suggests that newly learned words can be sufficiently lexicalized, and abstract with respect to talker voice, without consolidation.

20.
Three experiments were designed to investigate two explanations for the integration effect in memory for songs (Serafine, Crowder, & Repp, 1984; Serafine, Davidson, Crowder, & Repp, 1986). The integration effect is the finding that recognition of the melody (or text) of a song is better in the presence of the text (or melody) with which it had been heard originally than in the presence of a different text (or melody). One explanation for this finding is the physical interaction hypothesis, which holds that one component of a song exerts subtle but memorable physical changes on the other component, making the latter different from what it would be with a different companion. In Experiments 1 and 2, we investigated the influence that words could exert on the subtle musical character of a melody. A second explanation for the integration effect is the association-by-contiguity hypothesis, which holds that any two events experienced in close temporal proximity may become connected in memory such that each acts as a recall cue for the other. In Experiment 3, we investigated the degree to which simultaneous presentations of spoken text with a hummed melody would induce an association between the two components. The results gave encouragement for both explanations and are discussed in terms of the distinction between encoding specificity and independent associative bonding.
