Similar Articles

20 similar articles found.
1.
Two experiments examined whether the memory representation for songs consists of independent or integrated components (melody and text). Subjects heard a serial presentation of excerpts from largely unfamiliar folksongs, followed by a recognition test. The test required subjects to recognize songs, melodies, or texts and consisted of five types of items: (a) exact songs heard in the presentation; (b) new songs; (c) old tunes with new words; (d) new tunes with old words; and (e) old tunes with old words of a different song from the same presentation (‘mismatch songs’). Experiment 1 supported the integration hypothesis: Subjects' recognition of components was higher in exact songs (a) than in songs with familiar but mismatched components (e). Melody recognition, in particular, was near chance unless the original words were present. Experiment 2 showed that this integration of melody and text occurred also across different performance renditions of a song and that it could not be eliminated by voluntary attention to the melody.

2.
Several findings showed that semantic information is more likely to be retrieved from recognised faces than from recognised voices. Earlier experiments, which investigated the recall of biographical information following person recognition, used stimuli that were pre-experimentally familiar to the participants, such as famous people's voices and faces. We propose an alternative method to compare the participants' ability to associate semantic information with faces and voices. The present experiments allowed a very strict control of frequency of exposure to pre-experimentally unfamiliar faces and voices and ensured the absence of identity clues in the spoken extracts. In Experiment 1 semantic information was retrieved from the presentation of a name. In Experiment 2 semantic and lexical information was retrieved from faces and/or voices. A memory advantage for faces over voices was again observed.

4.
Listeners can perceive a person's age from their voice with above-chance accuracy. Studies have usually established this by asking listeners to directly estimate the age of unfamiliar voices. The recordings used mostly include cross-sectional samples of voices, including people of different ages to cover the age range of interest. Such cross-sectional samples likely include not only cues to age in the sound of the voice but also socio-phonetic cues, encoded in how a person speaks. How age perception accuracy is affected when minimizing socio-phonetic cues by sampling the same voice at different time points remains largely unknown. Similarly, with the voices in age perception studies being usually unfamiliar to listeners, it is unclear how familiarity with a voice affects age perception. We asked listeners who were either familiar or unfamiliar with a set of four voices to complete an age discrimination task: listeners heard two recordings of the same person's voice, recorded 15 years apart, and were asked to indicate in which recording the person was younger. Accuracy for both familiar and unfamiliar listeners was above chance. While familiarity advantages were apparent, accuracy was not particularly high: familiar and unfamiliar listeners were correct on 68.2% and 62.7% of trials, respectively (chance = 50%). Familiarity furthermore interacted with the voices included. Overall, our findings indicate that age perception from voices is not a trivial task at all times, even when listeners are familiar with a voice. We discuss our findings in light of how reliable the voice may be as a signal for age.
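As an illustrative aside, whether a discrimination accuracy such as the 68.2% reported above reliably exceeds the 50% chance level is typically checked with a one-sided binomial test. The sketch below shows the standard computation; the trial count of 100 is a hypothetical illustration, not a figure taken from the study.

```python
from math import comb

def binomial_p_above_chance(correct, trials, p_chance=0.5):
    """One-sided binomial test: probability of observing `correct`
    or more successes in `trials` trials under chance responding."""
    return sum(comb(trials, k) * p_chance**k * (1 - p_chance)**(trials - k)
               for k in range(correct, trials + 1))

# Hypothetical example: 68 correct out of 100 trials (cf. ~68.2% accuracy).
p = binomial_p_above_chance(68, 100)
print(f"p = {p:.5f}")
```

With 68/100 correct the tail probability is far below .05, so chance-level responding would be rejected; with 50/100 the test (correctly) gives no evidence against chance.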

5.
The author investigated voice context effects in recognition memory for words spoken by multiple talkers by comparing performance when studied words were repeated with same, different, or new voices at test. Hits and false alarms increased when words were tested with studied voices compared with unstudied voices. Discrimination increased only when the exact same voice was used. A trend toward conservatism in response bias was observed when test words switched to increasingly unfamiliar voices. Taken together, the overall findings suggest that the voice-specific attributes of individual talkers are preserved in long-term memory. Implications for the role of instance-specific matching and voice-specific familiarity processes and the nature of spoken-word representation are discussed.
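For readers unfamiliar with this style of analysis, hit and false-alarm rates are conventionally converted into a sensitivity index (d') and a response-bias criterion (c) via the inverse-normal transform; higher d' means better discrimination, and positive c indicates conservative responding. The sketch below shows the standard computation with hypothetical rates, not the study's data.

```python
from statistics import NormalDist

def dprime_and_criterion(hit_rate, fa_rate):
    """Compute sensitivity (d') and response bias (c) from hit and
    false-alarm rates using the inverse normal (z) transform."""
    z = NormalDist().inv_cdf
    d_prime = z(hit_rate) - z(fa_rate)
    criterion = -0.5 * (z(hit_rate) + z(fa_rate))
    return d_prime, criterion

# Hypothetical rates for same-voice vs. new-voice test conditions.
d_same, c_same = dprime_and_criterion(0.85, 0.20)
d_new, c_new = dprime_and_criterion(0.75, 0.25)
print(f"same voice: d' = {d_same:.2f}, c = {c_same:.2f}")
print(f"new voice:  d' = {d_new:.2f}, c = {c_new:.2f}")
```

In this toy example the same-voice condition yields a larger d', mirroring the pattern of improved discrimination for exact voice repetitions.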

6.
In three experiments, the effects of exposure to melodies on their subsequent liking and recognition were explored. In each experiment, the subjects first listened to a set of familiar and unfamiliar melodies in a study phase. In the subsequent test phase, the melodies were repeated, along with a set of distractors matched in familiarity. Half the subjects were required to rate their liking of each melody, and half had to identify the melodies they had heard earlier in the study phase. Repetition of the studied melodies was found to increase liking of the unfamiliar melodies in the affect task and to be best for detection of familiar melodies in the recognition task (Experiments 1, 2, and 3). These memory effects were found to fade at different time delays between study and test in the affect and recognition tasks, with the latter leading to the most persistent effects (Experiment 2). Both study-to-test changes in melody timbre and manipulation of study tasks had a marked impact on recognition and little influence on liking judgments (Experiment 3). Thus, all manipulated variables were found to dissociate the memory effects in the two tasks. The results are consistent with the view that memory effects in the affect and recognition tasks pertain to the implicit and explicit forms of memory, respectively. Part of the results are, however, at variance with the literature on implicit and explicit memory in the auditory domain. Attribution of these differences to the use of musical material is discussed.

7.
The patterns of perceptual asymmetry elicited by dichotic speech and complex pitch stimuli were evaluated in a group of 28 normal, right-handed subjects. As in previous studies, between 70 and 75% of the subjects showed a right-ear advantage for speech and left-ear advantage for pitch. However, less than half of the subjects (46%) showed the expected pattern on both tests. It is argued that the assumption of symmetrical, contralateral auditory pathway superiority during dichotic stimulation is only appropriate in roughly half of the dextral population. In the remaining half, significant subcortical asymmetries and/or a lack of contralateral advantage appear to be present. The assessment of complementary cortical functions should provide a way to reduce the confounding of cortical and subcortical contributions to auditory perceptual asymmetries, and thus provide a more accurate behavioral index of brain organization.

8.
Generating stimuli at encoding typically improves memory for occurrence (item memory) but might disrupt memory for order. In three experiments, the relationship between generation and order memory was examined by using familiar stimuli, which give rise to the standard generation advantage in item memory, and unfamiliar stimuli, which do not. The participants generated or read words and non-words in Experiments 1 and 2 and familiar and unfamiliar word compounds in Experiment 3. For the familiar stimuli, generation enhanced item memory (as measured by recognition) but disrupted performance on the order-reconstruction test. For the unfamiliar stimuli, generation produced no recognition advantage and yet persisted in disrupting order reconstruction. Thus, the positive effects of generation on item memory were dissociated from its negative impact on order memory.

9.
Three independent groups of 8 subjects listened monaurally to a randomized list of 50 common and 50 rare words and responded ‘yes’ to rare words. Responses were analyzed for correct identifications of rare words. A 50:50 group heard an equal number of words in right and left ears and gave a small but non-significant right-ear superiority. A 66:34 group heard nearly twice as many words in the right ear as the left and gave a significant right-ear superiority. A 34:66 group heard nearly twice as many words in the left ear as the right and gave a significant left-ear superiority. Responses were also analyzed according to signal-detection theory. Implications are discussed for theories of ear advantages.

10.
By most theories of lexical access, idiosyncratic aspects of speech (such as voice details) are considered noise and are filtered in perception. However, episodic theories suggest that perceptual details are stored in memory and mediate later perception. By this view, perception and memory are intimately linked. The present investigation tested this hypothesis by creating symmetric illusions, using words and voices. In two experiments, listeners gave reduced noise estimates to previously heard words, but only when the original voices were preserved. Conversely, in two recognition memory experiments, listeners gave increased old responses to words (or voices) presented in relatively soft background noise. The data suggest that memory can be mistaken for perceptual fluency, and perceptual fluency can be mistaken for memory. The data also underscore the role of detailed episodes in lexical access.

11.
Three aspects of voice recognition were investigated in the study reported here: memory for familiar voices, memory for the words spoken, and the relative effects of length and variation in a voice extract on long- and short-term memory. In Experiment 1, recognition memory for the briefly heard voice of a stranger was superior with longer extracts (p<0.01), but increasing vowel variety did not improve performance. This pattern was repeated for short-term memory (p<0.01) in Experiment 2. Scores for the above task correlated significantly (p<0.05) with scores for recognizing well-known voices. In a further test of well-known voice memory in Experiment 3, a weak and non-significant positive correlation (r=0.29) was found between memory for well-known voices and memory for a once-heard voice. Memory for the words spoken did not correlate significantly with memory for the unknown voice itself. The possibilities of a memory-for-voices general ability, and forensic applications of the findings are discussed. © 1997 by John Wiley & Sons, Ltd.

12.
These experiments addressed why, in episodic-memory tests, familiar faces are recognized better than unfamiliar faces. Memory for faces of well-known public figures and unfamiliar persons was tested, not only with old/new recognition tests, in which initially viewed faces were discriminated from distractors, but also with tests of memory for specific information. These included: detail recall, in which a masked feature had to be described; orientation recognition, in which discrimination between originally seen faces and mirror-image reversals was required; and recognition and recall of labels for the public figures. Experiments 1 and 2 showed that memory for orientation and featural details was not robustly related either to facial familiarity or to old/new recognition rates. Experiment 3 showed that memory for labels was not the exclusive determinant of the famous-face advantage in recognition, since famous faces were highly recognizable even when they were not labelable or when labels were forgotten. These results suggest that the familiarity effect, and face recognition in general, may reflect a nonverbal memory representation that is relatively abstract.

13.
This study examined the relation between selective attention, perception, and memory factors in the generation of auditory asymmetries. Sixty subjects were randomly assigned to one of three dichotic listening groups. One group was presented with paired linguistic stimuli, a second group with dichotic nonverbal material, while a third group heard randomly interspersed verbal and nonverbal pairs. Order of ear report was controlled in all three groups. Significant right ear advantages on first and second reports were found in the verbal group, and a similar pattern of left ear advantages was found in the nonverbal group. This ear by material dissociation was only found on second ear reports in the group which heard the randomly interspersed pairs. No first report ear advantages were evident in the latter group. These results are discussed in terms of the independence of perceptual and memory mechanisms in the production of auditory asymmetries.

14.
Dichotic pairs of musical sounds were presented to 16 right-handed subjects, who were instructed to depress a reaction-time (RT) button when a target sound occurred in either ear. Four blocks of 36 trials were presented. During the first block, RTs to left-ear targets were significantly faster than those to right-ear targets. There were no significant ear differences during the second, third, or fourth blocks. Possible explanations for the limited duration of the left-ear advantage, and its implications for models proposed to explain the basis of RT asymmetries, are discussed.

15.
Adaptation to male voices causes a subsequent voice to be perceived as more female, and vice versa. Similar contrastive aftereffects have been reported for phonetic perception, and in vision for face perception. However, while aftereffects in the perception of phonetic features of speech have been reported to persist even when adaptors were processed inattentively, face aftereffects were previously reported to be abolished by inattention to adaptors. Here we demonstrate that auditory aftereffects of adaptation to voice gender are eliminated when the male and female adaptor voices are spatially unattended. Participants simultaneously heard gender-specific male or female adaptor voices in one ear and gender-neutral (androgynous) adaptor voices in the contralateral ear. They selectively attended to the adaptor voices in a designated ear, by either classifying voice gender (Exp. 1) or spoken syllable (Exp. 2). Voice aftereffects were found only if the gender-specific voices were spatially attended, suggesting capacity limits in the processing of voice gender for the unattended ear. Remarkably, gender-specific adaptors in the attended ear elicited comparable aftereffects in test voices, regardless of prior attention to voice gender or phonetic content. Thus, within the attended ear, voice gender was processed even when it was irrelevant for the task at hand, suggesting automatic processing of gender along with linguistic information. Overall, voice gender adaptation requires spatial, but not dimensional, selective attention.

16.
Voices, in addition to faces, enable person identification. Voice recognition has been shown to evoke a distributed network of brain regions that includes, in addition to the superior temporal sulcus (STS), the anterior temporal pole, fusiform face area (FFA), and posterior cingulate gyrus (pCG). Here we report an individual (MS) with acquired prosopagnosia who, despite bilateral damage to much of this network, demonstrates the ability to distinguish voices of several well‐known acquaintances from voices of people that he has never heard before. Functional magnetic resonance imaging (fMRI) revealed that, relative to speech‐modulated noise, voices rated as familiar and unfamiliar by MS elicited enhanced haemodynamic activity in the left angular gyrus, left posterior STS, and posterior midline brain regions, including the retrosplenial cortex and the dorsal pCG. More interestingly, relative to noise and unfamiliar voices, the familiar voices elicited greater haemodynamic activity in the left angular gyrus and medial parietal regions including the dorsal pCG and precuneus. The findings are consistent with theories implicating the pCG in recognizing people who are personally familiar, and furthermore suggest that the pCG region of the voice identification network is able to make functional contributions to voice recognition even though other areas of the network, namely the anterior temporal poles, FFA, and the right parietal lobe, may be compromised.

17.
The present study was designed to examine age differences in the ability to use voice information acquired intentionally (Experiment 1) or incidentally (Experiment 2) as an aid to spoken word identification. Following both implicit and explicit voice learning, participants were asked to identify novel words spoken either by familiar talkers (ones they had been exposed to in the training phase) or by 4 unfamiliar voices. In both experiments, explicit memory for talkers' voices was significantly lower in older than in young listeners. Despite this age-related decline in voice recognition, however, older adults exhibited equivalent, and in some cases greater, benefit than young listeners from having words spoken by familiar talkers. Implications of the findings for age-related changes in explicit versus implicit memory systems are discussed.

18.
We investigated the effects of different encoding tasks and of manipulations of two supposedly surface parameters of music on implicit and explicit memory for tunes. In two experiments, participants were first asked either to categorize the instrument or to judge the familiarity of 40 short, unfamiliar tunes. Subsequently, participants were asked to give explicit and implicit memory ratings for a list of 80 tunes, which included the 40 previously heard. Half of the 40 previously heard tunes differed in timbre (Experiment 1) or tempo (Experiment 2) in comparison with the first exposure. A third experiment compared similarity ratings of the tunes that varied in timbre or tempo. Analysis of variance (ANOVA) results suggest first that the encoding task made no difference for either memory mode. Secondly, timbre and tempo change both impaired explicit memory, whereas tempo change additionally made implicit tune recognition worse. Results are discussed in the context of implicit memory for nonsemantic materials and the possible differences in timbre and tempo in musical representations.

19.
Subjects were monaurally presented with consonant-vowel syllables to the right or left ear, with or without simultaneous noise to the other ear. The subject's task on each trial was to indicate whether or not the item presented was the target item /ba/. A right-ear advantage in reaction time was obtained: 14 msec for target items, 6 msec for nontarget items. The size of this effect was comparable in the presence vs. absence of competing noise. No consistent individual differences were found in the size of the right-ear advantage for this task, although such differences were obtained in a dichotic-perception pretest. It is argued that data of this type do not permit inferences about the use of particular ear/hemisphere neural pathways.

20.
Groups of 2-, 3-, and 4-month-olds were tested for dichotic ear differences in memory-based phonetic and music timbre discriminations. A right-ear advantage for speech and a left-ear advantage (LEA) for music were found in the 3- and 4-month-olds. However, the 2-month-olds showed only the music LEA, with no reliable evidence of memory-based speech discrimination by either hemisphere. Thus, the responses of all groups to speech contrasts were different from those to music contrasts, but the pattern of the response dichotomy in the youngest group deviated from that found in the older infants. It is suggested that the quality or use of left-hemisphere phonetic memory may change between 2 and 3 months, and that the engagement of right-hemisphere specialized memory for musical timbre may precede that for left-hemisphere phonetic memory. Several directions for future research are suggested to determine whether infant short-term memory asymmetries for speech and music are attributable to acoustic factors, to different modes or strategies in perception, or to structural and dynamic properties of natural sound sources.
