20 similar documents found
1.
Karl W. Sandberg, Scandinavian Journal of Psychology, 1990, 31(4): 302-314
A highly systematic relation between the conditional probability of recognition given recall and the overall recognition hit rate has been demonstrated in a wide variety of experiments. A function describing this relationship was developed by Tulving & Wiseman (1975). Exceptions to this function have, in retrospect, been interpreted in terms of (a) low integration between cue and target items, or (b) high cue overlap between the two tests involved: recognition and recall. The experiment reported here was designed to evaluate the joint and separate contributions of integration and cue overlap to conformity with, and deviation from, the Tulving-Wiseman function. In line with the predictions, the results showed that these two factors in combination can account for data that fit the function and for exceptions above the function. In relative terms, the contribution from integration was somewhat more pronounced than that from cue overlap.
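For reference, the relation in question is usually stated as a quadratic function of the overall recognition hit rate; the formulation below is the one commonly cited for Tulving & Wiseman (1975), with the conventional constant, and is given here only as a reminder of the functional form:

```latex
% Tulving-Wiseman function: probability of recognition given recall as a
% quadratic function of the overall recognition hit rate P(Rn).
% The constant c is conventionally reported as approximately 0.5.
\[
P(\mathrm{Rn} \mid \mathrm{Rc}) \;=\; P(\mathrm{Rn}) + c\,\bigl[\,P(\mathrm{Rn}) - P(\mathrm{Rn})^{2}\,\bigr],
\qquad c \approx 0.5
\]
```

Points falling below this curve correspond to more recognition failure of recallable words than the function predicts; points above it correspond to less.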
2.
The present paper addresses the problems of whether recognition failure of recallable words is a function of both recognition and recall, and whether recognition failure is restricted to a small and specifiable subset of study items. A meta-analysis of the Nilsson-Gardiner database (Nilsson & Gardiner, 1993) showed that recognition given recall was positively correlated with recognition and negatively correlated with recall. Two new experiments are reported, the first one using 48 word pairs for which recognition failure was found in previous studies. An item analysis of the data demonstrated that recognition failure occurred primarily with noun-adjective pairs. The second experiment compared Norwegian-American and American-Norwegian name pairs. Wide deviation from the Tulving-Wiseman function (Tulving & Wiseman, 1975) was observed for the latter condition. In both conditions, recognition failure occurred only with the items for which the beginnings of the names shared three or more letters. It is concluded that recognition failure occurs when there exists a relationship between the members of an A-B pair that is independent of their pairing in the study context. The Tulving-Wiseman function is the result of collapsing across items in the analysis of previous studies.
3.
In an extension of Muter's (1978) research, subjects studied pairs of lowercase cues and uppercase targets consisting of famous names (e.g., betsy ROSS), nonfamous names (e.g., edwin CONWAY), weakly related words (e.g., grasp BABY), and unrelated words (e.g., art GO). Following recognition tests in which surname and word targets were tested in the absence of their cues, cued recall tests for the surname and word targets were given. In the semantic recognition and recall tests, the response to a surname was to be made solely on the basis of its fame, regardless of whether or not it had appeared in the study list. In the episodic memory tests, the response to a surname was to be made solely on the basis of whether or not it had appeared in the study list, regardless of its fame. In all tests, the response to a nonname was to be made solely on the basis of whether or not it had appeared in the study list. The Tulving-Wiseman (1975) function accurately predicted recognition failure rates for famous surnames, whether or not they were from the study list and whether the test was episodic or semantic, and for targets from the weakly related word pairs. However, recognition failure rates were lower than the Tulving-Wiseman function predicted for nonfamous surnames in the episodic memory test and for targets from unrelated word pairs. Discussion focuses on the implications of these results for the nature of the Tulving-Wiseman function and for the psychological reality of the episodic-semantic memory distinction.
4.
In three experiments, the effects of exposure to melodies on subsequent liking and recognition were explored. In each experiment, the subjects first listened to a set of familiar and unfamiliar melodies in a study phase. In the subsequent test phase, the melodies were repeated, along with a set of distractors matched in familiarity. Half the subjects were required to rate their liking of each melody, and half had to identify the melodies they had heard earlier in the study phase. Repetition of the studied melodies was found to increase liking of the unfamiliar melodies in the affect task and to benefit detection of the familiar melodies most in the recognition task (Experiments 1, 2, and 3). These memory effects faded over different study-test delays in the affect and recognition tasks, with the recognition task showing the more persistent effects (Experiment 2). Both study-to-test changes in melody timbre and manipulation of the study tasks had a marked impact on recognition and little influence on liking judgments (Experiment 3). Thus, all manipulated variables dissociated the memory effects in the two tasks. The results are consistent with the view that memory effects in the affect and recognition tasks reflect implicit and explicit forms of memory, respectively. Some of the results are, however, at variance with the literature on implicit and explicit memory in the auditory domain. Attribution of these differences to the use of musical material is discussed.
5.
N. Virji-Babul, A. Moiseev, W. Sun, T. Feng, N. Moiseeva, K.J. Watt, M. Huotilainen, Brain and Cognition, 2013
The brain mechanisms that subserve music recognition remain unclear despite increasing interest in this process. Here we report the results of a magnetoencephalography experiment to determine the temporal dynamics and spatial distribution of brain regions activated during listening to a familiar and an unfamiliar instrumental melody in control adults and adults with Down syndrome (DS). In the control group, listening to the familiar melody relative to the unfamiliar melody revealed early and significant activations in the left primary auditory cortex, followed by activity in limbic and sensory-motor regions and, finally, activation in motor-related areas. In the DS group, listening to the familiar melody relative to the unfamiliar melody revealed significantly increased activations in only three regions. Activity began in the left primary auditory cortex and the superior temporal gyrus and was followed by enhanced activity in the right precentral gyrus. These data suggest that familiar music is associated with auditory-motor coupling but does not activate brain areas involved in emotional processing in DS. These findings provide new insights into the neural basis of music perception in DS as well as the temporal course of neural activity in control adults.
6.
In comparison with other modalities, the recognition of emotion in music has received little attention. An unexplored question is whether and how emotion recognition in music changes as a function of ageing. In the present study, healthy adults aged between 17 and 84 years (N=114) judged the extent to which a set of musical excerpts (Vieillard et al., 2008) expressed happiness, peacefulness, sadness and fear/threat. The results revealed emotion-specific age-related changes: advancing age was associated with a gradual decrease in responsiveness to sad and scary music from middle age onwards, whereas the recognition of happiness and peacefulness, both positive emotional qualities, remained stable from young adulthood to older age. Additionally, the number of years of music training was associated with more accurate categorisation of the musical emotions examined here. We argue that these findings are consistent with two accounts of how ageing might influence the recognition of emotions: motivational changes towards positivity and, to a lesser extent, selective neuropsychological decline.
7.
Paul Muter, Memory & Cognition, 1978, 6(1): 9-12
In an experiment in which there was no study phase, 54 subjects were tested for recognition of famous surnames and then were tested for cued recall of the same surnames. Subjects failed to recognize 53.4% of names that they subsequently recalled. Recall was significantly higher than recognition. The relationship between overall recognition rate and recognition rate of recallable words closely resembled that reported by Tulving and Wiseman (1975) for episodic memory experiments. The present data therefore extend the generality of this relationship, and of the principle that the probability of retrieval from memory depends critically on the cues provided. It is argued that the similarity between results for episodic memory experiments and the present semantic memory experiment can be more parsimoniously accommodated by tagging theory than by episodic theory.
8.
Karl Sandberg, Scandinavian Journal of Psychology, 1988, 29(3-4): 129-136
The experiment reported here was conducted to study whether the phenomenon of recognition failure of recallable words would hold for a paradigm involving a free recall test rather than the cued recall test used in previous research. The 3×2 design comprised two between-subjects factors: the subjects were instructed that a recognition test, a cued recall test, or a free recall test would follow the study trial, but the actual test sequence given was recognition followed by cued recall, or recognition followed by free recall. The results demonstrate that cases of recognition failure of recallable words do occur in all six conditions, but the amount of recognition failure for the recognition-free recall test sequence was less than that predicted from the Tulving & Wiseman (1975) function. In line with previous research, the data for the recognition-cued recall test sequence showed the amount of recognition failure predicted by this function.
9.
Three experiments explored online recognition in a nonspeech domain, using a novel experimental paradigm. Adults learned to associate abstract shapes with particular melodies, and at test they identified a played melody's associated shape. To implicitly measure recognition, visual fixations to the associated shape versus a distractor shape were measured as the melody played. Degree of similarity between associated melodies was varied to assess what types of pitch information adults use in recognition. Fixation and error data suggest that adults naturally recognize music, like language, incrementally, computing matches to representations before melody offset, despite the fact that music, unlike language, provides no pressure to execute recognition rapidly. Further, adults use both absolute and relative pitch information in recognition. The implicit nature of the dependent measure should permit use with a range of populations to evaluate postulated developmental and evolutionary changes in pitch encoding.
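As a rough illustration of how this kind of implicit fixation measure is commonly summarised (a minimal sketch, not the authors' analysis; the column names, bin width, and random data are assumptions for illustration), gaze samples can be binned over the course of the melody and the proportion of fixations to the associated shape versus the distractor computed:

```python
import numpy as np
import pandas as pd

# Hypothetical gaze data: one row per eye-tracking sample, with time from
# melody onset (ms) and which shape was fixated ("target", "distractor",
# or "neither"). All names and values here are illustrative assumptions.
gaze = pd.DataFrame({
    "time_ms": np.random.randint(0, 4000, size=5000),
    "fixated": np.random.choice(["target", "distractor", "neither"], size=5000),
})

BIN_MS = 200  # assumed bin width from melody onset
gaze["bin"] = (gaze["time_ms"] // BIN_MS) * BIN_MS

# Proportion of samples on the associated (target) shape versus the
# distractor, restricted to samples that landed on either shape.
on_shapes = gaze[gaze["fixated"].isin(["target", "distractor"])]
prop_target = (
    on_shapes.groupby("bin")["fixated"]
    .apply(lambda s: (s == "target").mean())
)

print(prop_target.head())  # values above 0.5 indicate a target preference
```

The time course of this proportion, relative to melody offset, is what supports the incremental-recognition claim above.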
10.
Young and old adults' ability to recognize emotions from vocal expressions and music performances was compared. The stimuli consisted of (a) acted speech (anger, disgust, fear, happiness, and sadness; each posed with both weak and strong emotion intensity), (b) synthesized speech (anger, fear, happiness, and sadness), and (c) short melodies played on the electric guitar (anger, fear, happiness, and sadness; each played with both weak and strong emotion intensity). The listeners' recognition of discrete emotions and emotion intensity was assessed and the recognition rates were controlled for various response biases. Results showed emotion-specific age-related differences in recognition accuracy. Old adults consistently received significantly lower recognition rates for negative, but not for positive, emotions for both speech and music stimuli. Some age-related differences were also evident in the listeners' ratings of emotion intensity. The results show the importance of considering individual emotions in studies on age-related differences in emotion recognition.
11.
Mirjam J. van Tricht, Harriet M.M. Smeding, Johannes D. Speelman, Ben A. Schmand, Brain and Cognition, 2010
Music has the potential to evoke strong emotions and plays a significant role in the lives of many people. Music might therefore be an ideal medium to assess emotion recognition. We investigated emotion recognition in music in 20 patients with idiopathic Parkinson's disease (PD) and 20 matched healthy volunteers. The role of cognitive dysfunction and other disease characteristics in emotion recognition was also evaluated.
12.
Kenneth Junge, Scandinavian Journal of Psychology, 1996, 37(2): 172-182
The graph space of P(RN | RL) vs. P(RN), the probabilities of recognition given recall and of overall recognition, is the setting for the Tulving-Wiseman (TW) function of recognition failure research. According to Hintzman (1991, 1992), the moderate scatter of data points about the TW curve is an artefactual regularity caused by a mathematical constraint when P(RN) < P(RL). However, both constrained and unconstrained points (when P(RN) ≥ P(RL)) conform equally well to the TW function, consistent with the previously unnoted fact that the location of both kinds of points is determined by the same mathematical rule. Hintzman's claim that there is no regularity in the data plot when P(RN) < P(RL) other than that produced by the constraint is not supported by this study. He based his claim on an incorrect use of the measure of dependence (association) called gamma. The graph space corresponding to gamma is that of P(RN | RL) vs. P(RN | nRL), as shown by using the Bayes function (Bayes' theorem). The margin-free measure gamma is a function of two thetas, theta being a margin-sensitive measure of dependence that is the parameter of the Bayes function. The variance of gamma reflects the fact that it is compounded of the theta variances, so a margin-free measure is obtained at the expense of greater variability.
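To make the quantities being contrasted here concrete, a standard 2×2 recognition-by-recall layout can be used; the cell labels below are assumptions for illustration (not necessarily Junge's own notation), and gamma is written in its usual 2×2 form:

```latex
% Standard 2x2 recognition-by-recall layout (cell proportions a, b, c, d;
% labels assumed for illustration, not necessarily Junge's notation):
%
%                        recognized (RN)    not recognized (nRN)
%   recalled (RL)               a                    b
%   not recalled (nRL)          c                    d
%
% Margin-sensitive conditional probabilities and margins:
\[
P(\mathrm{RN} \mid \mathrm{RL}) = \frac{a}{a+b}, \qquad
P(\mathrm{RN} \mid \mathrm{nRL}) = \frac{c}{c+d}, \qquad
P(\mathrm{RN}) = a + c, \qquad
P(\mathrm{RL}) = a + b
\]
% Goodman-Kruskal gamma (equal to Yule's Q in the 2x2 case) is margin-free:
\[
\gamma = \frac{ad - bc}{ad + bc}
\]
```

The TW plot lives in the P(RN | RL) vs. P(RN) space, which depends on the margins, whereas gamma is defined over the margin-free pair P(RN | RL) vs. P(RN | nRL); this is the mismatch the abstract attributes to Hintzman's use of gamma.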
13.
The phenomenon of recognition failure of recallable words shows a remarkable regularity across a wide variety of experimental conditions. A quadratic function, referred to as the Tulving-Wiseman function, summarizes this regularity. A few cases of deviation from this function have been identified and classified into two categories of exceptions. An experiment was designed to deal with one of these categories, namely the exception that occurs because of poor integration between cue and target information of studied word pairs. An index based on confidence ratings of recall responses was developed to assess variability in integration. Poor integration was demonstrated especially for a single presentation of low-associative word pairs, and significant deviations from the function were obtained for this condition. Hintzman's (1991, 1992) hypothesis about mathematical constraints on the Tulving-Wiseman function was discussed and refuted. Finally, an interpretation of negative deviations from the Tulving-Wiseman function was proposed.
14.
Zarlino, one of the most important music theorists of the sixteenth century, described the minor consonances as 'sweet' (dolci) and 'soft' (soavi) (Zarlino, 1558/1983, On the Modes, New Haven, CT: Yale University Press). Hector Berlioz, in his Treatise on Modern Instrumentation and Orchestration (London: Novello, 1855), speaks of the 'small acid-sweet voice' of the oboe. In line with this tradition of describing musical concepts in terms of taste words, recent empirical studies have found reliable associations between taste perception and low-level sound and musical parameters, such as pitch and phonetic features. Here we investigated whether taste words elicit consistent musical representations by asking trained musicians to improvise on the basis of the four canonical taste words: sweet, sour, bitter, and salty. Our results showed that, even in free improvisation, taste words elicited very reliable and consistent musical patterns: 'bitter' improvisations are low-pitched and legato (without interruption between notes), 'salty' improvisations are staccato (notes sharply detached from each other), 'sour' improvisations are high-pitched and dissonant, and 'sweet' improvisations are consonant, slow, and soft. Interestingly, projecting the improvisations into musical space (a vector space defined by relevant musical parameters) revealed that improvisations based on different taste words were nearly orthogonal or opposite. Decoding methods could classify improvisations in binary comparisons (i.e., identify which of two taste words had elicited a melody) with around 80% accuracy, well above chance. In a second experiment we investigated the mapping from perception of music to taste words. Fifty-seven non-musicians listened to a subset of the improvisations and identified the taste word that had elicited each improvisation with high accuracy. Our results show that associations between taste and music go beyond basic sensory attributes into the domain of semantics, and they open a new avenue of investigation into the origins of these consistent taste-music patterns.
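As a rough sketch of the kind of decoding analysis described above (not the authors' pipeline; the feature set, the chosen pair of taste words, and the random data are assumptions for illustration), each improvisation can be represented as a vector of musical parameters and a linear classifier cross-validated on binary taste-word contrasts:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

# Hypothetical feature matrix: one row per improvisation, columns are assumed
# musical parameters (e.g., mean pitch, articulation index, dissonance, tempo,
# loudness). Labels are the taste words that elicited each improvisation.
rng = np.random.default_rng(0)
X = rng.normal(size=(80, 5))                     # 80 improvisations x 5 features
labels = np.repeat(["sweet", "sour", "bitter", "salty"], 20)

# Binary decoding for one pair of taste words, evaluated by cross-validation.
pair = np.isin(labels, ["sweet", "bitter"])
clf = LogisticRegression(max_iter=1000)
scores = cross_val_score(clf, X[pair], labels[pair], cv=5)
print(f"sweet vs. bitter decoding accuracy: {scores.mean():.2f}")
```

With real feature vectors in place of the random data, the mean cross-validated accuracy is the quantity reported as "around 80%" in the abstract.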
15.
16.
17.
Tempo is one factor frequently associated with the expressive character of a piece of music. Composers often indicate the tempo of a piece through numerical markings (beats per minute) and subjective terms (adagio, allegro). Three studies were conducted to assess whether listeners were able to make consistent judgments about tempo that varied from piece to piece. Listeners heard short extracts of Scottish music played at a range of tempi and were asked to make a two-alternative forced choice of "too fast" or "too slow" for each extract. The responses for each study were plotted as the proportion of "too fast" responses as a function of tempo for each piece, and cumulative normal curves were fitted to each data set. The point where these curves cross 0.5 is the tempo at which the music sounds right to the listeners, referred to as the optimal tempo. The results from each study show that listeners are capable of making consistent tempo judgments and that the optimal tempo varies across extracts. The results also revealed that rhythm plays a role, but not the only role, in these tempo judgments.
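The curve-fitting step described above amounts to fitting a cumulative-normal psychometric function and reading off its 50% point; a minimal sketch under assumed data (tempi and response proportions invented for illustration, not taken from the studies) might look like this:

```python
import numpy as np
from scipy.optimize import curve_fit
from scipy.stats import norm

# Hypothetical data for one extract: tempi tested (beats per minute) and the
# proportion of "too fast" responses observed at each tempo.
tempi = np.array([60, 80, 100, 120, 140, 160], dtype=float)
p_too_fast = np.array([0.05, 0.15, 0.40, 0.65, 0.85, 0.95])

def cum_normal(t, mu, sigma):
    """Cumulative-normal psychometric function."""
    return norm.cdf(t, loc=mu, scale=sigma)

(mu, sigma), _ = curve_fit(cum_normal, tempi, p_too_fast, p0=[110.0, 20.0])

# The fitted curve crosses 0.5 at its mean, i.e. the "optimal tempo" for
# this extract in the sense used above.
print(f"optimal tempo ~ {mu:.1f} BPM (slope parameter sigma = {sigma:.1f})")
```

The fitted mean gives the optimal tempo, and the slope parameter indexes how sharply judgments switch from "too slow" to "too fast" around it.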
18.
H. G. Wallbott, Zeitschrift für experimentelle und angewandte Psychologie, 1989, 36(1): 138-161
An attempt is made to study the impact of visual information on the perception of music by employing (rock) music videos as stimuli. Forty music videos were presented to judges, who either saw the video or only heard the respective pieces of music. They had to judge the emotions conveyed via the piece of music/video on scales and their overall impression on semantic differential scales. Results indicate that visual information as presented in music videos has considerable effects on impressions: When pieces of music are presented as music videos, more positive emotions are attributed, while presentation of the pieces of music alone resulted in more negative emotion attributions. Thus, music videos seem to "euphorize" the recipient. Furthermore, video presentation compared to presentation of music alone evoked more intense "complexity/interest" as well as "activity" judgments, while "evaluation" judgments were not influenced by the medium of presentation. In addition, a number of presentation factors (like the speed of the music or the number of cuts in the videos) do influence impressions. This leads to the conclusion that researchers should pay more attention to such microcharacteristics of stimuli. In general, the effects found to be due to the medium of presentation are independent of the effects on judgments due to content (i.e., presentation factors).
19.
A priming technique was employed to study the relations between melody and lyrics in song memory. The procedure involved the auditory presentation of a prime and a target taken from the same song, or from unrelated but equally familiar songs. To promote access to memory representations of songs, we varied the format of primes and targets, which were either spoken or sung, using the syllable /la/. In each of the four experiments, a prime taken from the same song as the target facilitated target recognition, independently of the format in which it occurred. The facilitation effects were also found in conditions close to masked priming, because prime recognizability was very low, as assessed in Experiment 1 by d' measures. Above all, backward priming effects were observed in Experiments 2, 3, and 4, where the within-song order was reversed in the prime-target sequence, suggesting that the words and tones of songs are not connected by strict temporal contingencies. Rather, the results indicate that, in song memory, text and tune are related by tight connections that are bidirectional and automatically activated by relatively abstract information. Rhythmic similarity between linguistic stress pattern and musical meter might account for these priming effects.
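Since prime recognizability is indexed here by d', the standard signal-detection computation is worth recalling; the sketch below uses the textbook formula with illustrative numbers, not the study's data:

```python
from scipy.stats import norm

def d_prime(hit_rate: float, false_alarm_rate: float) -> float:
    """Standard signal-detection sensitivity: z(hits) - z(false alarms)."""
    return norm.ppf(hit_rate) - norm.ppf(false_alarm_rate)

# Illustrative values only (not taken from the study): near-chance prime
# recognition yields a d' close to zero, i.e. very low recognizability.
print(d_prime(0.55, 0.50))  # ~0.13
```

A d' near zero for the primes is what licenses calling these conditions "close to masked priming".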
20.