Similar articles
20 similar articles retrieved (search time: 31 ms).
1.
A hypothesis based on the psycholinguistic derivation of sentences was tested. The task required that sentences temporarily stored in memory be transformed and spoken under delayed auditory feedback. An instruction to repeat the stimulus sentence verbatim or to transform it grammatically was placed either immediately before or immediately after the sentence. Two measures of interference were derived from the syllable rates of the response times. The different grammatical forms showed significant variation in the amount of interference, but the amount of interference appeared to relate primarily to the length of the transformed sentence rather than to its derivational complexity. The interaction between the position of the instruction and the sentence-generation task is discussed in terms of the underlying memory functions involved, and a further test is proposed.

2.
Primitive processes involved in auditory stream formation were measured with an indirect, objective method. A target melody interleaved with a distractor sequence was followed by a probe melody that was either identical to the target or differed from it by 2 notes. Listeners decided whether the probe melody had been present in the composite sequence. Interleaved melody recognition was not possible when distractor sequences had the same mean frequency as the target melodies and maximum contour crossover with them. Performance increased with mean frequency separation and timbral dissimilarity and was unaffected by the duration of the silent interval between the composite sequence and the probe melody. The relation between this indirect task, which measures the interleaved melody recognition boundary, and direct judgments, which measure the fission boundary, is discussed.

3.
In three experiments, the effects of exposure to melodies on their subsequent liking and recognition were explored. In each experiment, the subjects first listened to a set of familiar and unfamiliar melodies in a study phase. In the subsequent test phase, the melodies were repeated, along with a set of distractors matched in familiarity. Half the subjects were required to rate their liking of each melody, and half had to identify the melodies they had heard earlier in the study phase. Repetition of the studied melodies was found to increase liking of the unfamiliar melodies in the affect task and to most benefit detection of the familiar melodies in the recognition task (Experiments 1, 2, and 3). These memory effects faded over different delays between study and test in the affect and recognition tasks, with the latter yielding the more persistent effects (Experiment 2). Both study-to-test changes in melody timbre and manipulation of the study tasks had a marked impact on recognition but little influence on liking judgments (Experiment 3). Thus, all of the manipulated variables dissociated the memory effects in the two tasks. The results are consistent with the view that memory effects in the affect and recognition tasks reflect implicit and explicit forms of memory, respectively. Some of the results, however, are at variance with the literature on implicit and explicit memory in the auditory domain. The attribution of these differences to the use of musical material is discussed.

4.
Three experiments were conducted to study motor programs used by expert singers to produce short tonal melodies. Each experiment involved a response-priming procedure in which singers prepared to sing a primary melody but on 50% of trials had to switch and sing a different (secondary) melody instead. In Experiment 1, secondary melodies in the same key as the primary melody were easier to produce than secondary melodies in a different key. Experiment 2 showed that it was the initial note rather than key per se that affected production of secondary melodies. In Experiment 3, secondary melodies involving exact transpositions were easier to sing than secondary melodies with a different contour than the primary melody. Also, switches between the keys of C and G were easier than those between C and E. Taken together, these results suggest that the initial note of a melody may be the most important element in the motor program, that key is represented in a hierarchical form, and that melodic contour is represented as a series of exact semitone offsets.

5.
Unlike the visual stimuli used in most object identification experiments, melodies are organized temporally rather than spatially. Therefore, they may be particularly sensitive to manipulations of the order in which information is revealed. Two experiments examined whether the initial elements of a melody are differentially important for identification. Initial exposures to impoverished versions of a melody significantly decreased subsequent identification, especially when the early exposures did not include the initial notes of the melody. Analyses of the initial notes indicated that they are differentially important for melody identification because they help the listener detect the overall structure of the melody. Confusion errors tended to be songs that either were drawn from the same genre or shared similar phrasing. These data indicate that conceptual processing influences melody identification, that phrase-level information is used to organize melodies in semantic memory, and that phrase-level information is required to effectively search semantic memory.

6.
Three processes have been identified as central to object identification: top-down processing, bottom-up processing, and lateral competition. Six experiments using the perceptual interference paradigm were conducted to assess the relative contributions of these three processes to melody identification. Significant interference was observed only when the target and the distracting information were difficult to distinguish both perceptually and conceptually. Lateral competition (the activation of specific distractor melodies) did not influence the magnitude of interference observed. These results suggest that bottom-up and top-down processes contribute more to melody identification than does lateral competition. The data are discussed in terms of the broader literature on object identification and the relationship between identifying melodies, spoken words, and visual objects.

7.
Bimodal format effects in working memory
This work combines presentation formats to test whether bimodal conditions offer advantages or disadvantages relative to single formats in working memory performance. A dual task that included recall of 3 or 6 items while verifying the accuracy of math sentences was used in 2 experiments. When comparisons were made between single- and dual-format conditions, there was an advantage for items presented simultaneously and individually as spoken words and pictures. In Experiment 2, dual-format conditions contained incongruent information, and spoken words were found to interfere with recall of long sequences of pictures and printed words. The findings suggest that when dual-format items are the same, there are some performance advantages when spoken words are combined with pictures or printed words. When the dual formats display different items, however, spoken words are a more powerful distractor than pictures or printed words, and verbal and visual short-term stores can demonstrate similar susceptibility to distractor interference.

8.
Effects of presentation modality and response format were investigated using visual and auditory versions of the word stem completion task. Study presentation conditions (visual, auditory, non-studied) were manipulated within participants, while test conditions (visual/written, visual/spoken, auditory/written, auditory/spoken, recall-only) were manipulated between participants. Results showed evidence for same-modality and cross-modality priming on all four word stem completion tasks. Words from the visual study list led to comparable levels of priming across all test conditions. In contrast, words from the auditory study list led to relatively low levels of priming in the visual/written test condition and high levels of priming in the auditory/spoken test condition. Response format was found to influence priming performance following auditory study in particular. The findings confirm and extend previous research and suggest that, for implicit memory studies that require auditory presentation, it may be especially beneficial to use spoken rather than written responses.

9.
This experiment investigates the possibility that two different kinds of imagery codes are used in sentence memory, one involving moving images (kinetic imagery) and the other involving stationary images (static imagery). Using a modality-specific interference task, it was shown that only sentences involving kinetic imagery were affected by the visual interference task; neither static nor low-imagery sentences were so affected. The results are interpreted as showing that some kind of imaginal code is used in memory, but that different kinds of code are available. It is claimed that this result is inconsistent both with Paivio's (1971) ‘dual-coding’ hypothesis and with propositional accounts of sentence memory.

10.
Two experiments investigated participants’ recognition memory for word content while varying vocal characteristics, and for vocal characteristics alone. In Experiment 1, participants performed an auditory recognition task in which they identified whether a spoken word was “new”, “old” (repeated word, repeated voice), or “similar” (repeated word, new voice). Results showed that word recognition accuracy was lower for similar trials than for old trials. In Experiment 2, participants performed an auditory recognition task in which they identified whether a phrase was spoken in an old or a new voice, with repetitions occurring after a variable number of intervening stimuli. Results showed that recognition accuracy was lower when old voices spoke an alternate message than when they spoke a repeated message, and that accuracy decreased as a function of the number of intervening items. Overall, the results suggest that speech recognition is better for lexical content than for vocal characteristics alone.

11.
Episodic recognition of novel and familiar melodies was examined by asking participants to make judgments about the recency and frequency of presentation of melodies over the course of two days of testing. For novel melodies, recency judgments were poor and participants often confused the number of presentations of a melody with its day of presentation; melodies heard frequently were judged as having been heard more recently than they actually were. For familiar melodies, recency judgments were much more accurate and the number of presentations of a melody helped rather than hindered performance. Frequency judgments were generally more accurate than recency judgments and did not demonstrate the same interaction with musical familiarity. Overall, these findings suggest that (1) episodic recognition of novel melodies is based more on a generalized "feeling of familiarity" than on a specific episodic memory, (2) frequency information contributes more strongly to this generalized memory than recency information does, and (3) the formation of an episodic memory for a melody depends either on the overall familiarity of the stimulus or on the availability of a verbal label.

12.
Past research has suggested that the disruptive effect of altered auditory feedback depends on how structurally similar the sequence of feedback events is to the planned sequence of actions. Three experiments pursued one basis for similarity in musical keyboard performance: matches between sequential transitions in spatial targets for movements and the melodic contour of auditory feedback. Trained pianists and musically untrained persons produced simple tonal melodies on a keyboard while hearing feedback sequences that either matched the planned melody or were contour-preserving variations of that melody. Sequence production was disrupted among pianists when feedback events were serially shifted by one event, similarly for shifts of planned melodies and tonal variations but less so for shifts of atonal variations. Nonpianists were less likely to be disrupted by serial shifts of variations but showed similar disruption to pianists for shifts of the planned melody. Thus, transitional properties and tonal schemata may jointly determine perception-action similarity during musical sequence production, and the tendency to generalize from a planned sequence to variations of it may develop with the acquisition of skill.

13.
This study investigated the effects of imagining speaking aloud, sensorimotor feedback, and auditory feedback on respondents' reports of having spoken aloud, and examined the relationship between “spoken aloud” responses in the reality-monitoring task and the sense of agency over speech. After speaking aloud, lip-synching, or imagining speaking, participants were asked whether each word had actually been spoken. The number of “spoken aloud” endorsements was higher for words spoken aloud than for those lip-synched, and higher for words lip-synched than for those merely imagined as spoken. When white noise prevented participants from receiving auditory feedback, the discriminability of words spoken aloud decreased, and when auditory feedback was altered, reports of having spoken aloud decreased even though participants had actually spoken. Participants who had experienced auditory-hallucination-like phenomena were also less able than those without such experiences to discriminate the words spoken aloud, suggesting that endorsements of having “spoken aloud” in the reality-monitoring task reflect a sense of agency over speech. These results are explained in terms of the source-monitoring framework, and a revised forward model of speech is proposed for investigating auditory hallucinations.

14.
Research on emotion processing in the visual modality suggests a processing advantage for emotionally salient stimuli, even at early sensory stages; however, results concerning the auditory correlates are inconsistent. We present two experiments that employed a gating paradigm to investigate emotional prosody. In Experiment 1, participants heard successively building segments of Jabberwocky “sentences” spoken with happy, angry, or neutral intonation. After each segment, participants indicated the emotion conveyed and rated their confidence in their decision. Participants in Experiment 2 also heard Jabberwocky “sentences” in successive increments, with half discriminating happy from neutral prosody, and half discriminating angry from neutral prosody. Participants in both experiments identified neutral prosody more rapidly and accurately than happy or angry prosody. Confidence ratings were greater for neutral sentences, and error patterns also indicated a bias for recognising neutral prosody. Taken together, results suggest that enhanced processing of emotional content may be constrained by stimulus modality.

15.
Four experiments examined the effects of encoding multiple standards in a temporal generalization task in the visual and auditory modalities, both singly and cross-modally, using stimulus durations ranging, across different experiments, from 100 to 1,400 ms. Previous work has shown that encoding and storing multiple auditory standards of different durations results in systematic interference with the memory of the standard, characterized by a shift in the location of peak responding; this result, from Ogden, Wearden, and Jones (2008), was replicated in the present Experiment 1. Experiment 2 employed the basic procedure of Ogden et al. using visual stimuli and found that encoding multiple visual standards did not lead to performance deterioration or any evidence of systematic interference between the standards. Experiments 3 and 4 examined potential cross-modal interference. When two standards of different modalities and durations were encoded and stored together, there was also no evidence of interference between the two. Taken together, these results, and those of Ogden et al., suggest that, in humans, visual temporal reference memory may be more permanent than auditory reference memory and that auditory and visual temporal information do not mutually interfere in reference memory.

16.
Children's perception of scale and contour in melodies was investigated in five studies. Experimental tasks included judging transposed renditions of melodies (Studies 1 and 3), discriminating between transposed renditions of a melody (Study 2), judging contour-preserving transformations of melodies (Study 4), and judging the similarity to a familiar target melody of transformations preserving rhythm or rhythm and contour (Study 5). The first and second studies showed that young children detect key transposition changes even in familiar melodies and that they perceive similarity over key transpositions even in unfamiliar melodies. Young children are also sensitive to melodic contour over transformations that preserve it (Study 5), yet they distinguish spontaneously between melodies with the same contour but different intervals (Study 4). The key distance effect reported in the literature did not occur in the tasks of this investigation (Studies 1 and 3); it may be apparent only for melodies shorter or more impoverished than those used here.

17.
A recent study using a crossmodal matching task showed that the identity of a talker could be recognized even when the auditory and visual stimuli that were being matched were different sentences spoken by the talker. This finding implies that general temporal features of a person's speech are shared across the auditory and visual modalities.

18.
This study used a dual-task paradigm to analyze the time course of motor resonance during the comprehension of action language. In the study, participants read sentences describing a transfer either away from themselves (“I threw the tennis ball to my rival”) or toward themselves (“My rival threw me the tennis ball”). When the transfer verb appeared on the screen, and after a variable stimulus onset asynchrony (SOA), a visual motion cue (Experiment 1) or a static cue (Experiment 2) prompted participants to move their hand either away from or toward themselves to press a button. The results showed meaning–action interference at short SOAs and facilitation at the longest SOA for the matching conditions. These results support the hypothesis that motor processes associated with the comprehension of action-related language interfere with an overlapping motor task, whereas they facilitate a delayed motor task. These effects are discussed in terms of resonance processes in the motor cortex.

19.
In noisy situations, visual information plays a critical role in the success of speech communication: listeners are better able to understand speech when they can see the speaker. Visual influence on auditory speech perception is also observed in the McGurk effect, in which discrepant visual information alters listeners’ auditory perception of a spoken syllable. When hearing /ba/ while seeing a person saying /ga/, for example, listeners may report hearing /da/. Because these two phenomena have been assumed to arise from a common integration mechanism, the McGurk effect has often been used as a measure of audiovisual integration in speech perception. In this study, we test whether this assumed relationship exists within individual listeners. We measured participants’ susceptibility to the McGurk illusion as well as their ability to identify sentences in noise across a range of signal-to-noise ratios in audio-only and audiovisual modalities. Our results do not show a relationship between listeners’ McGurk susceptibility and their ability to use visual cues to understand spoken sentences in noise, suggesting that McGurk susceptibility may not be a valid measure of audiovisual integration in everyday speech processing.

20.
Much research has explored developing sound representations in language, but less work addresses developing representations of other sound patterns. This study examined preschool children's musical representations using two different tasks: discrimination and sound–picture association. Melodic contour, a musically relevant property, and instrumental timbre, which is (arguably) less musically relevant, were tested. In Experiment 1, children failed to associate cartoon characters with melodies that had maximally different pitch contours, with no advantage for melody preexposure. Experiment 2 also used different-contour melodies and found good discrimination, whereas association was at chance. Experiment 3 replicated Experiment 2, but with a large timbre change instead of a contour change. Here, discrimination and association were both excellent. Preschool-aged children may have stronger or more durable representations of timbre than of contour, particularly in more difficult tasks. Reasons for the weaker association of contour than of timbre information are discussed, along with implications for auditory development.
