Similar Documents
20 similar documents retrieved.
1.
Three experiments are reported involving the presentation of lists of letters or digits for immediate serial recall. The main variable was the presence or absence of a suffix-prefix: an item (a tick or a cross) occurring at the end of the list that had to be copied before recall of the stimulus list. With auditory stimuli and an auditory suffix-prefix there was a large and selective increase in the number of errors on the last few serial positions, the typical "suffix effect". The suffix effect was not found with auditory stimuli and a visual suffix-prefix, nor with visual stimuli and an auditory suffix-prefix. These results are interpreted as supporting the model of short-term memory proposed by Crowder and Morton (1969), in which it is suggested that, in serial recall, information about the final items of an auditorily presented list has a different, precategorical origin from information about the other items.

2.
Experiments were carried out on conditions affecting the successful recall of simple, non-verbalised auditory stimuli. The effectiveness of recall of a given stimulus was measured by the ability to judge the pitch of one stimulus relative to another presented after an interval of a given length. The results indicate that storage in memory of a simple auditory stimulus is possible, even for pairs of stimuli separated in time by more than 5 min. In all experiments (stimuli differing by two semitones or by one semitone, and equal in intensity or differing by ±25 dB), the error curve shows a sharp increase when the interval between the two stimuli reaches 80 sec. This sudden deterioration in the effectiveness of recall may be connected with some alteration of the mechanism of memory; it is postulated that this alteration is due to a "switch-over" from immediate memory to short-term memory. Analysis of the errors shows that in certain circumstances there is a marked preponderance of errors resulting from underestimation rather than from overestimation of the first stimulus. This preponderance appears with pairs of equal loudness, is even more marked when the first stimulus is softer than the second, and decreases when the first stimulus is louder than the second. These results suggest that in differentiating the pitches of stimuli (in the 700–2000 Hz band) presented one after the other at fixed intervals of time, a phenomenon analogous to the classic time error found in estimations of loudness is at work.

3.
Music provides a useful domain in which to study how the different attributes of complex multidimensional stimuli are processed both separately and in combination. Much research has addressed how the dimensions of pitch and time are co-processed in music listening tasks. Neuropsychological studies have provided evidence for a certain degree of independence between pitch and temporal processing, although there are also many experimental reports favouring interactive models. Here we extended these investigations by examining the processing of pitch and temporal structures when music is presented in the visual modality (i.e., as music notation). In two experiments, musician subjects were briefly presented with visual musical stimuli containing both pitch and temporal information, and they were subsequently required to recall both. In Experiment 1, we documented that concurrent, unattended pitch and rhythmic auditory interference stimuli disrupted the recall of pitch, but not time. In Experiment 2, we showed that manipulating the tonal structure of the visually presented stimuli affected the recall of pitch, but not time, whereas manipulating their metrical properties affected the recall of time and, to a certain extent, of pitch. Taken together, these results suggest that the processing of pitch is constrained by the processing of time, whereas the processing of time is not affected by the processing of pitch. The results support neither strong-independence nor interactive models of pitch and temporal processing, but they suggest that the processing of time can occur independently of the processing of pitch when performing a written recall task.

4.
The present study used a temporal bisection task to investigate whether music affects time estimation differently from a matched neutral auditory stimulus, and whether the emotional valence of the musical stimuli (i.e., sad vs. happy music) modulates this effect. The results showed that, compared with the sine-wave control music, music presented in a major (happy) or a minor (sad) key shifted the bisection function toward the right, thus increasing the bisection point value (the point of subjective equality). This indicates that the duration of a melody is judged shorter than that of a non-melodic control stimulus, confirming that "time flies" when we listen to music. Nevertheless, sensitivity to time was similar for all the auditory stimuli. Furthermore, the temporal bisection functions did not differ as a function of musical mode.
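For readers unfamiliar with the bisection measure, the sketch below shows one conventional way a bisection point (the point of subjective equality, PSE) can be estimated: fit a logistic psychometric function to the proportion of "long" responses and read off the duration at which that proportion reaches 0.5. The durations and response proportions are invented for illustration and are not data from this study.

```python
# Hypothetical illustration: estimating the bisection point (PSE) by fitting
# a logistic psychometric function. All numbers below are invented.
import numpy as np
from scipy.optimize import curve_fit

def logistic(t, pse, slope):
    """Probability of a 'long' response as a function of probe duration t (ms)."""
    return 1.0 / (1.0 + np.exp(-(t - pse) / slope))

durations = np.array([400, 500, 600, 700, 800, 900, 1000])     # probe durations (ms)
p_long = np.array([0.05, 0.15, 0.35, 0.55, 0.75, 0.90, 0.97])  # proportion judged "long"

(pse, slope), _ = curve_fit(logistic, durations, p_long, p0=[700.0, 100.0])
print(f"bisection point (PSE) = {pse:.0f} ms")
# A rightward shift of the whole function (as reported for melodies) raises the
# PSE: stimuli must last longer to be judged "long", i.e. subjective time shortens.
```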

5.
A number of explanations for the modality effect in immediate serial recall have been proposed. The auditory advantage for recall of recency items has been explained in terms of (1) the contribution of precategorical acoustic storage (PAS), (2) an advantage of changing-state over static stimuli, and (3) an advantage of primary-linguistic coding. Four experiments were conducted to evaluate these hypotheses. In the first, subjects viewed seven consecutive rectangles of different colors on a computer monitor. A small recency effect was obtained when the task was to recall the colors of the rectangles in order, and the size of the effect was independent of whether the rectangles remained stationary on the screen or moved in one of four directions. However, when the task was to recall the direction of movement of the rectangles, a larger recency effect was found. This pattern of results suggests that recency effects are enhanced by changing-state stimulus information, but only when the changing-state information serves to identify the stimulus. Experiments 2 and 3 provided converging evidence by demonstrating an analogous recency advantage for changing-state visual stimuli somewhat different from those of Experiment 1. Experiment 4 demonstrated recency effects with synthesized speech stimuli that were substantially greater than those found with the changing-state visual stimuli of the first three experiments. Implications of the results for the PAS, changing-state, and primary-linguistic hypotheses, as well as for temporal-distinctiveness theories of recency, are discussed.

6.
An oddball paradigm was used to examine how the cue signals carried by an "irrelevant stimulus" or an "irrelevant attribute" affect novelty distraction, both across the auditory-visual modalities and within the auditory modality. Experiments 1 and 2 compared three conditions (time-plus-event cueing, time cueing, and event cueing) in the auditory-visual cross-modal setting and the auditory setting, respectively. The results showed that (1) the novelty distraction effect disappeared under time cueing but persisted under event cueing: the impairment of target-task performance caused by unexpected changes in an irrelevant stimulus relates not only to the low probability and novelty of the deviant stimulus but also to the cueing relation between the distractor and the target, and within this relation the cueing function of event occurrence matters more than that of the time interval; and (2) novelty distraction in the auditory modality matched the cross-modal results: an irrelevant attribute can likewise act as a cue signal, and again the event-cueing function matters more than the time-cueing function.
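As background, the oddball paradigm named above presents a stream of frequent standard stimuli with rare deviants. Below is a minimal sketch of such a sequence; the trial count, deviant probability, and no-repeat constraint are invented example values, not the parameters of this study.

```python
# Illustrative sketch of an oddball trial sequence: frequent standards with
# rare, randomly interspersed deviants. All parameters are invented examples.
import random

def oddball_sequence(n_trials=100, p_deviant=0.15, seed=42):
    """Return a list of 'standard'/'deviant' labels, avoiding back-to-back
    deviants (a common constraint in oddball designs)."""
    rng = random.Random(seed)
    seq = []
    for _ in range(n_trials):
        if seq and seq[-1] == "deviant":
            seq.append("standard")  # never two deviants in a row
        else:
            seq.append("deviant" if rng.random() < p_deviant else "standard")
    return seq

seq = oddball_sequence()
print(seq[:20], "... deviants:", seq.count("deviant"))
```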

7.
Priming is a useful tool for ascertaining the circumstances under which previous experiences influence behavior. Previously, using hierarchical stimuli, we demonstrated (Justus & List, 2005) that selectively attending to one temporal scale of an auditory stimulus improved subsequent attention to a repeated (vs. changed) temporal scale; that is, we demonstrated intertrial auditory temporal level priming. Here, we extended those results to ask whether level priming relies on absolute or relative temporal information. Both relative and absolute temporal information are important in auditory perception: speech and music can be recognized across various temporal scales but become uninterpretable to a listener when presented too quickly or slowly. We first confirmed that temporal level priming generalizes to new temporal scales. Second, in the context of multiple temporal scales, we found that temporal level priming operates predominantly on the basis of relative, rather than absolute, temporal information. These findings are discussed in the context of expectancies and relational invariance in audition.

8.
In studies of auditory speech perception, participants are often asked to perform active tasks, e.g., to decide whether a perceived sound is a speech sound or not. However, information about the stimulus inherent in such tasks may induce expectations that alter activations not only in the auditory cortex but also in frontal areas such as the inferior frontal gyrus (IFG) and motor cortices, even in the absence of an explicit task. To investigate this, we presented spectral mixes of a flute sound with either vowels or specific musical instrument sounds (e.g., a trumpet) in an fMRI study, in combination with three different instructions. The instructions revealed either no information about stimulus features, or explicit information about the instrument or vowel features. The results demonstrated that, besides an involvement of posterior temporal areas, stimulus expectancy modulated in particular a network comprising the IFG and premotor cortices during this passive listening task.

9.
Walking to a pacing stimulus has proven useful in motor rehabilitation, and it has been suggested that spontaneous synchronization could be preferable to intentional synchronization. It is still unclear, however, whether the paced-walking effect can occur spontaneously or whether intentionality plays a role. The aim of this work is to analyze the effect of sound pacing on gait, with and without an instruction to synchronize and with different rhythmic auditory cues, while walking on a treadmill.

First, the baseline step frequency while walking on a treadmill was determined for all participants, followed by experimental sessions with both music and footstep sound cues. Participants were split into two groups, one instructed to synchronize their gait to the auditory stimuli and the other simply told to walk. Individual auditory cues were generated for each participant: in each trial, cues were provided at the participant's baseline walking frequency, at 5% and 10% above baseline, and at 5% and 10% below baseline.

This study's major finding was the role of intention in synchronization: only the instructed group synchronized their gait with the auditory cues. No differences were found between the effects of footstep and music stimuli on step frequency.

In conclusion, without intention or cues that direct the individual's attention, spontaneous gait synchronization does not occur during treadmill walking.
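The cue-rate manipulation is straightforward arithmetic; the sketch below, using an invented baseline cadence, derives the five cue rates (baseline, ±5%, ±10%) and the corresponding inter-cue intervals.

```python
# Hypothetical sketch of the cue-frequency manipulation described above:
# pacing cues at baseline cadence and at +/-5% and +/-10% of it.
# The baseline value is an invented example.
baseline_spm = 110.0  # baseline cadence in steps per minute (example value)

for factor in (0.90, 0.95, 1.00, 1.05, 1.10):
    cue_spm = baseline_spm * factor
    interval_s = 60.0 / cue_spm  # inter-cue interval in seconds
    print(f"{(factor - 1) * 100:+.0f}%: {cue_spm:.1f} steps/min "
          f"(cue every {interval_s:.3f} s)")
```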

10.
Prior studies have observed selective neural responses in the adult human auditory cortex to music and speech that cannot be explained by the differing lower-level acoustic properties of these stimuli. Does infant cortex exhibit similarly selective responses to music and speech shortly after birth? To answer this question, we attempted to collect functional magnetic resonance imaging (fMRI) data from 45 sleeping infants (2.0 to 11.9 weeks old) while they listened to monophonic instrumental lullabies and infant-directed speech produced by a mother. To match acoustic variation between music and speech sounds we (1) recorded music from instruments that had a spectral range similar to that of female infant-directed speech, (2) used a novel excitation-matching algorithm to match the cochleagrams of the music and speech stimuli, and (3) synthesized "model-matched" stimuli that were matched in spectrotemporal modulation statistics to (yet perceptually distinct from) music or speech. Of the 36 infants from whom we collected usable data, 19 had significant activations to sounds overall compared to scanner noise. In these infants, we observed a set of voxels in non-primary auditory cortex (NPAC), but not in Heschl's Gyrus, that responded significantly more to music than to each of the other three stimulus types (but not significantly more strongly than to the background scanner noise). In contrast, our planned analyses did not reveal voxels in NPAC that responded more to speech than to model-matched speech, although other, unplanned analyses did. These preliminary findings suggest that music selectivity arises within the first month of life. A video abstract of this article can be viewed at https://youtu.be/c8IGFvzxudk.

Research Highlights

  • Responses to music, speech, and control sounds matched for the spectrotemporal modulation statistics of each sound were measured from 2- to 11-week-old sleeping infants using fMRI.
  • Auditory cortex was significantly activated by these stimuli in 19 out of 36 sleeping infants.
  • Selective responses to music compared to the three other stimulus classes were found in non-primary auditory cortex but not in nearby Heschl's Gyrus.
  • Selective responses to speech were not observed in planned analyses but were observed in unplanned, exploratory analyses.

11.
Two studies investigated the effects of same-modality interference on the immediate serial recall of auditorily and visually presented stimuli. Typically, research using this task has been conducted in quiet rooms, excluding auditory information extraneous to the auditorily presented stimuli; however, visual information, such as background items clearly within the subject's view, has not been excluded during visual presentation. Therefore, in both of the present studies, the authors used procedures that eliminated extra-list visual interference and introduced extra-list auditory interference. When same-modality interference was eliminated, weak visual recency effects were found, but they were smaller than those generated by auditorily presented items. Further, mid-list and end-of-list recall of visually presented stimuli was unaffected by the amount of interfering visual information. On the other hand, the introduction of auditory interference increased mid-list recall of auditory stimuli. The results of Experiment 2 showed that the mid-list effect occurred with a moderate, but not with a minimal or maximal, level of auditory interference, indicating that moderate amounts of auditory interference had an alerting effect not present with typical visual interference.

12.
Using appropriate stimuli to evoke emotions is especially important for emotion research. Psychologists have provided several standardized affective stimulus databases: the International Affective Picture System (IAPS) and the Nencki Affective Picture System (NAPS) for visual stimuli, and the International Affective Digitized Sounds (IADS) and the Montreal Affective Voices for auditory stimuli. However, given the limitations of the existing auditory stimulus databases, research using auditory stimuli remains limited compared with studies using visual stimuli. First, the number of sample sounds is small, making it difficult to equate stimuli across emotional conditions and semantic categories. Second, some artificially created materials (music or human voice) may fail to accurately drive the intended emotional processes. Our principal aim was to expand the existing auditory affective sample databases to cover natural sounds more fully. We asked 207 participants to rate 935 sounds (including the sounds from the IADS-2) using the Self-Assessment Manikin (SAM) and three basic-emotion rating scales. The results showed that emotions in sounds can be distinguished on the affective rating scales, and the stability of the evaluations indicates that we have provided a larger corpus of natural, emotionally evocative auditory stimuli covering a wide range of semantic categories. Our expanded, standardized sound sample database may promote a wide range of research on auditory systems and their possible interactions with other sensory modalities, encouraging direct, reliable comparisons of outcomes from different researchers in the field of psychology.
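As a minimal illustration (not the authors' pipeline), per-sound affective norms from 9-point SAM ratings are typically compiled as means and standard deviations across raters; the sound names and ratings in the sketch below are invented.

```python
# Minimal sketch of compiling per-sound affective norms from 9-point SAM
# ratings (valence, arousal, dominance). All data are invented examples.
import numpy as np

# ratings[sound_id] -> list of (valence, arousal, dominance) tuples per rater
ratings = {
    "rain_01":   [(6, 3, 5), (7, 2, 6), (6, 3, 5)],
    "scream_04": [(2, 8, 3), (1, 9, 2), (2, 8, 4)],
}

for sound, rows in ratings.items():
    arr = np.array(rows, dtype=float)
    mean = arr.mean(axis=0)
    sd = arr.std(axis=0, ddof=1)  # sample SD across raters
    print(f"{sound}: valence {mean[0]:.2f}±{sd[0]:.2f}, "
          f"arousal {mean[1]:.2f}±{sd[1]:.2f}, "
          f"dominance {mean[2]:.2f}±{sd[2]:.2f}")
```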

13.
The primary linguistic theory of Shand and Klima (1981) hypothesizes that stimuli that cannot be directly processed without recoding are not in the subject's primary linguistic mode and thus should produce weaker recency and associated suffix effects. In three experiments, different normal-hearing subjects learned to pair American Sign Language (ASL) stimuli, visual "quasivocables" (QVs; word-like letter strings), and auditory QVs with common English words. In the first experiment, the subjects were given sequences of ASL or QV stimuli and required to recall the associated words in strict serial order. In two other experiments, involving auditory and visual presentation respectively, subjects who had never been given paired-associate training were required to recall the English words previously associated with the ASL and QV stimuli, in a standard suffix paradigm. The results showed recency and suffix effects only with auditorily presented QVs and words. Contrary to the predictions of the primary linguistic hypothesis, greater recency and larger suffix effects were present with the auditory QVs than with the auditory words, although the QVs were not primary linguistic and the task involved forced recoding. Previous results showing recency with ASL stimuli in normal subjects were not replicated. It is concluded that recency and suffix effects are related neither to the subject's primary linguistic mode nor to stimulus recoding, as we and Shand and Klima have defined them.

14.
Although music and dance are often experienced simultaneously, it is unclear what modulates their perceptual integration. This study investigated how two factors related to music–dance correspondences influenced audiovisual binding of their rhythms: the metrical match between the music and dance, and the kinematic familiarity of the dance movement. Participants watched a point-light figure dancing synchronously to a triple-meter rhythm that they heard in parallel, whereby the dance communicated a triple (congruent) or a duple (incongruent) visual meter. The movement was either the participant's own or that of another participant. Participants attended to both streams while detecting a temporal perturbation in the auditory beat. The results showed lower sensitivity to the auditory deviant when the visual dance was metrically congruent to the auditory rhythm and when the movement was the participant's own. This indicated stronger audiovisual binding and a more coherent bimodal rhythm in these conditions, thus making a slight auditory deviant less noticeable. Moreover, binding in the metrically incongruent condition involving self-generated visual stimuli was correlated with self-recognition of the movement, suggesting that action simulation mediates the perceived coherence between one's own movement and a mismatching auditory rhythm. Overall, the mechanisms of rhythm perception and action simulation could inform the perceived compatibility between music and dance, thus modulating the temporal integration of these audiovisual stimuli.
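Sensitivity to a deviant in detection tasks of this kind is conventionally indexed by signal-detection d′, i.e. z(hit rate) minus z(false-alarm rate); whether this study used d′ is not stated here, so the sketch below, with invented counts, is offered only to illustrate the measure.

```python
# Hedged sketch: sensitivity (d') to a timing deviant, computed from hit and
# false-alarm counts per condition. All counts are invented examples.
from scipy.stats import norm

def d_prime(hits, misses, fas, crs):
    """Signal-detection d' with a log-linear correction (add 0.5 to each
    count) to avoid infinite z-scores at rates of 0 or 1."""
    hit_rate = (hits + 0.5) / (hits + misses + 1.0)
    fa_rate = (fas + 0.5) / (fas + crs + 1.0)
    return norm.ppf(hit_rate) - norm.ppf(fa_rate)

# Invented counts: congruent meter yields lower sensitivity, consistent with
# stronger audiovisual binding making the deviant harder to notice.
print("congruent   d' =", round(d_prime(hits=28, misses=12, fas=8, crs=32), 2))
print("incongruent d' =", round(d_prime(hits=34, misses=6, fas=7, crs=33), 2))
```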

15.
Bandler and Grinder's hypothesis that eye movements reflect sensory processing was examined. Twenty-eight volunteers first memorized and then recalled visual, auditory, and kinesthetic stimuli. Changes in eye position during recall were videotaped and categorized by two raters into the positions hypothesized by Bandler and Grinder's model to represent visual, auditory, and kinesthetic recall. Planned contrast analyses suggested that visual stimulus items, when recalled, elicited significantly more upward eye positions and stares than auditory and kinesthetic items. Auditory and kinesthetic items, however, did not elicit more of the eye positions hypothesized by the model to represent auditory and kinesthetic recall, respectively.

16.
Emotional events tend to be retained more strongly than other everyday occurrences, a phenomenon partially regulated by the neuromodulatory effects of arousal. Two experiments demonstrated the use of relaxing music as a means of reducing arousal levels, thereby counteracting the heightened long-term recall of an emotional story. In Experiment 1, participants (N = 84) viewed a slideshow while listening to either an emotional or a neutral narration, and were exposed to relaxing music or no music. Retention was tested 1 week later via a forced-choice recognition test. Retention of both the emotional content (Phase 2 of the story) and the material presented immediately after it (Phase 3) was enhanced relative to retention of the neutral story. Relaxing music prevented the enhancement for material presented after the emotional content (Phase 3). Experiment 2 (N = 159) provided further support for the neuromodulatory effect of music through post-event presentation of both relaxing music and non-relaxing auditory stimuli (arousing music/background sound). Free recall of the story was assessed immediately afterwards and 1 week later. Relaxing music significantly reduced recall of the emotional story (Phase 2). The findings provide further insight into the capacity of relaxing music to attenuate the strength of emotional memory, offering support for the therapeutic use of music for such purposes.

17.
Multisensory-mediated auditory localization
Multisensory integration is a powerful mechanism for maximizing sensitivity to sensory events. We examined its effects on auditory localization in healthy human subjects. The specific objective was to test whether the relative intensity and location of a seemingly irrelevant visual stimulus would influence auditory localization in accordance with the inverse-effectiveness and spatial rules of multisensory integration developed from neurophysiological studies in animals [Stein and Meredith, 1993, The Merging of the Senses (Cambridge, MA: MIT Press)]. Subjects were asked to localize a sound while a neutral visual stimulus was presented either above threshold (supra-threshold) or at threshold; in both cases the spatial disparity of the visual and auditory stimuli was systematically varied. The results reveal that stimulus salience is a critical factor in determining the effect of a neutral visual cue on auditory localization. Visual bias, and hence perceptual translocation of the auditory stimulus, appeared when the visual stimulus was supra-threshold, regardless of its location. This was not the case when the visual stimulus was at threshold: there, the influence of the visual cue was apparent only when the two cues were spatially coincident, and it resulted in an enhancement of stimulus localization. These data suggest that the brain uses multiple strategies to integrate multisensory information.
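The inverse-effectiveness rule invoked above is commonly quantified as the percent enhancement of the combined-modality response over the best unisensory response, following Stein and Meredith (1993); the response values in this sketch are invented.

```python
# Sketch of the standard multisensory enhancement index: percent change of the
# combined (visual+auditory) response relative to the best unisensory response.
def enhancement(combined, best_unisensory):
    """Multisensory enhancement (%): ((CM - SMmax) / SMmax) * 100."""
    return (combined - best_unisensory) / best_unisensory * 100.0

# Inverse effectiveness: weaker unisensory responses yield proportionally
# larger enhancement when the cues are combined. Values below are invented.
print(enhancement(combined=18.0, best_unisensory=10.0))  # strong stimuli: +80%
print(enhancement(combined=6.0, best_unisensory=2.0))    # weak stimuli: +200%
```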

18.
Seven experiments studied whether irrelevant visual stimuli (stimulus suffixes) would interfere with immediate serial recall of visually presented supraspan lists of digits. Across experiments, a wide range of conditions was run, varying in method of presentation (sequential or simultaneous), rate of list presentation, and presence or absence of articulatory suppression. In no condition did a visual suffix have a significant detrimental effect on recall. These results stand in marked contrast to those found when auditory lists and suffixes have been used.

19.
Multisensory integration is a process whereby information converges from different sensory modalities to produce a response that is different from that elicited by the individual modalities presented alone. A neural basis for multisensory integration has been identified within a variety of brain regions, but the most thoroughly examined model has been that of the superior colliculus (SC). Multisensory processing in the SC of anaesthetized animals has been shown to depend on the physical parameters of the individual stimuli presented (e.g., intensity, direction, velocity) as well as their spatial relationship. However, it is unknown whether these stimulus features are important, or even evident, in the awake behaving animal. To address this question, we evaluated the influence of the physical properties of sensory stimuli (visual intensity, direction, and velocity; auditory intensity and location) on the sensory activity and multisensory integration of SC neurons in awake, behaving primates. Monkeys were trained to fixate a central visual fixation point while visual and/or auditory stimuli were presented in the periphery. Visual stimuli were always presented within the contralateral receptive field of the neuron, whereas auditory stimuli were presented at either ipsi- or contralateral locations. Many of the SC neurons responsive to these sensory stimuli (n = 66/84; 76%) had stronger responses when the visual and auditory stimuli were combined at contralateral locations than when the auditory stimulus was located on the ipsilateral side. This trend was significant across the population of auditory-responsive neurons. In addition, some SC neurons (n = 31) were presented with a battery of tests in which the quality of one stimulus of a pair was systematically manipulated. A small proportion of these neurons (n = 8/31; 26%) showed preferential responses to stimuli with specific physical properties, and these preferences were not significantly altered when multisensory stimulus combinations were presented. These data demonstrate that multisensory processing in the awake behaving primate is influenced by the spatial congruency of the stimuli as well as by their individual physical properties.

20.
Experiment I investigated memory in a concept identification (CI) problem as a function of the number of trials preceding the recall task. Recall performance on the initial trials of CI problems was quite good but declined rapidly when the recall test was given on later trials. It was pointed out that the bulk of solutions to CI problems are obtained by Ss during the initial trials, when recall for past stimuli is still good. In Experiment II, recall by Ss in a normal CI problem was compared with recall by Ss in an incidental-learning control group. As the performance of Ss in the normal CI problem was significantly better, it was concluded that Ss actively try to store and retain information during their search for a solution.
