Similar Articles
20 similar articles were retrieved.
1.
In four experiments, we examined the role of auditory transients and auditory short-term memory in perceiving changes in a complex auditory scene comprising multiple auditory objects. Participants were presented with pairs of complex auditory scenes, each composed of up to four animal calls delivered in free field, and were instructed to decide whether the two scenes were the same or different (Experiments 1, 2, and 4). Changes to the second scene consisted of either the addition or the deletion of one animal call. Contrary to intuitive predictions based on results from the visual change blindness literature, substantial deafness to the change emerged regardless of whether the scenes were separated by 500 msec of masking white noise or by 500 msec of silence (Experiment 1). In fact, change deafness was not even modulated by having the two scenes presented contiguously (i.e., a 0-msec interval) or separated by 500 msec of silence (Experiments 2 and 4). This result suggests that change-related auditory transients played little or no role in change detection in complex auditory scenes. Instead, the main determinant of auditory change perception (and auditory change deafness) appears to have been the capacity of auditory short-term memory (Experiments 3 and 4). Taken together, these findings indicate that the intuitive parallels between visual and auditory change perception should be reconsidered.

2.
Recognition memory was investigated for individual frames extracted from temporally continuous, visually rich film segments of 5–15 min. Participants viewed a short clip from a film in either a coherent or a jumbled order, followed by a recognition test of studied frames. Foils came either from an earlier or a later part of the film (Experiment 1) or from deleted segments selected from random cuts of varying duration (0.5 to 30 s) within the film itself (Experiment 2). When the foils came from an earlier or later part of the film (Experiment 1), recognition was excellent, with the hit rate far exceeding the false-alarm rate (.78 vs. .18). In Experiment 2, recognition was far worse, with the hit rate (.76) exceeding the false-alarm rate only for foils drawn from the longest cuts (15 and 30 s) and matching the false-alarm rate for the 5-s segments. When the foils were drawn from the briefest cuts (0.5 and 1.0 s), the false-alarm rate exceeded the hit rate. Unexpectedly, jumbling had no effect on recognition in either experiment. These results are consistent with the view that memory for complex, temporally extended visual events is excellent, with its integrity unperturbed by disruption of the global structure of the visual stream. Disruption of memory was observed only when foils were drawn from embedded segments of duration less than 5 s, an outcome consistent with the view that memory at these shortest durations is consolidated with expectations drawn from the preceding stream.
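The hit and false-alarm rates quoted above (.78 vs. .18) are simple proportions of "old" responses to studied frames and to foils, respectively. The minimal Python sketch below shows that arithmetic; the trial counts are hypothetical, chosen only to produce rates of roughly the reported size, and are not the article's data.

```python
# Minimal sketch: hit and false-alarm rates for an old/new recognition test.
# The trial data below are hypothetical, not taken from the article.

def hit_and_false_alarm_rates(trials):
    """trials: list of (is_studied, said_old) booleans, one pair per test item."""
    hits = sum(1 for studied, said_old in trials if studied and said_old)
    targets = sum(1 for studied, _ in trials if studied)
    false_alarms = sum(1 for studied, said_old in trials if not studied and said_old)
    foils = sum(1 for studied, _ in trials if not studied)
    return hits / targets, false_alarms / foils

# Hypothetical counts: 50 studied frames (39 called "old"), 50 foils (9 called "old").
trials = ([(True, True)] * 39 + [(True, False)] * 11
          + [(False, True)] * 9 + [(False, False)] * 41)
hit_rate, fa_rate = hit_and_false_alarm_rates(trials)
print(f"hit rate = {hit_rate:.2f}, false-alarm rate = {fa_rate:.2f}")  # 0.78 vs 0.18
```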

3.
Two experiments were conducted using both behavioral and event-related brain potential (ERP) methods to examine conceptual priming effects for realistic auditory scenes and for auditory words. Prime and target sounds were presented in four stimulus combinations: Sound–Sound, Word–Sound, Sound–Word, and Word–Word. Within each combination, targets were conceptually related to the prime, unrelated, or ambiguous. In Experiment 1, participants were asked to judge whether the primes and targets fit together (explicit task), and in Experiment 2 they had to decide whether the target was typical or ambiguous (implicit task). In both experiments and in all four stimulus combinations, reaction times were longer and/or error rates higher, and the N400 component was larger, for ambiguous targets than for conceptually related targets, thereby pointing to a common conceptual system for processing auditory scenes and linguistic stimuli in both explicit and implicit tasks. However, fine-grained analyses also revealed some differences between experiments and conditions in the scalp topography and duration of the priming effects, possibly reflecting differences in the integration of perceptual and cognitive attributes of linguistic and nonlinguistic sounds. These results have clear implications for the construction of virtual environments that need to convey meaning without words.

4.
Memory for the positions of objects in natural scenes was investigated. Participants viewed an image of a real-world scene (preview scene), followed by a target object in isolation (target probe), followed by a blank screen with a mouse cursor. Participants estimated the position of the target using the mouse. Three conditions were compared. In the target present preview condition, the target object was present in the scene preview. In the target absent preview condition, the target object was not present in the scene preview. In the no preview condition, no preview scene was displayed. Localization accuracy in the target present preview condition was reliably higher than that in the target absent preview condition, which in turn was reliably higher than accuracy in the no preview condition. These data demonstrate that participants can remember both the spatial context of a scene and the specific positions of local objects.

5.
Most conceptions of episodic memory hold that reinstatement of encoding operations is essential for retrieval success, but the specific mechanisms of retrieval reinstatement are not well understood. In three experiments, we used saccadic eye movements as a window for examining reinstatement in scene recognition. In Experiment 1, participants viewed complex scenes while the number of study fixations was controlled using a gaze-contingent paradigm. In Experiment 2, effects of stimulus saliency were minimized by directing participants' eye movements during study. At test, participants made remember/know judgments for each recognized stimulus scene. Both experiments showed that remember responses were associated with more consistent study–test fixations than were false rejections (Experiments 1 and 2) and know responses (Experiment 2). In Experiment 3, we examined the causal role of gaze consistency in retrieval by manipulating participants' expectations during recognition. After studying name–scene pairs, participants saw each test scene preceded by either the same name as at study or a different one. Participants made more consistent eye movements following a matching rather than a mismatching scene name. Taken together, these findings suggest that explicit recollection is a function of perceptual reconstruction and that event memory influences gaze control in this active reconstruction process.

6.
7.
In four experiments, we explored the accuracy of memory for human action using displays with continuous motion. In Experiment 1, a desktop virtual environment was used to visually simulate ego-motion in depth, as would be experienced by a passenger in a car. Using a task very similar to that employed in typical studies of representational momentum, we probed the accuracy of memory for an instantaneous point in space/time, finding a consistent bias toward future locations. In Experiment 2, we used the same virtual environment to introduce a new "interruption" paradigm in which sensitivity to displacements during a continuous event could be assessed. Thresholds for detecting displacements of ego-position in the direction of motion were significantly higher than those for displacements opposite the direction of motion. In Experiments 3 and 4, we extended previous work that has shown anticipation effects for frozen-action photographs or isolated human figures by presenting observers with short video sequences of complex crowd scenes. In both experiments, memory for the stopping position of the video was shifted forward, consistent with representational momentum. Interestingly, when the video sequences were played in reverse, the magnitude of this forward bias was larger. Taken together, the results of all four experiments suggest that even when presented with complex, continuous motion, the visual system may sometimes try to anticipate the outcome of our own and others' actions.

8.
Two experiments used both irrelevant speech and tones in order to assess the effect of manipulating the spatial location of irrelevant sound. Previous research in this area had produced inconclusive results (e.g., Colle, 1980). The current study demonstrated a novel finding: sound presented to the left ear produces the greatest level of disruption. These results were explained in terms of hemispheric specialisation for the processing of some supra-linguistic components of the unattended sound. The results also supported previous research by demonstrating that both forms of irrelevant sound disrupted performance on serial memory tasks (Bridges & Jones, 1996; Colle & Welsh, 1976; Jones, Alford, Bridges, Tremblay, & Macken, 1999; Jones, Miles, & Page, 1990).

9.
The effect of context on the identification of common environmental sounds (e.g., dogs barking or cars honking) was tested by embedding them in familiar auditory background scenes (street ambience, restaurants). Initial results with subjects trained on both the scenes and the sounds to be identified showed a significant advantage, of about five percentage points better accuracy, for sounds that were contextually incongruous with the background scene (e.g., a rooster crowing in a hospital). Further studies with naive (untrained) listeners showed that this incongruency advantage (IA) is level-dependent: there is no advantage for incongruent sounds below a Sound/Scene ratio (So/Sc) of -7.5 dB, but about five percentage points better accuracy for sounds with a greater So/Sc. Testing a new group of trained listeners on a larger corpus of sounds and scenes showed that the effect is robust and not confined to a specific stimulus set. Modeling using spectral-temporal measures showed that neither analyses based on acoustic features nor semantic assessments of sound–scene congruency can account for this difference, indicating that the IA is a complex effect, possibly arising from the sensitivity of the auditory system to new and unexpected events under particular listening conditions.
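The Sound/Scene ratio (So/Sc) cited above is a level difference, in dB, between the embedded target sound and the background scene. Below is a minimal Python sketch of one common way such a ratio could be computed from RMS amplitudes; the synthetic waveforms and the 20·log10 (RMS ratio) definition are illustrative assumptions, since the abstract does not specify the exact calibration procedure.

```python
# Minimal sketch: a Sound/Scene level ratio (So/Sc) in dB, computed from RMS
# amplitudes. The waveforms are synthetic placeholders; the article's exact
# level-calibration procedure is not described in the abstract.
import numpy as np

def level_ratio_db(sound, scene):
    """Return 20 * log10(rms(sound) / rms(scene)), a level difference in dB."""
    rms = lambda x: np.sqrt(np.mean(np.square(x)))
    return 20.0 * np.log10(rms(sound) / rms(scene))

rng = np.random.default_rng(0)
scene = rng.normal(0.0, 1.0, 44100)          # 1 s of background "scene" noise at 44.1 kHz
sound = 0.42 * rng.normal(0.0, 1.0, 44100)   # quieter embedded target sound

print(f"So/Sc = {level_ratio_db(sound, scene):.1f} dB")  # roughly -7.5 dB for this scaling
```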

10.
11.
A study is reported in which the acuity of azimuth and elevation discrimination under monaural listening conditions was measured. Six subjects localised a sound source (white noise through a speaker) that varied in position over a range of elevations (-40 degrees to +40 degrees) and azimuths (0 degrees to 180 degrees), at 10-degree intervals, on the left side of the head. Monaural listening conditions were established by fitting an ear defender and an earmuff to the right ear. The absolute and algebraic azimuth and elevation errors were measured for all subjects at each position of the source. The results indicate that all subjects suffered a marked reduction of azimuth acuity under monaural conditions, although a coarse capacity to discriminate azimuth remained. Considerable between-subject variability was observed. Front/back discrimination was retained, although it was slightly impaired compared with that observed under normal listening conditions. Elevation discrimination was, on the whole, quite good under monaural conditions. However, a comparison of these subjects' performance under monaural conditions with that observed under normal listening conditions indicated that some reduction in elevation localisation acuity occurred in the frontal quadrants of the median plane and in the upper quadrants of more lateral source positions. The reduction in acuity seen in these regions is attributed to the loss of information from the pinna of the occluded ear rather than to the observed reduction in azimuth error. The results provide partial support for the binaural pinna disparity model.

12.
In two experiments, college students were asked to provide situational frequency estimates of 10-s excerpts from rock songs. In both experiments, the familiarity of the musical selections, which were heard one, two, three, or four times, was varied. In Experiment 2, the nature of the instructions given to subjects prior to presentation of the musical excerpts was also manipulated. Across both experiments, subjects' estimates were less accurate for unfamiliar than for familiar rock music. In Experiment 2, instructions to remember frequency, as well as general memory instructions, resulted in better memory for presentation frequency than did instructions to "ignore" the music while working on math problems. Memory for situational frequency was also related to knowledge of rock music, as defined by subjects' ability to identify the titles and artists of the presented songs. The present pattern of results with popular music is viewed as similar to that obtained in experiments investigating memory for the frequency of verbal stimuli. Although providing support for an automatic-processing view of frequency encoding, the results also implicate meaningful elaboration of stimuli as an important determinant of memory for the frequency of events.

13.
We describe a patient, LS, profoundly deaf in both ears from birth, with underdeveloped superior temporal gyri. Without hearing aids, LS displays no ability to detect sounds below a fixed threshold of 60 dB, which classifies him as clinically deaf. Under these no-hearing-aid conditions, when presented with a forced-choice paradigm in which he is asked to respond consciously, he is unable to make above-chance judgments about the presence or location of sounds. However, he is able to make above-chance judgments about the content of sounds presented to him under forced-choice conditions. We demonstrated that LS has faint sensations from auditory stimuli but questionable awareness of auditory content. LS thus has a form of type-2 deaf hearing with respect to auditory content. As in the case of a subject with acquired deafness and deaf hearing reported on a previous occasion, LS's condition is akin in some respects to type-2 blindsight. As with type-2 blindsight, this case indicates that a form of conscious hearing can arise in the absence of a fully developed auditory cortex.

14.
Subjects examined crowded semirealistic layouts of toy objects, or photographs of these layouts, and then tried to identify added, moved, or deleted items. The main study, involving 1st-, 3rd-, and 6th-graders and adults, showed that although successful recognition of added items was superior to both recall of deleted items and recognition of moved ones, all three scores improved with age. In addition, false reports of “new” items decreased markedly in the older groups. The results argue against the widely held view that recognition memory undergoes little or no developmental improvement. No significant difference between real layouts and photographs appeared either in the main experiment or in replications involving shorter exposure (Experiment 2) or retarded subjects (Experiment 3).

15.
16.
In four experiments, we examined the degree to which imaging written words as spoken by a familiar talker differs from direct perception (hearing words spoken by that talker) and reading words (without imagery) on implicit and explicit tests. Subjects first performed a surface encoding task on spoken, imagined as spoken, or visually presented words, and then were given either an implicit test (perceptual identification or stem completion) or an explicit test (recognition or cued recall) involving auditorily presented words. Auditory presentation at study produced larger priming effects than did imaging or reading. Imaging and reading yielded priming effects of similar magnitude, whereas imaging produced lower performance than reading on the explicit test of cued recall. Voice changes between study and test weakened priming on the implicit tests, but did not affect performance on the explicit tests. Imagined voice changes affected priming only in the implicit task of stem completion. These findings show that the sensitivity of a memory test to perceptual information, either directly perceived or imagined, is an important dimension for dissociating incidental (implicit) and intentional (explicit) retrieval processes.

17.
梁毅, 陈红, 邱江, 高笑, 赵婷婷. Acta Psychologica Sinica (心理学报), 2008, 40(8): 913-919
Event-related potentials (ERPs) were used to examine the time course of brain activity while female university students with a negative (fat) body self performed recognition of fat versus thin body pictures. Relative to the control group (normal female undergraduates), women with a fat negative body self showed, in the 750–800 ms time window, a more positive ERP waveform to "fat" pictures than to "thin" pictures, and the topography of the difference wave indicated stronger activation of this positive component over fronto-central sites. Dipole source analysis of the difference wave further showed that the positive component originated mainly near the right occipital lobe. This appears to indicate that activation of the right occipital lobe is related to the appearance of body-self information and to the experience of a negative body-self schema.

18.
19.
The authors hypothesized that during a gap in a timed signal, the time accumulated during the pregap interval decays at a rate proportional to the perceived salience of the gap, influenced by sensory acuity and signal intensity. When timing visual signals, albino (Sprague-Dawley) rats, which have poor visual acuity, stopped timing irrespective of gap duration, whereas pigmented (Long-Evans) rats, which have good visual acuity, stopped timing for short gaps but reset timing for long gaps. Pigmented rats stopped timing during a gap in a low-intensity visual signal and reset after a gap in a high-intensity visual signal, suggesting that memory for time in the gap procedure varies with the perceived salience of the gap, possibly through an attentional mechanism.
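The hypothesis summarized above amounts to a simple quantitative rule: the time accumulated before the gap decays during the gap at a rate proportional to the gap's perceived salience, so a faint or poorly seen gap approximates "stop" (little loss) and a long, salient gap approximates "reset" (complete loss). The short Python sketch below illustrates that rule; the salience values and decay constant are hypothetical, not parameters from the study.

```python
# Minimal sketch of the hypothesized rule: the time accumulated before a gap
# decays during the gap at a rate proportional to the gap's perceived salience.
# The salience values and the constant k are illustrative assumptions, not
# fitted parameters from the study.

def time_retained_after_gap(pre_gap_time, gap_duration, salience, k=0.5):
    """Linear decay of the pre-gap accumulation, clipped at zero (a full reset)."""
    decay_rate = k * salience                    # decay rate proportional to salience
    return max(0.0, pre_gap_time - decay_rate * gap_duration)

pre_gap = 10.0  # seconds accumulated before the gap
for salience, label in [(0.2, "low-salience gap (dim signal / poor acuity)"),
                        (2.0, "high-salience gap (bright signal / good acuity)")]:
    for gap in (2.0, 15.0):
        retained = time_retained_after_gap(pre_gap, gap, salience)
        mode = "reset-like" if retained == 0.0 else "stop-like"
        print(f"{label}: gap = {gap:4.1f} s -> {retained:4.1f} s retained ({mode})")
```

Under these illustrative settings, the low-salience gap leaves most of the accumulation intact regardless of gap duration (stop-like), while the high-salience gap preserves the accumulation for a short gap but wipes it out for a long one (reset-like), mirroring the qualitative pattern reported for albino and pigmented rats.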

20.
Infants respond preferentially to faces and face-like stimuli from birth, but past research has typically presented faces in isolation or amongst an artificial array of competing objects. In the current study, infants aged 3 to 12 months viewed a series of complex visual scenes; half of the scenes contained a person, the other half did not. Infants rapidly detected and oriented to faces in scenes even when the faces were not visually salient. Although a clear developmental improvement was observed in face detection and interest, all infants displayed sensitivity to the presence of a person in a scene, producing eye movements that differed quantifiably across a range of measures when viewing scenes that either did or did not contain a person. We argue that infants' face-detection capabilities are ostensibly "better" with naturalistic stimuli and that the artificial array presentations used in previous studies have underestimated performance.
