20 similar documents found; search time: 15 ms
1.
Two experiments studied perceptual comparisons with cues that vary in one of four ways (picture, sound, spoken word, or printed word) and with targets that are either pictures or environmental sounds. The basic question probed whether modality or differences in format were factors that would influence picture and sound perception. Also of interest were cue effect differences when targets are presented on either the right or left side. Students responded to a same-different reaction time task that entailed matching cue-target pairs to determine whether the successive stimulus events represented features drawn from the same basic item. Cue type influenced reaction times to pictures and environmental sounds, but the effects were qualified by response type and with picture targets by presentation side. These results provide some additional evidence of processing asymmetry when pictures are directed to either the right or left hemisphere, as well as for some asymmetries in cross-modality cuing. Implications of these findings for theories of multisensory processing and models of object recognition are discussed.
2.
Two experiments were conducted using both behavioral and Event-Related brain Potentials methods to examine conceptual priming effects for realistic auditory scenes and for auditory words. Prime and target sounds were presented in four stimulus combinations: Sound–Sound, Word–Sound, Sound–Word and Word–Word. Within each combination, targets were conceptually related to the prime, unrelated or ambiguous. In Experiment 1, participants were asked to judge whether the primes and targets fit together (explicit task) and in Experiment 2 they had to decide whether the target was typical or ambiguous (implicit task). In both experiments and in the four stimulus combinations, reaction times were longer and/or error rates higher, and the N400 component was larger, to ambiguous targets than to conceptually related targets, thereby pointing to a common conceptual system for processing auditory scenes and linguistic stimuli in both explicit and implicit tasks. However, fine-grained analyses also revealed some differences between experiments and conditions in scalp topography and duration of the priming effects, possibly reflecting differences in the integration of perceptual and cognitive attributes of linguistic and nonlinguistic sounds. These results have clear implications for the building of virtual environments that need to convey meaning without words.
3.
In four experiments, we examined the role of auditory transients and auditory short-term memory in perceiving changes in a complex auditory scene comprising multiple auditory objects. Participants were presented pairs of complex auditory scenes that were composed of a maximum of four animal calls delivered in free field; participants were instructed to decide whether the two scenes were the same or different (Experiments 1, 2, and 4). Changes to the second scene consisted of either the addition or the deletion of one animal call. Contrary to intuitive predictions based on results from the visual change blindness literature, substantial deafness to the change emerged without regard to whether the scenes were separated by 500 msec of masking white noise or by 500 msec of silence (Experiment 1). In fact, change deafness was not even modulated by having the two scenes presented contiguously (i.e., 0-msec interval) or separated by 500 msec of silence (Experiments 2 and 4). This result suggests that change-related auditory transients played little or no role in change detection in complex auditory scenes. Instead, the main determinant of auditory change perception (and auditory change deafness) appears to have been the capacity of auditory short-term memory (Experiments 3 and 4). Taken together, these findings indicate that the intuitive parallels between visual and auditory change perception should be reconsidered.
4.
Andrew Hollingworth. Visual Cognition, 2013, 21(6), 1003-1016
Memory for the positions of objects in natural scenes was investigated. Participants viewed an image of a real-world scene (preview scene), followed by a target object in isolation (target probe), followed by a blank screen with a mouse cursor. Participants estimated the position of the target using the mouse. Three conditions were compared. In the target present preview condition, the target object was present in the scene preview. In the target absent preview condition, the target object was not present in the scene preview. In the no preview condition, no preview scene was displayed. Localization accuracy in the target present preview condition was reliably higher than that in the target absent preview condition, which was reliably higher than localization accuracy in the no preview condition. These data demonstrate that participants can remember both the spatial context of a scene and the specific positions of local objects.
5.
Despite the complexity and diversity of natural scenes, humans are very fast and accurate at identifying basic-level scene categories. In this paper we develop a new technique (based on Bubbles, Gosselin & Schyns, 2001a; Schyns, Bonnar, & Gosselin, 2002) to determine some of the information requirements of basic-level scene categorizations. Using 2400 scenes from an established scene database (Oliva & Torralba, 2001), the algorithm randomly samples the Fourier coefficients of the phase spectrum. Sampled Fourier coefficients retain their original phase while the phase of nonsampled coefficients is replaced with that of white noise. Observers categorized the stimuli into 8 basic-level categories. The location of the sampled Fourier coefficients leading to correct categorizations was recorded per trial. Statistical analyses revealed the major scales and orientations of the phase spectrum that observers used to distinguish scene categories.
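The phase-sampling procedure described in this abstract can be sketched in a few lines of NumPy. This is a minimal illustration, not the authors' implementation: the function and variable names are ours, and we assume a grayscale image array and a boolean mask indicating which Fourier coefficients are "sampled".

```python
import numpy as np

def sample_phase_spectrum(image, sample_mask, rng=None):
    """Hypothetical sketch: keep the original phase at sampled Fourier
    coefficients, replace the phase elsewhere with white-noise phase,
    and reconstruct the stimulus with the original amplitude spectrum."""
    rng = np.random.default_rng() if rng is None else rng
    F = np.fft.fft2(image)
    amplitude = np.abs(F)
    phase = np.angle(F)
    # Phase spectrum of a white-noise image of the same size.
    noise_phase = np.angle(np.fft.fft2(rng.standard_normal(image.shape)))
    # Sampled coefficients keep their phase; others get noise phase.
    mixed_phase = np.where(sample_mask, phase, noise_phase)
    stimulus = np.fft.ifft2(amplitude * np.exp(1j * mixed_phase))
    # The mixed spectrum is not exactly conjugate-symmetric, so we
    # keep only the real part of the reconstruction.
    return np.real(stimulus)
```

With a full mask the original image is recovered; with an empty mask only the amplitude spectrum survives, which is the fully phase-scrambled control.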
6.
Wichmann FA, Sharpe LT, Gegenfurtner KR. Journal of Experimental Psychology: Learning, Memory, and Cognition, 2002, 28(3), 509-520
The authors used a recognition memory paradigm to assess the influence of color information on visual memory for images of natural scenes. Subjects performed 5%-10% better for colored than for black-and-white images independent of exposure duration. Experiment 2 indicated little influence of contrast once the images were suprathreshold, and Experiment 3 revealed that performance worsened when images were presented in color and tested in black and white, or vice versa, leading to the conclusion that the surface property color is part of the memory representation. Experiments 4 and 5 exclude the possibility that the superior recognition memory for colored images results solely from attentional factors or saliency. Finally, the recognition memory advantage disappears for falsely colored images of natural scenes: The improvement in recognition memory depends on the color congruence of presented images with learned knowledge about the color gamut found within natural scenes. The results can be accounted for within a multiple memory systems framework.
7.
Sussman ES, Bregman AS, Wang WJ, Khan FJ. Cognitive, Affective & Behavioral Neuroscience, 2005, 5(1), 93-110
In three experiments, we addressed the issue of attention effects on unattended sound processing when one auditory stream is selected from three potential streams, creating a simple model of the cocktail party situation. We recorded event-related brain potentials (ERPs) to determine the way in which unattended, task-irrelevant sounds were stored in auditory memory (i.e., as one integrated stream or as two distinct streams). Subjects were instructed to ignore all the sounds and attend to a visual task or to selectively attend to a subset of the sounds and perform a task with the sounds (Experiments 1 and 2). A third (behavioral) experiment was conducted to test whether global pattern violations (used in Experiments 1 and 2) were perceptible when the sounds were segregated. We found that the mismatch negativity ERP component, an index of auditory change detection, was evoked by infrequent pattern violations occurring in the unattended sounds when all the sounds were ignored, but not when attention was focused on a subset of the sounds. The results demonstrate that multiple unattended sound streams can segregate by frequency range but that selectively attending to a subset of the sounds can modify the extent to which the unattended sounds are processed. These results are consistent with models in animal and human studies showing that attentional control can limit the processing of unattended input in favor of attended sensory inputs, thereby facilitating the ability to achieve behavioral goals.
8.
We offer a framework for understanding how color operates to improve visual memory for images of the natural environment, and we present an extensive data set that quantifies the contribution of color in the encoding and recognition phases. Using a continuous recognition task with colored and monochrome gray-scale images of natural scenes at short exposure durations, we found that color enhances recognition memory by conferring an advantage during encoding and by strengthening the encoding-specificity effect. Furthermore, because the pattern of performance was similar at all exposure durations, and because form and color are processed in different areas of cortex, the results imply that color must be bound as an integral part of the representation at the earliest stages of processing.
9.
This article addresses the learnability of auditory icons, that is, environmental sounds that refer either directly or indirectly to meaningful events. Direct relations use the sound made by the target event, whereas indirect relations substitute a surrogate for the target. Across 3 experiments, different indirect relations (ecological, in which target and surrogate coexist in the world; metaphorical, in which target and surrogate have similar appearance or function; and random) were compared with one another and with direct relations on measures including associative strength ratings, amount of exposure required for learning, and response times for recognizing icons. Findings suggest that performance is best with direct relations, worst with random relations, and that ecological and metaphorical relations involve distinct types of association but do not differ in learnability.
10.
Karen G. Foreit. Journal of Experimental Child Psychology, 1977, 24(3), 461-475
Spoken serial recall by second-grade children of aurally presented lists of digits, synthetic stop consonants, and synthetic vowels showed a significant suffix effect (selective debilitation of recall at the final position under the stimulus suffix condition) only for the lists of digits and not for either consonants or vowels. Making the synthetic syllables more distinctive by simultaneously covarying the consonant and vowel failed to produce a suffix effect under a strict scoring criterion which required both consonant and vowel to be recalled correctly; however, when subjects were given credit for partially correct answers the suffix effect emerged. Adults given the redundant consonant-vowel syllables showed a significant suffix effect with the strict scoring criterion. However, when consonants and vowels varied orthogonally, the adults' performance showed the suffix effect only under the lenient scoring criterion. An argument is made for equivalence of basic memorial processing between children and adults, the difference being in the number of features needed to disambiguate the target items and in the ability to integrate these features to exploit interstimulus redundancy.
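The contrast between the two scoring criteria can be made concrete with a small sketch. This is purely illustrative: items are represented here as (consonant, vowel) pairs, and the half-credit weighting for the lenient criterion is our assumption, not something stated in the abstract.

```python
def score_strict(response, target):
    # Strict criterion: credit only if both consonant and vowel
    # are recalled correctly.
    return 1.0 if response == target else 0.0

def score_lenient(response, target):
    # Lenient criterion: partial credit for each correct feature
    # (hypothetical equal weighting of consonant and vowel).
    consonant_ok = response[0] == target[0]
    vowel_ok = response[1] == target[1]
    return (consonant_ok + vowel_ok) / 2
```

Under the lenient rule, a response that preserves only the vowel of a syllable still earns half credit, which is how a suffix effect masked by the strict criterion can re-emerge.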
11.
An area of research that has experienced recent growth is the study of memory during perception of simple and complex auditory scenes. These studies have provided important information about how well auditory objects are encoded in memory and how well listeners can notice changes in auditory scenes. These are significant developments because they present an opportunity to better understand how we hear in realistic situations, how higher-level aspects of hearing such as semantics and prior exposure affect perception, and the similarities and differences between auditory perception and perception in other modalities, such as vision and touch. The research also poses exciting challenges for behavioral and neural models of how auditory perception and memory work.
12.
13.
Hollingworth A. Journal of Experimental Psychology: Human Perception and Performance, 2007, 33(1), 31-47
Nine experiments examined the means by which visual memory for individual objects is structured into a larger representation of a scene. Participants viewed images of natural scenes or object arrays in a change detection task requiring memory for the visual form of a single target object. In the test image, 2 properties of the stimulus were independently manipulated: the position of the target object and the spatial properties of the larger scene or array context. Memory performance was higher when the target object position remained the same from study to test. This same-position advantage was reduced or eliminated following contextual changes that disrupted the relative spatial relationships among contextual objects (context deletion, scrambling, and binding change) but was preserved following contextual change that did not disrupt relative spatial relationships (translation). Thus, episodic scene representations are formed through the binding of objects to scene locations, and object position is defined relative to a larger spatial representation coding the relative locations of contextual objects.
14.
There is much debate about how detection, categorization, and within-category identification relate to one another during object recognition. Whether these tasks rely on partially shared perceptual mechanisms may be determined by testing whether training on one of these tasks facilitates performance on another. In the present study we asked whether expertise in discriminating objects improves the detection of these objects in naturalistic scenes. Self-proclaimed car experts (N = 34) performed a car discrimination task to establish their level of expertise, followed by a visual search task where they were asked to detect cars and people in hundreds of photographs of natural scenes. Results revealed that expertise in discriminating cars was strongly correlated with car detection accuracy. This effect was specific to objects of expertise, as there was no influence of car expertise on person detection. These results indicate a close link between object discrimination and object detection performance, which we interpret as reflecting partially shared perceptual mechanisms and neural representations underlying these tasks: the increased sensitivity of the visual system for objects of expertise – as a result of extensive discrimination training – may benefit both the discrimination and the detection of these objects. Alternative interpretations are also discussed.
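The core analysis here, relating per-participant discrimination expertise to detection accuracy, is an across-participant correlation. A minimal sketch (the data and names below are hypothetical, not the study's):

```python
import numpy as np

def pearson_r(x, y):
    # Pearson correlation between two per-participant score vectors,
    # e.g. discrimination d' and detection accuracy.
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    xc, yc = x - x.mean(), y - y.mean()
    return float((xc @ yc) / np.sqrt((xc @ xc) * (yc @ yc)))
```

For example, `pearson_r(discrimination_dprime, detection_accuracy)` near +1 would correspond to the strong expertise-detection relationship reported above; in practice one would also test its significance (e.g. with `scipy.stats.pearsonr`).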
15.
In this study, by manipulating perceptual load, we investigated whether socially anxious people process task-irrelevant, non-emotional, natural scenes. When attention was directed to letters and perceptual load was low, task-irrelevant natural scenes were processed, as evidenced by repetition priming effects, in both high and low socially anxious people. In the high perceptual load condition, repetition-priming effects decreased in participants with low social anxiety, but not in those with high social anxiety. The results were the same when attention was directed to pictures of animals: even in the high perceptual load condition, high socially anxious participants processed task-irrelevant natural scenes, as evidenced by flanker effects. However, when attention was directed to pictures of people, task-irrelevant natural scenes were not processed by participants in either anxiety group, regardless of perceptual load. These results suggest that high socially anxious individuals could not inhibit task-irrelevant natural scenes under conditions of high perceptual load, except when attention was focused on people.
16.
Subjects were provided with outline maps that were incomplete in several details. Brief, simultaneous, visual and auditory instructions were given for completing some of the missing details. Certain items could be completed on the basis of direct information contained in one or other of the sensory modalities. Others, however, could be completed only because of their relation to details capable of location by direct instruction. Information important for the completion of map details was distributed randomly among short passages of unconnected words. All relevant visual and aural clues were presented simultaneously in every case. Opportunities for alternations of attention were curtailed.
Thirty-six subjects were randomly assigned to three experimental conditions, and to two groups that were given different instructions. One group was told that relevant information would always appear simultaneously, while the other group was not allowed this information.
The number of successfully located simultaneous pairs of items presented for direct location was found to be no greater than could be expected by chance. The total number of correctly located items was less than 50 per cent of the possible items. There was no difference in the number of correctly located simultaneous pairs of items between the “instructed” and the “uninstructed” groups. The “uninstructed” group did not learn in the course of the experiment that all relevant material was presented simultaneously. Significantly more correct completions were made with the visual material than with the auditory. It is concluded that successful division of attention did not occur.
17.
P J Kraemer, W A Roberts. Journal of Experimental Psychology: Animal Behavior Processes, 1985, 11(2), 137-151
A series of divided-attention experiments was carried out in which matching to the visual or auditory component of a tone-light compound was compared with matching to visual or auditory elements presented alone as sample stimuli. In 0-s delayed and simultaneous matching procedures, pigeons were able to match visual signals equally well when presented alone or with a tone; tones were matched at a substantially lower level of accuracy when presented with light signals than when presented as elements. In further experiments, it was demonstrated that the interfering effect of a signal light on tone matching was not related to the signaling value of the light, and that the prior presentation of light proactively interfered with auditory delayed matching. These findings indicate a divided attention process in which auditory processing is strongly inhibited in the presence of visual signals.
18.
James Craig Bartlett. Memory & Cognition, 1977, 5(4), 404-414
Two experiments investigated the role of verbalization in memory for environmental sounds. Experiment 1 extended earlier research (Bower & Holyoak, 1973) showing that sound recognition is highly dependent upon consistent verbal interpretation at input and test. While such a finding implies an important role for verbalization, Experiment 2 suggested that verbalization is not the only efficacious strategy for encoding environmental sounds. Recognition after presentation of sounds was shown to differ qualitatively from recognition after presentation of sounds accompanied with interpretative verbal labels and from recognition after presentation of verbal labels alone. The results also suggest that encoding physical information about sounds is of greater importance for sound recognition than for verbal free recall, and that verbalization is of greater importance for free recall than for recognition. Several alternative frameworks for the results are presented, and separate retrieval and discrimination processes in recognition are proposed.
19.
20.
On the detection of signals embedded in natural scenes (cited by 1: 0 self-citations, 1 other)