Similar Articles
20 similar articles retrieved (search time: 31 ms)
1.
Strybel TZ, Vatakis A. Perception, 2004, 33(9): 1033-1048
Unimodal auditory and visual apparent motion (AM) and bimodal audiovisual AM were investigated to determine the effects of crossmodal integration on motion perception and direction-of-motion discrimination in each modality. To determine the optimal stimulus onset asynchrony (SOA) ranges for motion perception and direction discrimination, we initially measured unimodal visual and auditory AMs using one of four durations (50, 100, 200, or 400 ms) and ten SOAs (40-450 ms). In the bimodal conditions, auditory and visual AM were measured in the presence of temporally synchronous, spatially displaced distractors that were either congruent (moving in the same direction) or conflicting (moving in the opposite direction) with respect to target motion. Participants reported whether continuous motion was perceived and its direction. With unimodal auditory and visual AM, motion perception was affected differently by stimulus duration and SOA in the two modalities, while the opposite was observed for direction of motion. In the bimodal audiovisual AM condition, discriminating the direction of motion was affected only in the case of an auditory target. The perceived direction of auditory but not visual AM was reduced to chance levels when the crossmodal distractor direction was conflicting. Conversely, motion perception was unaffected by the distractor direction and, in some cases, the mere presence of a distractor facilitated movement perception.
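Background note: in apparent-motion designs like this, the SOA is the interval between the onsets of the two stimuli, so the gap between them (the inter-stimulus interval, ISI) equals SOA minus stimulus duration; at the longer durations used here, some SOAs therefore imply temporal overlap (negative ISI). A minimal sketch of enumerating such a factorial timing design follows; the durations are taken from the abstract, but the even spacing of the ten SOAs is an assumption (the abstract gives only the 40-450 ms range):

```python
import numpy as np
from itertools import product

durations_ms = (50, 100, 200, 400)  # stimulus durations listed in the abstract
# Ten SOAs spanning 40-450 ms; even spacing is an assumption, not from the paper
soas_ms = np.linspace(40, 450, 10).round().astype(int)

for dur, soa in product(durations_ms, soas_ms):
    isi = soa - dur  # inter-stimulus interval; negative values mean the stimuli overlap in time
    print(f"duration={dur:3d} ms  SOA={soa:3d} ms  ISI={isi:4d} ms")
```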

2.
Turatto M, Mazza V, Umiltà C. Cognition, 2005, 96(2): B55-B64
According to the object-based view, visual attention can be deployed to "objects" or perceptual units, regardless of spatial locations. Recently, however, the notion of object has also been extended to the auditory domain, with some authors suggesting possible interactions between visual and auditory objects. Here we show that task-irrelevant auditory objects may affect the deployment of visual attention, providing evidence that crossmodal links can also occur at an object-based level. Hence, in addition to the well documented control of visual objects over what we hear, our findings demonstrate that, in some cases, auditory objects can affect visual processing.

3.
The question of how vision and audition interact in natural object identification is currently a matter of debate. We developed a large set of auditory and visual stimuli representing natural objects in order to facilitate research in the field of multisensory processing. Normative data were obtained for 270 brief environmental sounds and 320 visual object stimuli. Each stimulus was named, categorized, and rated with regard to familiarity and emotional valence by N=56 participants (Study 1). This multimodal stimulus set was employed in two subsequent crossmodal priming experiments that used semantically congruent and incongruent stimulus pairs in an S1-S2 paradigm. Task-relevant targets were either auditory (Study 2) or visual stimuli (Study 3). The behavioral data of both experiments revealed a crossmodal priming effect, with shorter reaction times for congruent than for incongruent stimulus pairs. The observed facilitation effect suggests that object identification in one modality is influenced by input from another modality. This result implies that congruent visual and auditory stimulus pairs were perceived as the same object and provides a first validation of the multimodal stimulus set.

4.
It is well established that scenes and objects elicit a highly selective response in specific brain regions in the ventral visual cortex. An inherent difference between these categories that has not been explored yet is their perceived distance from the observer (i.e. scenes are distal whereas objects are proximal). The current study aimed to test the extent to which scene and object selective areas are sensitive to perceived distance information independently from their category-selectivity and retinotopic location. We conducted two studies that used a distance illusion (i.e., the Ponzo lines) and showed that scene regions (the parahippocampal place area, PPA, and transverse occipital sulcus, TOS) are biased toward perceived distal stimuli, whereas the lateral occipital (LO) object region is biased toward perceived proximal stimuli. These results suggest that the ventral visual cortex plays a role in representing distance information, extending recent findings on the sensitivity of these regions to location information. More broadly, our findings imply that distance information is inherent to object recognition.

5.
Matthews H, Hill H, Palmisano S. Perception, 2012, 41(2): 168-174
Evidence suggests that experiencing the hollow-face illusion involves perceptual reversal of the binocular disparities associated with the face even though the rest of the scene appears unchanged. This suggests stereoscopic processing of object shape may be independent of scene-based processing of the layout of objects in depth. We investigated the effects of global scene-based and local object-based disparity on the compellingness of the perceived convexity of the face. We took stereoscopic photographs of people in scenes, and independently reversed the binocular disparities associated with the head and scene. Participants rated perceived convexity of a natural disparity ("convex") or reversed disparity ("concave") face shown either in its original context with reversed or natural disparities or against a black background. Faces with natural disparity were rated as more convincingly convex independent of the background, showing that the local disparities can affect perceived convexity independent of disparities across the rest of the image. However, the apparent convexity of the faces was also greater in natural disparity scenes compared to either a reversed disparity scene or a zero disparity black background. This independent effect of natural scene disparity suggests that the 'solidity' associated with natural scene disparities spread to enhance the perceived convexity of the face itself. Together, these findings suggest that global and local disparity exert independent and additive effects upon the perceived convexity of the face.

6.
Three experiments were conducted to investigate the existence of incidentally acquired, long-term, detailed visual memory for objects embedded in previously viewed scenes. Participants performed intentional memorization and incidental visual search learning tasks while viewing photographs of real-world scenes. A visual memory test for previously viewed objects from these scenes then followed. Participants were not aware that they would be tested on the scenes following incidental learning in the visual search task. In two types of memory tests for visually specific object information (token discrimination and mirror-image discrimination), performance following both the memorization and visual search conditions was reliably above chance. These results indicate that recent demonstrations of good visual memory during scene viewing are not due to intentional scene memorization. Instead, long-term visual representations are incidentally generated as a natural product of scene perception.

7.
Simulator-based research has shown that pilots cognitively tunnel their attention on head-up displays (HUDs). Cognitive tunneling has been linked to object-based visual attention on the assumption that HUD symbology is perceptually grouped into an object that is perceived and attended separately from the external scene. The present research strengthens the link between cognitive tunneling and object-based attention by showing that (a) elements of a visual display that share a common fate are grouped into a perceptual object and that this grouping is sufficient to sustain object-based attention, (b) object-based attention and thereby cognitive tunneling is affected by strategic focusing of attention, and (c) object-based attention is primarily inhibitory in nature.

8.
Conceptual representations of everyday scenes are built in interaction with the visual environment, and these representations guide our visual attention. Perceptual features and object-scene semantic consistency have been found to attract our attention during scene exploration. The present study examined how visual attention in 24-month-old toddlers is attracted by semantic violations and how perceptual features (i.e., saliency, centre distance, clutter, and object size) and linguistic properties (i.e., object-label frequency and label length) affect gaze distribution. We compared eye movements of 24-month-old toddlers and adults while they explored everyday scenes that contained either an inconsistent (e.g., soap on a breakfast table) or a consistent (e.g., soap in a bathroom) object. Perceptual features such as saliency, centre distance, and clutter of the scene affected looking times in the toddler group throughout the whole viewing time, whereas looking times in adults were affected only by centre distance during the early viewing time. Adults looked longer at inconsistent than at consistent objects regardless of whether the objects were of high or low saliency. In contrast, toddlers showed the semantic-consistency effect only when objects were highly salient. Additionally, toddlers with lower vocabulary skills looked longer at inconsistent objects, whereas toddlers with higher vocabulary skills looked equally long at consistent and inconsistent objects. Our results indicate that 24-month-old children use scene context to guide visual attention when exploring the visual environment. However, perceptual features have a stronger influence on eye-movement guidance in toddlers than in adults. Our results also indicate that language skills influence cognitive but not perceptual guidance of eye movements during scene perception in toddlers.

9.
Findings from three experiments support the conclusion that auditory primes facilitate the processing of related targets. In Experiments 1 and 2, we employed a crossmodal Stroop color identification task with auditory color words (as primes) and visual color patches (as targets). Responses were faster for congruent priming, in comparison to neutral or incongruent priming. This effect also emerged for different levels of time compression of the auditory primes (to 30 % and 10 % of the original length; i.e., 120 and 40 ms) and turned out to be even more pronounced under high-perceptual-load conditions (Exps. 1 and 2). In Experiment 3, target-present or -absent decisions for brief target displays had to be made, thereby ruling out response-priming processes as a cause of the congruency effects. Nevertheless, target detection (d') was increased by congruent primes (30 % compression) in comparison to incongruent or neutral primes. Our results suggest semantic object-based auditory–visual interactions, which rapidly increase the denoted target object's salience. This would apply, in particular, to complex visual scenes.
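For readers unfamiliar with the sensitivity measure d' reported above, a minimal sketch of the standard signal-detection computation follows. The formula, z(hit rate) minus z(false-alarm rate), is textbook signal detection theory, and the rates below are hypothetical illustrations, not values from the study:

```python
from scipy.stats import norm

def d_prime(hit_rate: float, fa_rate: float) -> float:
    """Sensitivity d' = z(hit rate) - z(false-alarm rate)."""
    return norm.ppf(hit_rate) - norm.ppf(fa_rate)

# Hypothetical rates illustrating higher sensitivity under congruent priming
print(d_prime(0.85, 0.20))  # congruent prime   -> larger d'
print(d_prime(0.70, 0.20))  # incongruent prime -> smaller d'
```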

10.
Since the introduction of the concept of auditory scene analysis, there has been a paucity of work focusing on the theoretical explanation of how attention is allocated within a complex auditory scene. Here we examined signal detection in situations that promote either the fusion of tonal elements into a single sound object or the segregation of a mistuned element (i.e., harmonic) that "popped out" as a separate individuated auditory object and yielded the perception of concurrent sound objects. On each trial, participants indicated whether the incoming complex sound contained a brief gap or not. The gap (i.e., signal) was always inserted in the middle of one of the tonal elements. Our findings were consistent with an object-based account in which perception of two simultaneous auditory objects interfered with signal detection. This effect was observed for a wide range of gap durations and was greater when the mistuned harmonic was perceived as a separate object. These results suggest that attention may be initially shared among concurrent sound objects thereby reducing listeners' ability to process acoustic details belonging to a particular sound object. These findings provide new theoretical insight for our understanding of auditory attention and auditory scene analysis.

11.
Here, we investigate how audiovisual context affects perceived event duration with experiments in which observers reported which of two stimuli they perceived as longer. Target events were visual and/or auditory and could be accompanied by nontargets in the other modality. Our results demonstrate that the temporal information conveyed by irrelevant sounds is automatically used when the brain estimates visual durations but that irrelevant visual information does not affect perceived auditory duration (Experiment 1). We further show that auditory influences on subjective visual durations occur only when the temporal characteristics of the stimuli promote perceptual grouping (Experiments 1 and 2). Placed in the context of scalar expectancy theory of time perception, our third and fourth experiments have the implication that audiovisual context can lead both to changes in the rate of an internal clock and to temporal ventriloquism-like effects on perceived on- and offsets. Finally, intramodal grouping of auditory stimuli diminished any crossmodal effects, suggesting a strong preference for intramodal over crossmodal perceptual grouping (Experiment 5).

12.
Identifying how memories are organized remains a fundamental issue in psychology. Previous work has shown that visual short-term memory is organized according to the object of origin, with participants being better at retrieving multiple pieces of information from the same object than from different objects. However, it is not yet clear whether similar memory structures are employed for other modalities, such as audition. Under analogous conditions in the auditory domain, we found that short-term memories for sound can also be organized according to object, with a same-object advantage being demonstrated for the retrieval of information in an auditory scene defined by two complex sounds overlapping in both space and time. Our results provide support for the notion of an auditory object, in addition to the continued identification of similar processing constraints across visual and auditory domains. The identification of modality-independent organizational principles of memory, such as object-based coding, suggests possible mechanisms by which the human processing system remembers multimodal experiences.

13.
In four experiments, we examined the role of auditory transients and auditory short-term memory in perceiving changes in a complex auditory scene comprising multiple auditory objects. Participants were presented with pairs of complex auditory scenes composed of a maximum of four animal calls delivered in free field; participants were instructed to decide whether the two scenes were the same or different (Experiments 1, 2, and 4). Changes to the second scene consisted of either the addition or the deletion of one animal call. Contrary to intuitive predictions based on results from the visual change blindness literature, substantial deafness to the change emerged regardless of whether the scenes were separated by 500 msec of masking white noise or by 500 msec of silence (Experiment 1). In fact, change deafness was not even modulated by having the two scenes presented contiguously (i.e., 0-msec interval) or separated by 500 msec of silence (Experiments 2 and 4). This result suggests that change-related auditory transients played little or no role in change detection in complex auditory scenes. Instead, the main determinant of auditory change perception (and auditory change deafness) appears to have been the capacity of auditory short-term memory (Experiments 3 and 4). Taken together, these findings indicate that the intuitive parallels between visual and auditory change perception should be reconsidered.

14.
A "follow-the-dot" method was used to investigate the visual memory systems supporting accumulation of object information in natural scenes. Participants fixated a series of objects in each scene, following a dot cue from object to object. Memory for the visual form of a target object was then tested. Object memory was consistently superior for the two most recently fixated objects, a recency advantage indicating a visual short-term memory component to scene representation. In addition, objects examined earlier were remembered at rates well above chance, with no evidence of further forgetting when 10 objects intervened between target examination and test and only modest forgetting with 402 intervening objects. This robust prerecency performance indicates a visual long-term memory component to scene representation.  相似文献   

15.
Under incidental learning conditions, a spatial layout can be acquired implicitly and facilitate visual searches (the contextual cuing effect). Whereas previous studies have shown a cuing effect in the visual domain, the present study examined whether a contextual cuing effect could develop from association between auditory events and visual target locations (Experiments 1 and 2). In the training phase, participants searched for a T among Ls, preceded by 2 sec of auditory stimulus. The target location could be predicted from the preceding auditory stimulus. In the test phase, the auditory-visual association pairings were disrupted. The results revealed that a contextual cuing effect occurs by auditory-visual association. Participants did not notice the auditory-visual association. Experiment 3 explored a boundary condition for the auditory-visual contextual cuing effect. These results suggest that visual attention can be guided implicitly by crossmodal association, and they extend the idea that the visual system is sensitive to all kinds of statistical consistency.

16.
Vatakis, A., & Spence, C. (in press; Crossmodal binding: Evaluating the 'unity assumption' using audiovisual speech stimuli, Perception & Psychophysics) recently demonstrated that when two briefly presented speech signals (one auditory and the other visual) refer to the same audiovisual speech event, people find it harder to judge their temporal order than when they refer to different speech events. Vatakis and Spence argued that the 'unity assumption' facilitated crossmodal binding on the former (matching) trials by means of a process of temporal ventriloquism. In the present study, we investigated whether the 'unity assumption' would also affect the binding of non-speech stimuli (video clips of object action or musical notes). The auditory and visual stimuli were presented at a range of stimulus onset asynchronies (SOAs) using the method of constant stimuli. Participants made unspeeded temporal order judgments (TOJs) regarding which modality stream had been presented first. The auditory and visual musical and object action stimuli were either matched (e.g., the sight of a note being played on a piano together with the corresponding sound) or else mismatched (e.g., the sight of a note being played on a piano together with the sound of a guitar string being plucked). However, in contrast to the results of Vatakis and Spence's recent speech study, no significant difference in the accuracy of temporal discrimination performance for the matched versus mismatched video clips was observed. Reasons for this discrepancy are discussed.
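As background on the method of constant stimuli and TOJ tasks described above, a common analysis fits a cumulative Gaussian to the proportion of "visual first" responses across SOAs to estimate the point of subjective simultaneity (PSS) and the just noticeable difference (JND). The sketch below uses hypothetical response data and illustrates this standard textbook analysis, not necessarily the exact procedure of the cited study:

```python
import numpy as np
from scipy.optimize import curve_fit
from scipy.stats import norm

# Hypothetical SOAs (ms; negative = auditory stream first) and proportion "visual first"
soas = np.array([-200, -100, -50, 0, 50, 100, 200])
p_visual_first = np.array([0.05, 0.15, 0.35, 0.55, 0.70, 0.90, 0.97])

def psychometric(soa, pss, sigma):
    """Cumulative Gaussian psychometric function."""
    return norm.cdf(soa, loc=pss, scale=sigma)

(pss, sigma), _ = curve_fit(psychometric, soas, p_visual_first, p0=(0.0, 50.0))
jnd = sigma * norm.ppf(0.75)  # SOA shift from 50% to 75% "visual first" responses
print(f"PSS = {pss:.1f} ms, JND = {jnd:.1f} ms")
```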

17.
It has been suggested that certain prefrontal areas contribute to a neural circuit that mediates visual object memory. Using a successive go/no-go visual scene discrimination task, object-based long-term memory was assessed in two rodent prefrontal regions. Rewarded trials consisted of a standard scene of four toy objects placed over baited food wells. The objects composing the standard scene, and their locations, remained constant for the duration of the study. Trials in which one of the standard-scene objects was replaced with a novel object were not rewarded. Once a significant difference was established between the latency to approach the rewarded standard scene and the latency to approach non-rewarded scenes, quinolinic acid or control vehicle was infused into either the prelimbic and infralimbic cortices or the anterior cingulate cortex. Following a 1-week recovery period, subjects were retested. Animals with prelimbic/infralimbic cortex lesions displayed a profound and sustained deficit, whereas animals with anterior cingulate cortex lesions showed a slight initial impairment but eventually recovered. Both lesion groups acquired a simple single-object discrimination task as quickly as controls, indicating that the deficits on the original scene discrimination task were not due to motivational, response inhibition, or perceptual problems.

18.
This study investigated whether and how visual representations of individual objects are bound in memory to scene context. Participants viewed a series of naturalistic scenes, and memory for the visual form of a target object in each scene was examined in a 2-alternative forced-choice test, with the distractor object either a different object token or the target object rotated in depth. In Experiments 1 and 2, object memory performance was more accurate when the test object alternatives were displayed within the original scene than when they were displayed in isolation, demonstrating object-to-scene binding. Experiment 3 tested the hypothesis that episodic scene representations are formed through the binding of object representations to scene locations. Consistent with this hypothesis, memory performance was more accurate when the test alternatives were displayed within the scene at the same position originally occupied by the target than when they were displayed at a different position.

19.
Although music and dance are often experienced simultaneously, it is unclear what modulates their perceptual integration. This study investigated how two factors related to music–dance correspondences influenced audiovisual binding of their rhythms: the metrical match between the music and dance, and the kinematic familiarity of the dance movement. Participants watched a point-light figure dancing synchronously to a triple-meter rhythm that they heard in parallel, whereby the dance communicated a triple (congruent) or a duple (incongruent) visual meter. The movement was either the participant’s own or that of another participant. Participants attended to both streams while detecting a temporal perturbation in the auditory beat. The results showed lower sensitivity to the auditory deviant when the visual dance was metrically congruent to the auditory rhythm and when the movement was the participant’s own. This indicated stronger audiovisual binding and a more coherent bimodal rhythm in these conditions, thus making a slight auditory deviant less noticeable. Moreover, binding in the metrically incongruent condition involving self-generated visual stimuli was correlated with self-recognition of the movement, suggesting that action simulation mediates the perceived coherence between one’s own movement and a mismatching auditory rhythm. Overall, the mechanisms of rhythm perception and action simulation could inform the perceived compatibility between music and dance, thus modulating the temporal integration of these audiovisual stimuli.
