Similar Articles
1.
To interpret our environment, we integrate information from all our senses. For moving objects, auditory and visual motion signals are correlated and provide information about the speed and the direction of the moving object. We investigated at what level the auditory and the visual modalities interact and whether the human brain integrates only motion signals that are ecologically valid. We found that the sensitivity for identifying motion was improved when motion signals were provided in both modalities. This improvement in sensitivity can be explained by probability summation. That is, auditory and visual stimuli are combined at a decision level, after the stimuli have been processed independently in the auditory and the visual pathways. Furthermore, this integration is direction-blind and is not restricted to ecologically valid motion signals.
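
The probability-summation prediction is easy to state as a formula. Below is a minimal sketch assuming two independent detection channels combined only at the decision level; the probabilities are hypothetical, not the authors' data.

```python
# Probability summation: a bimodal signal is detected whenever EITHER the
# auditory OR the visual channel detects it, assuming independent channels
# that are combined only at the decision level.

def probability_summation(p_auditory: float, p_visual: float) -> float:
    """Predicted bimodal detection probability for independent channels."""
    return 1.0 - (1.0 - p_auditory) * (1.0 - p_visual)

# Hypothetical example: each channel alone identifies motion on 60% of trials.
# Decision-level combination already predicts 84%, so a bimodal advantage of
# this size requires no early sensory fusion.
print(probability_summation(0.6, 0.6))  # 0.84
```

Sensitivity exceeding this prediction would implicate integration before the decision stage; the improvement reported above does not.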

2.
Task analytic theories of graph comprehension account for the perceptual and conceptual processes required to extract specific information from graphs. Comparatively, the processes underlying information integration have received less attention. We propose a new framework for information integration that highlights visual integration and cognitive integration. During visual integration, pattern recognition processes are used to form visual clusters of information; these visual clusters are then used to reason about the graph during cognitive integration. In 3 experiments, the processes required to extract specific information and to integrate information were examined by collecting verbal protocol and eye movement data. Results supported the task analytic theories for specific information extraction and the processes of visual and cognitive integration for integrative questions. Further, the integrative processes scaled up as graph complexity increased, highlighting the importance of these processes for integration in more complex graphs. Finally, based on this framework, design principles to improve both visual and cognitive integration are described.

3.
The visual system historically has been defined as consisting of at least two broad subsystems subserving object and spatial vision. These visual processing streams have been organized both structurally as two distinct pathways in the brain, and functionally for the types of tasks that they mediate. The classic definition by Ungerleider and Mishkin labeled a ventral "what" stream to process object information and a dorsal "where" stream to process spatial information. More recently, Goodale and Milner redefined the two visual systems with a focus on the different ways in which visual information is transformed for different goals. They relabeled the dorsal stream as a "how" system for transforming visual information using an egocentric frame of reference in preparation for direct action. This paper reviews recent research from psychophysics, neurophysiology, neuropsychology and neuroimaging to define the roles of the ventral and dorsal visual processing streams. We discuss a possible solution that allows for both "where" and "how" systems that are functionally and structurally organized within the posterior parietal lobe.

4.
Our perception of the visual world remains stable and continuous despite the disruptions caused by retinal image displacements during saccadic eye movements. The problem of visual stability is closely related to the question of whether information is transferred across such eye movements and, if so, what sort of information is transferred. We report experiments carried out to investigate how presaccadic signals at the location of the saccade goal influence the visibility of postsaccadic test signals presented at the fovea. The signals were Landolt rings of different orientations. If the orientations of pre- and postsaccadic Landolt rings were different, the thresholds of the test signals were elevated by about 20%–25% relative to those in the static control condition. When the orientations were identical, no such elevation occurred. This selective threshold elevation effect proved to be a phenomenon different from ordinary saccadic suppression, although it was closely related to the execution of the saccadic eye movement. The consequences for visual stability are discussed.

5.
Two experiments studied how information about the nontarget items in a visual search task is used for the control of the search. The first experiment used the detection of “hurdle” stimuli to demonstrate that efficient memory representations of the context items can be established within each particular trial. This finding is explained by a model for the short-term integration of context information. The second experiment, which varied the complexity of the local but not the global context, provided some information about the nature of the integration operations involved. In its final version, the model postulates two stages of processing with independent mechanisms of integration. Spatial integration at the first stage deletes repetitions within small samples. Temporal integration at the second stage stores and primes the memory representations of the context items over larger intervals. It is assumed that transient temporal integration within trials is mediated by the same mechanism that underlies permanent temporal integration between trials.
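
The two-stage model lends itself to a small illustration. The sketch below assumes a deduplicating first stage and a decaying priming store as the second stage; the data structures and parameter values are hypothetical, not taken from the paper.

```python
# Toy sketch of the proposed two-stage integration of context information.
# The sample contents and decay value are hypothetical, for illustration only.

from collections import defaultdict

PRIMING_DECAY = 0.5  # hypothetical between-sample decay of priming

def spatial_integration(sample):
    """Stage 1: delete repetitions within a small sample of context items."""
    return set(sample)

def temporal_integration(samples):
    """Stage 2: store and prime memory representations over larger intervals."""
    priming = defaultdict(float)
    for sample in samples:
        for item in list(priming):       # transient decay of older primes
            priming[item] *= PRIMING_DECAY
        for item in spatial_integration(sample):
            priming[item] += 1.0         # store / re-prime this context item
    return dict(priming)

# A context item repeated across samples ends up more strongly primed than
# a novel one, the kind of efficiency the model is meant to capture.
print(temporal_integration([["A", "A", "B"], ["A", "C"]]))
# {'A': 1.5, 'B': 0.5, 'C': 1.0}
```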

6.
Constructing useful representations of our visual environment requires the ability to selectively pay attention to particular locations at specific moments. Whilst there has been much investigation of the influence of selective attention on spatial discrimination, less is known about its influence on temporal discrimination. In particular, little is known about how endogenous attention influences two fundamental and opposing temporal processes: segregation – the parsing of the visual scene over time into separate features – and integration – the binding together of related elements. In four experiments, we tested how endogenous cueing to a location influences each of these opposing processes. Results demonstrate a strong cueing effect on both segregation and integration. These results are consistent with the hypothesis that endogenous attention can influence both of these opposing processes in a flexible manner. The finding has implications for arbitrating between accounts of the multiple modulatory mechanisms comprising selective attention.

7.
The visual environment is extremely rich and complex, producing information overload for the visual system. But the environment also embodies structure in the form of redundancies and regularities that may serve to reduce complexity. How do perceivers internalize this complex informational structure? We present new evidence of visual learning that illustrates how observers learn how objects and events covary in the visual world. This information serves to guide visual processes such as object recognition and search. Our first experiment demonstrates that search and object recognition are facilitated by learned associations (covariation) between novel visual shapes. Our second experiment shows that regularities in dynamic visual environments can also be learned to guide search behavior. In both experiments, learning occurred incidentally and the memory representations were implicit. These experiments show how top-down visual knowledge, acquired through implicit learning, constrains what to expect and guides where to attend and look.

8.
Preterm children are at risk for a number of visual impairments, which can in turn affect a range of more complex visuocognitive tasks reliant on visual information. Despite the relatively high incidence of visual impairments in this group, there are no good predictors that would allow early identification of those at risk for adverse outcomes. Several lines of evidence suggest that docosahexaenoic acid (DHA) supplementation for preterm infants may improve outcomes in this area. For example, diets deficient in the long-chain polyunsaturated fatty acid DHA have been shown to reduce its concentration in the cerebral cortex and retina, which interferes with physiological processes important for cognition and visual functioning. Further, various studies with pregnant and lactating women, as well as formula-fed infants, have demonstrated a general trend that supplementation with dietary DHA is associated with better childhood outcomes on tests of visual and cognitive development over the first year of life. However, research to date has several methodological limitations, including concentrations of DHA supplementation that have been too low to emulate the in utero accretion of DHA, the use of single measures of visual acuity to make generalised assumptions about the entire visual system, and little attempt to match what we know about inadequate DHA and its structural ramifications with how specific functions may be affected. The objective of this review is to consider the role of DHA in the context of visual processing, with a specific emphasis on preterm infants, and to illustrate how future research may benefit from marrying what we know about the structural consequences of inadequate DHA with functional outcomes that likely have far-reaching ramifications. Factors worth considering for clinical neuropsychological evaluation are also discussed.

9.
This study examined the multisensory integration of visual and auditory motion information using a methodology designed to single out perceptual integration processes from post-perceptual influences. We assessed the threshold stimulus onset asynchrony (SOA) at which the relative directions (same vs. different) of simultaneously presented visual and auditory apparent motion streams could no longer be discriminated (Experiment 1). This threshold was higher than the upper threshold for direction discrimination (left vs. right) of each individual modality when presented in isolation (Experiment 2). The poorer performance observed in bimodal displays was interpreted as a consequence of automatic multisensory integration of motion information. Experiment 3 supported this interpretation by ruling out task differences as the explanation for the higher threshold in Experiment 1. Together these data provide empirical support for the view that multisensory integration of motion signals can occur at a perceptual level.
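
Threshold SOAs of this kind are usually estimated by fitting a psychometric function to proportion-correct data. A generic sketch follows, with performance falling toward chance as the two apparent-motion streams become too asynchronous; this is standard practice rather than the authors' exact procedure, and the data points are hypothetical.

```python
# Generic sketch of threshold-SOA estimation via a psychometric function.
# Standard practice, not necessarily the authors' exact analysis; the
# data points below are hypothetical.

import numpy as np
from scipy.optimize import curve_fit

def logistic(soa, threshold, slope):
    """Proportion correct falling from 1.0 toward 0.5 (chance) as SOA grows."""
    return 0.5 + 0.5 / (1.0 + np.exp((soa - threshold) / slope))

soa = np.array([50, 100, 150, 200, 250, 300], dtype=float)   # ms
p_correct = np.array([0.97, 0.93, 0.85, 0.70, 0.58, 0.52])   # same/different task

(threshold, slope), _ = curve_fit(logistic, soa, p_correct, p0=[150.0, 30.0])
print(f"estimated threshold SOA: {threshold:.0f} ms")  # the 75%-correct point
```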

10.
Global precedence in attention and decision
This article examines the order in which people recognize and respond to different levels of structure within a visual display. Certain previous experiments have been interpreted as showing that information about global characteristics of an array is extracted by the visual system before information about local characteristics. Results from a task in which the observer must attend to both global and local information demonstrate that local information has a large influence on reaction time even when information at the global level is sufficient to determine the response. This finding implies that local information becomes available to decision processes with a time course similar to that of global information. Effects previously attributed to the order in which different levels of structure are recognized may result from differential ease of directing attention to these different levels and selecting responses based on them.

11.
Studies of the McGurk effect have shown that when discrepant phonetic information is delivered to the auditory and visual modalities, the information is combined into a new percept not originally presented to either modality. In typical experiments, the auditory and visual speech signals are generated by the same talker. The present experiment examined whether a discrepancy in the gender of the talker between the auditory and visual signals would influence the magnitude of the McGurk effect. A male talker’s voice was dubbed onto a videotape containing a female talker’s face, and vice versa. The gender-incongruent videotapes were compared with gender-congruent videotapes, in which a male talker’s voice was dubbed onto a male face and a female talker’s voice was dubbed onto a female face. Even though there was a clear incompatibility in talker characteristics between the auditory and visual signals on the incongruent videotapes, the resulting magnitude of the McGurk effect was not significantly different for the incongruent as opposed to the congruent videotapes. The results indicate that the mechanism for integrating speech information from the auditory and the visual modalities is not disrupted by a gender incompatibility even when it is perceptually apparent. The findings are compatible with the theoretical notion that information about voice characteristics of the talker is extracted and used to normalize the speech signal at an early stage of phonetic processing, prior to the integration of the auditory and the visual information.

12.
The processes whereby our brains continue to learn about a changing world in a stable fashion throughout life are proposed to lead to conscious experiences. These processes include the learning of top-down expectations, the matching of these expectations against bottom-up data, the focusing of attention upon the expected clusters of information, and the development of resonant states between bottom-up and top-down processes as they reach an attentive consensus between what is expected and what is there in the outside world. It is suggested that all conscious states in the brain are resonant states and that these resonant states trigger learning of sensory and cognitive representations. The models which summarize these concepts are therefore called Adaptive Resonance Theory, or ART, models. Psychophysical and neurobiological data in support of ART are presented from early vision, visual object recognition, auditory streaming, variable-rate speech perception, somatosensory perception, and cognitive-emotional interactions, among others. It is noted that ART mechanisms seem to be operative at all levels of the visual system, and it is proposed how these mechanisms are realized by known laminar circuits of visual cortex. It is predicted that the same circuit realization of ART mechanisms will be found in the laminar circuits of all sensory and cognitive neocortex. Concepts and data are summarized concerning how some visual percepts may be visibly, or modally, perceived, whereas amodal percepts may be consciously recognized even though they are perceptually invisible. It is also suggested that sensory and cognitive processing in the What processing stream of the brain obey top-down matching and learning laws that are often complementary to those used for spatial and motor processing in the brain's Where processing stream. This enables our sensory and cognitive representations to maintain their stability as we learn more about the world, while allowing spatial and motor representations to forget learned maps and gains that are no longer appropriate as our bodies develop and grow from infanthood to adulthood. Procedural memories are proposed to be unconscious because the inhibitory matching process that supports these spatial and motor processes cannot lead to resonance.
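
For the mechanics, the sketch below implements a toy ART-1-style match/resonance/reset cycle for binary inputs, in which a vigilance parameter decides whether a top-down expectation matches bottom-up data closely enough to resonate and trigger learning. It is a didactic reduction of the theory, with arbitrary parameter values, and it omits almost everything else (laminar circuits, attention, and so on).

```python
# Toy ART-1-style sketch of the match/resonance/reset cycle for binary
# inputs. Illustrative only; parameter values are arbitrary.

import numpy as np

class ToyART1:
    def __init__(self, vigilance=0.75, alpha=0.5):
        self.vigilance = vigilance  # how well top-down expectation must match
        self.alpha = alpha          # tie-breaking constant in category choice
        self.weights = []           # one learned binary template per category

    def train(self, pattern):
        pattern = np.asarray(pattern, dtype=bool)
        # Bottom-up choice: rank categories by |I AND w| / (alpha + |w|).
        order = sorted(range(len(self.weights)),
                       key=lambda j: -np.sum(pattern & self.weights[j]) /
                                      (self.alpha + np.sum(self.weights[j])))
        for j in order:
            overlap = np.sum(pattern & self.weights[j])
            # Vigilance test: resonance only if the match is close enough.
            if overlap / np.sum(pattern) >= self.vigilance:
                self.weights[j] = pattern & self.weights[j]  # resonate, learn
                return j
            # Otherwise: reset, and try the next-best category.
        self.weights.append(pattern.copy())  # no resonance: new category
        return len(self.weights) - 1

art = ToyART1()
print(art.train([1, 1, 0, 0]))  # 0: first input founds category 0
print(art.train([1, 1, 0, 0]))  # 0: expectation matches, resonance, learning
print(art.train([0, 0, 1, 1]))  # 1: vigilance fails, reset, new category
```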

13.
Correctly integrating sensory information across different modalities is a vital task, yet there are illusions which cause the incorrect localization of multisensory stimuli. A common example of these phenomena is the "ventriloquism effect". In this illusion, the localization of auditory signals is biased by the presence of visual stimuli. For instance, when a light and sound are simultaneously presented, observers may erroneously locate the sound closer to the light than its actual position. While this phenomenon has been studied extensively in azimuth at a single depth, little is known about the interactions of stimuli at different depth planes. In the current experiment, virtual acoustics and stereo-image displays were used to test the integration of visual and auditory signals across azimuth and depth. The results suggest that greater variability in the localization of sounds in depth may lead to a greater bias from visual stimuli in depth than in azimuth. These results offer interesting implications for understanding multisensory integration.
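
The suggested explanation matches the standard reliability-weighted (maximum-likelihood) model of cue combination, in which each cue is weighted by its inverse variance, so noisier auditory depth estimates cede more weight to vision. A minimal sketch under that assumption follows; the abstract itself does not commit to this model, and all numbers are invented.

```python
# Minimal reliability-weighted (maximum-likelihood) cue-combination sketch.
# A standard model offered as an interpretation; all numbers are made up.

def integrate(visual_pos, visual_sigma, auditory_pos, auditory_sigma):
    """Fused location estimate: each cue weighted by its inverse variance."""
    w_visual = (1 / visual_sigma**2) / (1 / visual_sigma**2 + 1 / auditory_sigma**2)
    return w_visual * visual_pos + (1 - w_visual) * auditory_pos

# Azimuth: auditory localization is relatively precise.
print(integrate(0.0, 1.0, 10.0, 2.0))  # 2.0: sound pulled 80% toward the light
# Depth: auditory localization is much noisier, so visual capture grows.
print(integrate(0.0, 1.0, 10.0, 4.0))  # ~0.6: a stronger ventriloquism bias
```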

14.
Cyclists are considered to be amongst the most vulnerable road users, and the number of cyclists involved in crashes is increasing. One possibility to improve bicycle safety is the implementation of assistance systems, for instance by providing the information needed to avoid critical situations. However, it is not known how and what kind of signals can reliably be transmitted to cyclists, in particular as warnings. The objective of this study is to investigate which signal types, depending on modality and route type, can be perceived during the cycling task. We therefore conducted a semi-naturalistic cycling study with 56 participants, in which a 10 km long, pre-defined route was cycled individually while 36 signals (visual, auditory and vibro-tactile) were transmitted. The participants signalled the perception of a signal by pressing a button. Response rates differed significantly between signal modalities: auditory signals performed best, closely followed by vibro-tactile signals, while visual signals were frequently missed. The route type had an effect on the perception of the signals; in particular, route segments with haptic interference impaired the perception of vibro-tactile signals far more than expected. The obtained results indicate how, and in which situations, the different modalities are suited to transmit information to cyclists.

15.
Researchers often conduct visual world studies to investigate how listeners integrate linguistic information with prior context. Such studies are likely to generate anticipatory baseline effects (ABEs), differences in listeners' expectations about what a speaker might mention that exist before a critical speech stimulus is presented. ABEs show that listeners have attended to and accessed prior contextual information in time to influence the processing of the critical speech stimulus. However, further evidence is required to show that the information actually did influence subsequent processing. ABEs can compromise the validity of inferences about information integration if they are not appropriately controlled. We discuss four solutions: statistical estimation, experimental control, elimination of "on-target" trials, and neutral gaze. An experiment compares the performance of these solutions, and suggests that the elimination of on-target trials introduces bias in the direction of ABEs, due to the statistical phenomenon of regression toward the mean. We conclude that statistical estimation, possibly coupled with experimental control, offers the most valid and least biased solution.
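
The regression-toward-the-mean point can be made concrete with a small simulation. In the sketch below there is no true effect of the speech stimulus, yet eliminating on-target trials produces an apparent rise in target fixations; the latent-preference model and all numbers are illustrative, not the authors' data.

```python
# Simulating the bias introduced by eliminating "on-target" trials.
# There is NO stimulus effect in this model; any apparent rise in target
# fixations is pure regression toward the mean.

import numpy as np

rng = np.random.default_rng(0)
n_trials = 100_000

# Latent per-trial preference for fixating the target, plus independent
# measurement noise in the baseline and test windows (illustrative model).
preference = rng.normal(0.0, 1.0, n_trials)
on_target_baseline = (preference + rng.normal(0.0, 1.0, n_trials)) > 0
on_target_test = (preference + rng.normal(0.0, 1.0, n_trials)) > 0

print(on_target_test.mean())                # ~0.50: true rate, no effect

selected = ~on_target_baseline              # eliminate on-target baseline trials
print(on_target_baseline[selected].mean())  # 0.0 by construction
print(on_target_test[selected].mean())      # ~0.33: a spurious "rise" over time
```

With no true effect, the selected subsample still shows target fixations climbing from 0% to about 33% between the two windows, exactly the ABE-mimicking bias described above.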

16.
To determine how the visual system represents information about change in target direction, we studied the detection of such change under conditions of varying stimulus certainty. Target direction was either held constant over trials or was allowed to vary randomly. When target direction was constant the observer could be certain about that stimulus characteristic; randomizing the target direction rendered the observer uncertain. We measured response times (RTs) to changes in target direction following initial trajectories of varying time and distance. In different conditions, the observer was uncertain about either the direction of the initial trajectory, or the direction of change or both. With brief initial trajectories in random directions, uncertainty about initial direction elevated RTs by 50 ms or more. When the initial trajectories were at least 500 ms, this directional uncertainty ceased to affect RTs; then, only uncertainty about the direction of change affected RTs. We discuss the implications of these results for (i) schemes by which the visual system might code directional change; (ii) the visual integration time for directional information; and (iii) adaptational processes in motion perception.

17.
Multisensory integration and crossmodal attention have a large impact on how we perceive the world. Therefore, it is important to know under what circumstances these processes take place and how they affect our performance. So far, no consensus has been reached on whether multisensory integration and crossmodal attention operate independently and whether they represent truly automatic processes. This review describes the constraints under which multisensory integration and crossmodal attention occur and in what brain areas these processes take place. Some studies suggest that multisensory integration and crossmodal attention take place in higher heteromodal brain areas, while others show the involvement of early sensory specific areas. Additionally, the current literature suggests that multisensory integration and attention interact depending on the processing level at which integration takes place. To shed light on this issue, different frameworks regarding the level at which multisensory interactions take place are discussed. Finally, this review focuses on the question of whether audiovisual interactions and crossmodal attention in particular are automatic processes. Recent studies suggest that this is not always the case. Overall, this review provides evidence for a parallel processing framework suggesting that both multisensory integration and attentional processes take place and can interact at multiple stages in the brain.

18.
Linking form and motion in the primate brain
Understanding dynamic events entails the integration of information about form and motion that is crucial for fast and successful interactions in complex environments. A striking example of our sensitivity to dynamic information is our ability to recognize animate figures by the way they move and to infer motion from still images. Accumulating evidence for form and motion interactions contrasts with the traditional dissociation between shape- and motion-related processes in the ventral and dorsal visual pathways. By combining findings from physiology and brain imaging, it can be demonstrated that the primate brain converts information about spatiotemporal sequences into meaningful actions through interactions between early and higher visual areas processing form and motion and frontal-parietal circuits involved in the understanding of actions.

19.
In the current study, we examined how short-term memory for location–identity feature bindings is influenced by subsequent cognitive and perceptual processing demands. Previous work has shown that memory performance for feature bindings can be disrupted by the presentation of subsequent visual information, particularly when this information is similar to that held in memory. The present study demonstrates that memory performance for feature bindings can be profoundly disrupted by also requiring a response to visual information presented subsequent to the visual memory array. Across five experiments, memory for a location–identity binding was substantially impaired following a localization response to a following item that matched the location but mismatched the identity of the memory target. The results point to an important role for action in the episodic integration processes that control short-term visual memory performance.
