Similar Articles
 20 similar articles found (search time: 15 ms)
1.
Computational theories of vision typically rely on the analysis of two aspects of human visual function: (1) object and shape recognition, and (2) co-calibration of sensory measurements. Both approaches are usually based on an inverse-optics model, in which visual perception is viewed as a process of inference from a 2D retinal projection to a 3D percept within a Euclidean space schema. This paradigm has had great success in certain areas of vision science but has been relatively less successful in understanding perceptual representation, namely, the nature of the perceptual encoding. One drawback of inverse-optics approaches has been the difficulty of defining the constraints needed to make the inference computationally tractable (e.g., regularity assumptions, Bayesian priors). These constraints, thought to be learned assumptions about the physical and optical structures of the external world, have to be incorporated into any workable computational model in the inverse-optics paradigm. But inference models that combine inverse optics with structural assumptions inevitably result in a naïve realist theory of perceptual representation. A further drawback of inference models for theories of perceptual representation is their inability to explain central features of visual experience. The feature most evident in the process and visual understanding of design is that some visual configurations appear, often spontaneously, perceptually more coherent than others. The epistemological consequences of inferential approaches to vision indicate that they fail to capture enduring aspects of our visual experience. Therefore, they may not be suited to a theory of perceptual representation, or useful for understanding the role of perception in the design process and product.

2.
In vision, it is well established that the perceptual load of a relevant task determines the extent to which irrelevant distractors are processed. Much less research has addressed the effects of perceptual load within hearing. Here, we provide an extensive test using two different perceptual load manipulations, measuring distractor processing through response competition and awareness report. Across four experiments, we consistently failed to find support for the role of perceptual load in auditory selective attention. We therefore propose that the auditory system – although able to selectively focus processing on a relevant stream of sounds – is likely to have surplus capacity to process auditory information from other streams, regardless of the perceptual load in the attended stream. This accords well with the notion of the auditory modality acting as an ‘early-warning’ system, as detection of changes in the auditory scene is crucial even when the perceptual demands of the relevant task are high.

3.
Time course of perceptual grouping by color
Does perceptual grouping operate early or late in visual processing? One position is that the elements in perceptual layouts are grouped early in vision, by properties of the retinal image, before perceptual constancies have been determined. A second position is that perceptual grouping operates on a postconstancy representation, one that is available only after stereoscopic depth perception, lightness constancy, and amodal completion have occurred. The present experiments indicate that grouping can operate on both a preconstancy representation and a postconstancy representation. Perceptual grouping was based on retinal color similarity at short exposure durations and based on surface color similarity at long durations. These results permit an integration of the preconstancy and postconstancy positions with regard to grouping by color.

4.
Recent work on non-visual modalities aims to translate, extend, revise, or unify claims about perception beyond vision. This paper presents central lessons drawn from attention to hearing, sounds, and multimodality. It focuses on auditory awareness and its objects, and it advances more general lessons for perceptual theorizing that emerge from thinking about sounds and audition. The paper argues that sounds and audition no better support the privacy of perception’s objects than does vision; that perceptual objects are more diverse than an exclusively visual perspective suggests; and that multimodality is rampant. In doing so, it presents an account according to which audition affords awareness as of not just sounds, but also environmental happenings beyond sounds.

5.
A new theory of mind–body interaction in healing is proposed based on considerations from the field of perception. It is suggested that the combined effect of visual imagery and mindful meditation on physical healing is simply another example of cross-modal adaptation in perception, much like adaptation to prism-displaced vision. It is argued that psychological interventions produce a conflict between the perceptual modalities of the immune system and vision (or touch), which leads to change in the immune system in order to realign the modalities. It is argued that mind–body interactions do not exist because of higher-order cognitive thoughts or beliefs influencing the body, but instead result from ordinary interactions between lower-level perceptual modalities that function to detect when sensory systems have made an error. The theory helps explain why certain illnesses may be more amenable to mind–body interaction, such as autoimmune conditions in which a sensory system (the immune system) has made an error. It also renders sensible erroneous changes, such as those brought about by “faith healers,” as conflicts between modalities that are resolved in favor of the wrong modality. The present view provides one of very few psychological theories of how guided imagery and mindfulness meditation bring about positive physical change. Also discussed are issues of self versus non-self, pain, cancer, body schema, attention, consciousness, and, importantly, developing the concept that the immune system is a rightful perceptual modality. Recognizing mind–body healing as perceptual cross-modal adaptation implies that a century of cross-modal perception research is applicable to the immune system.

6.
The color of odors
The interaction between the vision of colors and odor determination is investigated through lexical analysis of experts' wine tasting comments. The analysis shows that the odors of a wine are, for the most part, represented by objects that have the color of the wine. The assumption of the existence of a perceptual illusion between odor and color is confirmed by a psychophysical experiment. A white wine artificially colored red with an odorless dye was described in olfactory terms as a red wine by a panel of 54 tasters. Hence, because of the visual information, the tasters discounted the olfactory information. Together with recent psychophysical and neuroimaging data, our results suggest that the above perceptual illusion occurs during the verbalization phase of odor determination.

7.
Two experiments evaluated change in the perception of an environmental property (object length) in each of 3 perceptual modalities (vision, audition, and haptics) when perceivers were provided with the opportunity to experience the same environmental property by means of an additional perceptual modality (e.g., haptics followed by vision, vision followed by audition, or audition followed by haptics). Experiment 1 found that (a) posttest improvements in perceptual consistency occurred in all 3 perceptual modalities, regardless of whether practice included experience in an additional perceptual modality and (b) posttest improvements in perceptual accuracy occurred in haptics and audition but only when practice included experience in an additional perceptual modality. Experiment 2 found that learning curves in each perceptual modality could be accommodated by a single function in which auditory perceptual learning occurred over short time scales, haptic perceptual learning occurred over middle time scales, and visual perceptual learning occurred over long time scales. Analysis of trial-to-trial variability revealed patterns of long-term correlations in all perceptual modalities regardless of whether practice included experience in an additional perceptual modality.

8.
Humans gain a wide range of knowledge through interacting with the environment. Each aspect of our perceptual experiences offers a unique source of information about the world—colours are seen, sounds heard and textures felt. Understanding how perceptual input provides a basis for knowledge is thus central to understanding one's own and others' epistemic states. Developmental research suggests that 5-year-olds have an immature understanding of knowledge sources and that they overestimate the knowledge to be gained from looking. Without evidence from adults, it is not clear whether the mature reasoning system outgrows this overestimation. The current study is the first to investigate whether an overestimation of the knowledge to be gained from vision occurs in adults. Novel response time paradigms were adapted from developmental studies. In two experiments, participants judged whether an object or feature could be identified by performing a specific action. Adult participants found it disproportionately easy to accept looking as a proposed action when it was informative, and difficult to reject looking when it was not informative. This suggests that adults, like children, overestimate the informativeness of vision. The origin of this overestimation and the implications that the current findings bear on the interpretation of children's overestimation are discussed.

9.
Rationalizing the perceptual effects of spectral stimuli has been a major challenge in vision science for at least the last 200 years. Here we review evidence that this otherwise puzzling body of phenomenology is generated by an empirical strategy of perception in which the color an observer sees is entirely determined by the probability distribution of the possible sources of the stimulus. The rationale for this strategy in color vision, as in other visual perceptual domains, is the inherent ambiguity of the real-world origins of any spectral stimulus.

10.
In formulating a theory of perception that does justice to the embodied and enactive nature of perceptual experience, proprioception can play a valuable role. Since proprioception is necessarily embodied, and since proprioceptive experience is particularly integrated with one’s bodily actions, it seems clear that proprioception, in addition to, e.g., vision or audition, can provide us with valuable insights into the role of an agent’s corporeal skills and capacities in constituting or structuring perceptual experience. However, if we are going to have the opportunity to argue from analogy with proprioception to vision, audition, touch, taste, or smell, then it is necessary to eschew any doubts about the legitimacy of proprioception’s inclusion into the category of perceptual modalities. To this end, in this article, I (1) respond to two arguments that Shaun Gallagher (2003) presents in “Bodily self-awareness and object perception” against proprioception’s ability to meet the criteria of object perception, (2) present a diagnosis of Gallagher’s position by locating a misunderstanding in the distinction between proprioceptive information and proprioceptive awareness, and (3) show that treating proprioception as a perceptual modality allows us to account for the interaction of proprioception with the other sensory modalities, to apply the lessons we learn from proprioception to the other sensory modalities, and to account for proprioceptive learning. Finally, (4) I examine Sydney Shoemaker’s (1994) identification constraint and suggest that a full-fledged notion of object-hood is unnecessary to ground a theory of perception.

11.
Beyond perceiving the features of individual objects, we also have the intriguing ability to efficiently perceive average values of collections of objects across various dimensions. Over what features can perceptual averaging occur? Work to date has been limited to visual properties, but perceptual experience is intrinsically multimodal. In an initial exploration of how this process operates in multimodal environments, we explored statistical summarizing in audition (averaging pitch from a sequence of tones) and vision (averaging size from a sequence of discs), and their interaction. We observed two primary results. First, not only was auditory averaging robust, but if anything, it was more accurate than visual averaging in the present study. Second, when uncorrelated visual and auditory information were simultaneously present, observers showed little cost for averaging in either modality when they did not know until the end of each trial which average they had to report. These results illustrate that perceptual averaging can span different sensory modalities, and they also illustrate how vision and audition can both cooperate and compete for resources.

12.
Under specified conditions a pair of simple shapes are matched by a subject when almost all supplementary textural and space cues of depth vision have been removed. Under such conditions it is found that shape constancy is no longer present. However, the effect of regular rotatory motion of the shape is sufficient to restore constancy in the continued absence of other cues. Degree of perceptual constancy appears to be correlated with rate of change of shape. It is suggested that an explanation of the phenomenon is to be sought along the lines of Michotte's concept of “object creation” rather than in terms of gradient variables.

13.
The relationship between luminance (i.e., the photometric intensity of light) and its perception (i.e., sensations of lightness or brightness) has long been a puzzle. In addition to the mystery of why these perceptual qualities do not scale with luminance in any simple way, "illusions" such as simultaneous brightness contrast, Mach bands, Craik-O'Brien-Cornsweet edge effects, and the Chubb-Sperling-Solomon illusion have all generated much interest but no generally accepted explanation. The authors review evidence that the full range of this perceptual phenomenology can be rationalized in terms of an empirical theory of vision. The implication of these observations is that perceptions of lightness and brightness are generated according to the probability distributions of the possible sources of luminance values in stimuli that are inevitably ambiguous.
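The probabilistic idea in this abstract can be made concrete with a toy calculation. The sketch below is purely illustrative (the function name, the discretized source set, and all numbers are this editor's assumptions, not the authors' model): a measured luminance L is ambiguous among many (reflectance, illumination) pairs with R × I = L, and the "perceived" lightness is taken to be the mean reflectance of the consistent sources, weighted by their prior probability.

```python
# Toy sketch of the empirical theory: percept = expectation over the
# probability distribution of possible real-world sources of the stimulus.
# All values here are illustrative, not from the reviewed paper.

def posterior_reflectance(L, sources, tol=0.02):
    """sources: list of (reflectance, illumination, prior) triples.
    Keep only sources consistent with the measurement R * I ~= L,
    then return the prior-weighted mean reflectance."""
    consistent = [(r, p) for r, i, p in sources if abs(r * i - L) < tol]
    z = sum(p for _, p in consistent)          # normalizing constant
    return sum(r * p for r, p in consistent) / z

# Three equally consistent physical explanations of luminance L = 0.24:
sources = [
    (0.8, 0.3, 0.2),   # light surface under dim illumination
    (0.4, 0.6, 0.5),   # mid surface under moderate illumination
    (0.3, 0.8, 0.3),   # dark surface under bright illumination
]
print(round(posterior_reflectance(0.24, sources), 3))  # prints 0.45
```

On this view, two patches with identical luminance can look different in lightness simply because their contexts imply different source distributions, which is the pattern the brightness "illusions" above exhibit.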

14.
Several studies have shown that the direction in which a visual apparent motion stream moves can influence the perceived direction of an auditory apparent motion stream (an effect known as crossmodal dynamic capture). However, little is known about the role that intramodal perceptual grouping processes play in the multisensory integration of motion information. The present study was designed to investigate the time course of any modulation of the crossmodal dynamic capture effect by the nature of the perceptual grouping taking place within vision. Participants were required to judge the direction of an auditory apparent motion stream while trying to ignore visual apparent motion streams presented in a variety of different configurations. Our results demonstrate that the crossmodal dynamic capture effect was influenced more by visual perceptual grouping when the conditions for intramodal perceptual grouping were set up prior to the presentation of the audiovisual apparent motion stimuli. However, no such modulation occurred when the visual perceptual grouping manipulation was established at the same time as or after the presentation of the audiovisual stimuli. These results highlight the importance of the unimodal perceptual organization of sensory information to the manifestation of multisensory integration.

15.
Pylyshyn Z. The Behavioral and Brain Sciences, 1999, 22(3): 341-365; discussion 366-423
Although the study of visual perception has made more progress in the past 40 years than any other area of cognitive science, there remain major disagreements as to how closely vision is tied to cognition. This target article sets out some of the arguments for both sides (arguments from computer vision, neuroscience, psychophysics, perceptual learning, and other areas of vision science) and defends the position that an important part of visual perception, corresponding to what some people have called early vision, is prohibited from accessing relevant expectations, knowledge, and utilities in determining the function it computes – in other words, it is cognitively impenetrable. That part of vision is complex and involves top-down interactions that are internal to the early vision system. Its function is to provide a structured representation of the 3-D surfaces of objects sufficient to serve as an index into memory, with somewhat different outputs being made available to other systems such as those dealing with motor control. The paper also addresses certain conceptual and methodological issues raised by this claim, such as whether signal detection theory and event-related potentials can be used to assess cognitive penetration of vision. A distinction is made among several stages in visual processing, including, in addition to the inflexible early-vision stage, a pre-perceptual attention-allocation stage and a post-perceptual evaluation, selection, and inference stage, which accesses long-term memory. These two stages provide the primary ways in which cognition can affect the outcome of visual perception. The paper discusses arguments from computer vision and psychology showing that vision is "intelligent" and involves elements of "problem solving." The cases of apparently intelligent interpretation sometimes cited in support of this claim do not show cognitive penetration; rather, they show that certain natural constraints on interpretation, concerned primarily with optical and geometrical properties of the world, have been compiled into the visual system. The paper also examines a number of examples where instructions and "hints" are alleged to affect what is seen. In each case it is concluded that the evidence is more readily assimilated to the view that when cognitive effects are found, they have a locus outside early vision, in such processes as the allocation of focal attention and the identification of the stimulus.

16.
Visual stimuli are multidimensional. One important perceptual problem is to determine how the dimensions are combined. One important aspect of dimensional combination is whether the dimensions are perceptually independent or perceptually correlated. A new task is presented—the visual detection task—that directly assesses the degree of perceptual correlation between any two dimensions. Two experiments were conducted that assess the degree of perceptual correlation between form and color during the early stages of perceptual analysis. The results show that form and color are not perceptually independent. In addition, the pattern of perceptual correlation found indicates that form and color are not processed independently. The pattern of results constrains all models of early vision. A model of early vision based on active signal modulation is proposed.

17.
Explicit memory tests such as recognition typically access semantic, modality-independent representations, while perceptual implicit memory tests typically access presemantic, modality-specific representations. By demonstrating comparable cross- and within-modal priming using vision and haptics with verbal materials (Easton, Srinivas, & Greene, 1997), we recently questioned whether the representations underlying perceptual implicit tests were modality specific. Unlike vision and audition, with vision and haptics verbal information can be presented in geometric terms to both modalities. The present experiments extend this line of research by assessing implicit and explicit memory within and between vision and haptics in the nonverbal domain, using both 2-D patterns and 3-D objects. Implicit test results revealed robust cross-modal priming for both 2-D patterns and 3-D objects, indicating that vision and haptics shared abstract representations of object shape and structure. Explicit test results for 3-D objects revealed modality specificity, indicating that the recognition system keeps track of the modality through which an object is experienced.

18.
19.
Four observers performed matching, identification, and categorization with stimuli that varied along the integral dimensions of brightness and saturation. General recognition theory (F. G. Ashby & J. T. Townsend, 1986) was applied to quantify the separate influences of perceptual and decisional processes within and across tasks, with a focus on separating perceptual from decisional attention processes. Good accounts of the identification data were obtained from the perceptual representation derived from matching. This perceptual representation provided a good account of the categorization data, except when decisional selective attention to 1 stimulus dimension was required. Decisional selective attention reduced the attended-dimension perceptual variance relative to the unattended-dimension perceptual variance, with a larger reduction resulting when brightness, as opposed to saturation, was attended. Implications for color vision research are discussed.

20.
In biological vision systems, attention mechanisms are responsible for selecting the relevant information from the sensed field of view, so that the complete scene can be analyzed using a sequence of rapid eye saccades. In recent years, efforts have been made to imitate such attention behavior in artificial vision systems, because it allows optimizing the computational resources as they can be focused on the processing of a set of selected regions. In the framework of mobile robotics navigation, this work proposes an artificial model where attention is deployed at the level of objects (visual landmarks) and where new processes for estimating bottom-up and top-down (target-based) saliency maps are employed. Bottom-up attention is implemented through a hierarchical process, whose final result is the perceptual grouping of the image content. The hierarchical grouping is applied using a Combinatorial Pyramid that represents each level of the hierarchy by a combinatorial map. The process takes into account both image regions (faces in the map) and edges (arcs in the map). Top-down attention searches for previously detected landmarks, enabling their re-detection when the robot presumes that it is revisiting a known location. Landmarks are described by a combinatorial submap; thus, this search is conducted through an error-tolerant submap isomorphism procedure.
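The combination of bottom-up and top-down saliency described above can be sketched in a few lines. This is a minimal illustration of the saliency-fusion step only, not of the paper's actual pipeline: the combinatorial-pyramid grouping and submap isomorphism are not reproduced, and the grids, weight, and function names are this editor's assumptions.

```python
# Hedged sketch: fuse a bottom-up conspicuity grid with a top-down
# (target-based) grid and pick the cell the model would attend next.
# Values and names are illustrative only.

def combine_saliency(bottom_up, top_down, w=0.5):
    """Weighted sum of two equally sized saliency grids (lists of rows)."""
    return [[w * b + (1 - w) * t for b, t in zip(rb, rt)]
            for rb, rt in zip(bottom_up, top_down)]

def most_salient(grid):
    """Return the (row, col) of the highest-saliency cell."""
    return max(((r, c) for r in range(len(grid)) for c in range(len(grid[0]))),
               key=lambda rc: grid[rc[0]][rc[1]])

bottom_up = [[0.1, 0.9, 0.2],
             [0.3, 0.2, 0.1]]   # e.g. contrast-driven conspicuity
top_down  = [[0.0, 0.2, 0.8],
             [0.9, 0.1, 0.0]]   # e.g. match score against a stored landmark

combined = combine_saliency(bottom_up, top_down, w=0.6)
print(most_salient(combined))   # prints (0, 1)
```

In a navigation setting, the top-down grid would only be populated when the robot presumes it is revisiting a known location, so that re-detection of stored landmarks can bias where the limited processing budget is spent.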


Copyright © Beijing Qinyun Technology Development Co., Ltd.  京ICP备09084417号