Similar Literature
20 similar documents found (search time: 31 ms)
1.
Current theoretical approaches to consciousness and vision associate the dorsal cortical pathway, in which magnocellular (M) input is dominant, with nonconscious visual processing and the ventral cortical pathway, in which parvocellular (P) input is dominant, with conscious visual processing. We explored the known differences between M and P contrast-response functions to investigate the roles of these channels in vision. Simulations of contrast-dependent priming revealed that priming effects obtained with unmasked, visible primes were best modeled by equations characteristic of M channel responses, whereas priming effects obtained with masked, invisible primes were best modeled by equations characteristic of P channel responses. In the context of current theoretical approaches to conscious and nonconscious processing, our results indicate a surprisingly significant role of M channels in conscious vision. In a broader discussion of the role of M channels in vision, we propose a neurophysiologically plausible interpretation of the present results: M channels indirectly contribute to conscious object vision via top-down modulation of reentrant activity in the ventral object-recognition stream.
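For readers who want to see what "equations characteristic of M and P channel responses" might look like, here is a minimal sketch in Python. It assumes the commonly cited contrast-response forms (a saturating Naka-Rushton function for M cells and a roughly linear, low-gain function for P cells); the function names and parameter values are illustrative and are not taken from the paper's simulations.

```python
import numpy as np

def m_response(contrast, r_max=1.0, c50=0.10):
    """M-channel-like contrast response: high contrast gain that
    saturates at low-to-moderate contrasts (Naka-Rushton form)."""
    return r_max * contrast / (contrast + c50)

def p_response(contrast, gain=1.2):
    """P-channel-like contrast response: low contrast gain,
    approximately linear over the usable contrast range."""
    return gain * contrast

for c in np.linspace(0.0, 1.0, 11):
    print(f"contrast {c:.1f}:  M-like = {m_response(c):.2f}   P-like = {p_response(c):.2f}")
```

With these illustrative parameters the M-like response rises steeply and is near saturation by moderate contrasts, while the P-like response grows roughly linearly across the range; that qualitative difference is what allows priming data to discriminate the two forms.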

2.
A successful vision system must solve the problem of deriving geometrical information about three-dimensional objects from two-dimensional photometric input. The human visual system solves this problem with remarkable efficiency, and one challenge in vision research is to understand how neural representations of objects are formed and what visual information is used to form these representations. Ideal observer analysis has demonstrated the advantages of studying vision from the perspective of explicit generative models and a specified visual task, which divides the causes of image variations into the separate categories of signal and noise. Classification image techniques estimate the visual information used in a task from the properties of “noise” images that interact most strongly with the task. Both ideal observer analysis and classification image techniques rely on the assumption of a generative model. We show here how the ability of the classification image approach to understand how an observer uses visual information can be improved by matching the type and dimensionality of the model to that of the neural representation or internal template being studied. Because image variation in real-world object tasks can arise from both geometrical shape and photometric (illumination or material) changes, a realistic image generation process should model geometry as well as intensity. A simple example is used to demonstrate what we refer to as a “classification object” approach to studying three-dimensional object representations.
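The paper builds on the standard classification image technique, in which the noise field on each trial is sorted by the observer's response and averaged. The sketch below illustrates only that baseline pixel-based procedure, not the authors' geometry-aware "classification object" extension; the simulated observer, template, and array sizes are assumptions made for the demonstration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: an observer classifies 16x16 noise-only patches as
# "signal present" or "signal absent".  A template is simulated here so the
# recovered classification image can be checked against ground truth.
size = 16
template = np.zeros((size, size))
template[:, size // 2] = 1.0                         # observer looks for a vertical bar

n_trials = 5000
noise = rng.normal(0.0, 1.0, (n_trials, size, size))
said_yes = (noise * template).sum(axis=(1, 2)) > 0   # simulated responses

# Classical (pixel-based) classification image: average noise on "yes" trials
# minus average noise on "no" trials.
class_image = noise[said_yes].mean(axis=0) - noise[~said_yes].mean(axis=0)

print("mean CI value on template pixels :", class_image[template > 0].mean().round(2))
print("mean CI value off template pixels:", class_image[template == 0].mean().round(2))
```

The averaging recovers an image roughly proportional to the internal template; the point of the "classification object" argument is that for three-dimensional tasks the perturbed dimensions should be geometric as well as pixel intensities.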

3.
Shapiro A, Lu ZL. Psychological Science, 2011, 22(11): 1452-1459
One critical question regarding visual cognition concerns how the physical properties of the visual world are represented in early vision and then relayed to high-level vision. Here, we posit a simple theory: Processes that encode object appearance reduce their response to spatial content that is coarser than the size of the attended object. We show that a filtering procedure based on this theory can account for the relative brightness levels of test patches placed in images of natural scenes and for many hard-to-explain brightness illusions. The implication is that the perception of brightness differences in most brightness illusions actually corresponds to physical differences present in the images. Portions of the visual system may encode these physical differences by means of neural processes that adaptively reduce their response to low-spatial-frequency content.
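A rough way to see the proposed mechanism is to high-pass filter an image at a scale tied to the attended object's size and read off the filtered values at the test patches. The sketch below does this with a Gaussian-blur subtraction; the mapping from object size to blur scale and the toy simultaneous-contrast display are assumptions for illustration, not the authors' exact filtering procedure.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def highpass(image, object_size_px):
    """Suppress spatial content coarser than the attended object by
    subtracting a blurred copy whose blur scale tracks the object size.
    (A simple stand-in, not the paper's exact filtering procedure.)"""
    sigma = object_size_px / 2.0          # illustrative mapping from size to scale
    return image - gaussian_filter(image, sigma)

# Toy simultaneous-contrast display: identical grey patches on a dark
# and a light surround.
img = np.full((100, 200), 0.2)
img[:, 100:] = 0.8
img[40:60, 40:60] = 0.5                   # patch on the dark surround
img[40:60, 140:160] = 0.5                 # identical patch on the light surround

filtered = highpass(img, object_size_px=20)
print("filtered value, patch on dark surround :", filtered[50, 50].round(3))
print("filtered value, patch on light surround:", filtered[50, 150].round(3))
```

In this toy display the patch on the dark surround comes out with a higher filtered value than the identical patch on the light surround, in line with the simultaneous-contrast appearance that the filtering account is meant to explain.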

4.
Controversy surrounds the question of whether the experience sometimes elicited by visual stimuli in blindsight (type-2 blindsight) is visual in nature or whether it is some sort of non-visual experience. The suggestion that the experience is visual seems, at face value, to make sense. I argue here, however, that the residual abilities found in type-1 blindsight (blindsight in which stimuli elicit no conscious experience) are not aspects of normal vision with consciousness deleted, but are based on fragments of visual processes that, in themselves, would not be intelligible as visual experiences. If type-2 blindsight is a conscious manifestation of this residual function then it is not obvious that type-2 blindsight would be phenomenally like vision.

5.
To foveate a visual target, subjects usually execute a primary hypometric saccade (S1) bringing the target into perifoveal vision, followed by one or more corrective saccades (S2). It is still debated to what extent these S2 are pre-programmed or dependent only on post-saccadic retinal error. To answer this question, we used a visually-triggered saccade task in which target position and target visibility were manipulated. In one-third of the trials, the target was slightly displaced at S1 onset (the so-called double-step paradigm) and was maintained until the end of S1, until the start of the first S2, or until the end of the trial. Experiments took place in two visual environments: in the dark and in a dimly lit room with a visible random square background. The results showed that S2 were less accurate for the shortest target durations. The duration of post-saccadic visual integration thus appears to be the main factor responsible for corrective saccade accuracy. We also found that the visual context modulates primary saccade accuracy, especially for the most hypometric subjects. These findings suggest that the saccadic system is sensitive to the visual properties of the environment and uses different strategies to maintain final gaze accuracy.

6.
To what extent are newborn brains designed to operate over natural visual input? To address this question, we used a high-throughput controlled-rearing method to examine whether newborn chicks (Gallus gallus) show enhanced learning of natural visual sequences at the onset of vision. We took the same set of images and grouped them into either natural sequences (i.e., sequences showing different viewpoints of the same real-world object) or unnatural sequences (i.e., sequences showing different images of different real-world objects). When raised in virtual worlds containing natural sequences, newborn chicks developed the ability to recognize familiar images of objects. Conversely, when raised in virtual worlds containing unnatural sequences, newborn chicks’ object recognition abilities were severely impaired. In fact, the majority of the chicks raised with the unnatural sequences failed to recognize familiar images of objects despite acquiring over 100 h of visual experience with those images. Thus, newborn chicks show enhanced learning of natural visual sequences at the onset of vision. These results indicate that newborn brains are designed to operate over natural visual input.

7.
Schofield AJ. Perception, 2000, 29(9): 1071-1086
The human visual system is sensitive to both first-order variations in luminance and second-order variations in local contrast and texture. Although there is some debate about the nature of second-order vision and its relationship to first-order processing, there is now a body of results showing that they are processed separately. However, the amount, and nature, of second-order structure present in the natural environment is unclear. This is an important question because, if natural scenes contain little second-order structure in addition to first-order signals, the notion of a separate second-order system would lack ecological validity. Two models of second-order vision were applied to a number of well-calibrated natural images. Both models consisted of a first stage of oriented spatial filters followed by a rectifying nonlinearity and then a second set of filters. The models differed in terms of the connectivity between first-stage and second-stage filters. Output images taken from the models indicate that natural images do contain useful second-order structure. Specifically, the models reveal variations in texture and features defined by such variations. Areas of high contrast (but not necessarily high luminance) are also highlighted by the models. Second-order structure--as revealed by the models--did not correlate with the first-order profile of the images, suggesting that the two types of image 'content' may be statistically independent.
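Both models are variants of the filter-rectify-filter (FRF) cascade that is standard in second-order vision. The sketch below shows that cascade's skeleton on a synthetic contrast-modulated noise image; the specific filters (derivative-of-Gaussian rather than a bank of oriented filters), the scales, and the single fixed connectivity are simplifying assumptions and do not reproduce either model in the paper.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def frf(image, first_sigma=2.0, second_sigma=12.0):
    """Skeleton of a filter-rectify-filter (FRF) model of second-order vision:
    a fine-scale first-stage filter, a rectifying nonlinearity, then a
    coarse-scale second-stage filter.  Filter types and scales are illustrative
    and do not reproduce either model in the paper."""
    stage1 = gaussian_filter(image, sigma=first_sigma, order=(1, 0))      # fine oriented filter
    rectified = np.abs(stage1)                                            # rectification
    stage2 = gaussian_filter(rectified, sigma=second_sigma, order=(0, 1)) # coarse second-stage filter
    return stage2

# Synthetic input with no luminance (first-order) edge but a contrast-defined
# (second-order) edge down the middle: low-contrast noise on the left,
# high-contrast noise on the right, equal mean luminance everywhere.
rng = np.random.default_rng(1)
noise = rng.normal(0.0, 1.0, (128, 128))
envelope = np.where(np.arange(128) < 64, 0.2, 1.0)[None, :]
image = noise * envelope

response = np.abs(frf(image))
print("response near the contrast-defined edge:", response[:, 58:70].mean().round(4))
print("response far from the edge             :", response[:, 16:28].mean().round(4))
```

The second-stage output is much larger near the contrast-defined boundary than elsewhere, even though the image contains no luminance edge, which is the kind of second-order structure the models were used to look for in natural images.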

8.
Facial expressions of emotion are nonverbal behaviors that allow us to interact efficiently in social life and respond to events affecting our welfare. This article reviews 21 studies, published between 1932 and 2015, examining the production of facial expressions of emotion by blind people. It particularly discusses the impact of visual experience on the development of this behavior from birth to adulthood. After a discussion of three methodological considerations, the review of studies reveals that blind subjects demonstrate differing capacities for producing spontaneous expressions and voluntarily posed expressions. Seventeen studies provided evidence that blind and sighted individuals spontaneously produce the same pattern of facial expressions, even if some variations can be found, reflecting facial and body movements specific to blindness or differences in intensity and control of emotions in some specific contexts. This suggests that lack of visual experience does not have a major impact when this behavior is generated spontaneously in real emotional contexts. In contrast, eight studies examining voluntary expressions indicate that blind individuals have difficulty posing emotional expressions. The opportunity for prior visual observation seems to affect performance in this case. Finally, we discuss three new directions for research to provide additional, stronger evidence for the debate regarding the innate or culture-constant learning character of the production of emotional facial expressions by blind individuals: the link between perception and production of facial expressions, the impact of display rules in the absence of vision, and the role of other channels in expression of emotions in the context of blindness.

9.
Three experiments were conducted in which visual information was manipulated either at the endpoint of, or during, preselected (subject-defined) and constrained (experimenter-defined) movements. In Experiments 1 and 2 the subject's task was to reproduce the movement in the absence of vision. Augmenting the terminal location of the criterion movement with vision had no differential effect on reproduction in Experiment 1, although preselected movement accuracy was significantly superior to that of constrained movements. Providing vision throughout the criterion movement in Experiment 2 not only failed to improve the accuracy of constrained movements but decreased reproduction performance in preselected movements. In Experiment 3 procedures were adopted to control the allocation of the subjects' attention during the criterion movement. The subjects reproduced by vision alone, movement alone, or with both visual and movement information available. When subjects were informed of the modality of reproduction prior to criterion presentation, they were able to ignore concurrent input from vision and attend to movement information. In the absence of precues, visual information was spontaneously attended. The data were interpreted as contrary to closed-loop assumptions that additional information necessarily enhances the strength of a motor memory representation. Rather, they can be accommodated in terms of Posner, Nissen and Klein's (1976) theoretical account of visual dominance and serve to illustrate the importance of selective attention effects in movement coding.

10.
These experiments investigate the influence of frequency of occurrence of a visual stimulus (stimulus probability) on encoding processes, in an attempt to discover what sorts of mechanisms allow cognitive processes to modify perceptual processes. Experiments 1 and 2 show that frequently occurring visual letters do not facilitate encoding of visually similar letters. This implies that stimulus probability does not directly affect the feature detectors used in encoding the letters. Four more experiments provide evidence that stimulus probability has its effect on the availability of an abstract code that is generated by the encoding process from the visual input. Results from the experiments with letter stimuli could be interpreted using a model similar to the logogen model of Morton. Experiments with nonsense forms suggest that subjects use abstract codes in dealing with the forms only when the stimuli are constructed from a set of orthogonal features. A secondary finding was that visual quality has an effect that extends past the feature analysis stage and into a stage in which the visual input activates an abstract code. This result calls into question the common practice of interpreting the interaction of a factor with visual quality as evidence that the factor affects visual feature analysis.
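Morton's logogen model treats each recognizable item as an evidence counter whose abstract code becomes available once activation crosses a threshold, with frequently encountered items sitting closer to threshold. The sketch below is only a toy illustration of that idea; the accumulation rate, resting levels, and threshold are invented for the example and are not parameters from these experiments.

```python
def time_to_availability(evidence_rate, resting_level=0.0, threshold=1.0, dt=0.001):
    """Logogen-style accumulator: evidence from the visual input raises a unit's
    activation from its resting level; the abstract code becomes available once
    activation crosses threshold.  All values are invented for illustration."""
    activation, t = resting_level, 0.0
    while activation < threshold:
        activation += evidence_rate * dt
        t += dt
    return t

# Frequent stimuli are modelled as sitting closer to threshold (higher resting
# level), so their abstract code becomes available sooner.
print("rare stimulus    :", round(time_to_availability(5.0, resting_level=0.0), 3), "s")
print("frequent stimulus:", round(time_to_availability(5.0, resting_level=0.4), 3), "s")
```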

11.
The question addressed in the present study was whether subjects (N = 24) can use visual information about their hand, in the first half of an aiming movement, to ensure optimal directional accuracy of their aiming movements. Four groups of subjects practiced an aiming task in either a complete vision condition, a no-vision condition, or a condition in which their hand was visible for the first half [initial vision condition (IV)] or the second half of the movement [final vision condition (FV)]. Following 240 trials of acquisition, all subjects were submitted to a transfer test that consisted of 40 trials performed in a no-vision condition. The results indicated that seeing the hand early in the movement did not help subjects to optimize either directional or amplitude accuracy. On the other hand, when subjects viewed their hand closer to the target, the resulting movements were as accurate as those performed under the complete vision condition. In transfer, withdrawing vision did not cause any increase in aiming error for the IV or the no-vision conditions. These results replicated those of Carlton (1981) and extended those of Bard and colleagues (Bard, Hay, & Fleury, 1985) in that they indicated that the kinetic visual channel hypothesized by Paillard (1980; Paillard & Amblard, 1985) appeared to be inoperative beyond 40° of visual angle.

12.
In vision, the Gestalt principles of perceptual organization are generally well understood and remain a subject of detailed analysis. However, the possibility for a unified theory of grouping across visual and auditory modalities remains largely unexplored. Here we present examples of auditory and visual Gestalt grouping, which share important organizational properties. In particular, similarities are revealed between grouping processes in apparent motion, auditory streaming, and static 2-D displays. Given the substantial difference in the context within which the phenomena in question occur (auditory vs. visual, static vs. dynamic), these similarities suggest that the dynamics of perceptual organization could be associated with a common (possibly central) mechanism. If the relevance of supramodal invariants of grouping is granted, the question arises as to whether they can be studied empirically. We propose that a “force-field” theory, based on a differential-geometric interpretation of perceptual space, could provide a suitable starting point for a systematic exploration of the subjective properties of certain classes of auditory and visual grouping phenomena.

13.
A head camera was used to examine the visual correlates of object name learning by toddlers as they played with novel objects and as the parent spontaneously named those objects. The toddlers’ learning of the object names was tested after play, and the visual properties of the head camera images during naming events associated with learned and unlearned object names were analyzed. Naming events associated with learning had a clear visual signature, one in which the visual information itself was clean and visual competition among objects was minimized. Moreover, for learned object names, the visual advantage of the named target over competitors was sustained, both before and after the heard name. The findings are discussed in terms of the visual and cognitive processes that may depend on clean sensory input for learning and also on the sensory–motor, cognitive, and social processes that may create these optimal visual moments for learning.

14.
There are three senses in which a visual stimulus may be said to persist psychologically for some time after its physical offset. First, neural activity in the visual system evoked by the stimulus may continue after stimulus offset (“neural persistence”). Second, the stimulus may continue to be visible for some time after its offset (“visible persistence”). Finally, information about visual properties of the stimulus may continue to be available to an observer for some time after stimulus offset (“informational persistence”). These three forms of visual persistence are widely assumed to reflect a single underlying process: a decaying visual trace that (1) consists of after-activity in the visual system, (2) is visible, and (3) is the source of visual information in experiments on decaying visual memory. It is argued here that this assumption is incorrect. Studies of visible persistence are reviewed; seven different techniques that have been used for investigating visible persistence are identified, and it is pointed out that numerous studies using a variety of techniques have demonstrated two fundamental properties of visible persistence: the inverse duration effect (the longer a stimulus lasts, the shorter is its persistence after stimulus offset) and the inverse intensity effect (the more intense the stimulus, the briefer its persistence). Only when stimuli are so intense as to produce afterimages do these two effects fail to occur. Work on neural persistences is briefly reviewed; such persistences exist at the photoreceptor level and at various stages in the visual pathways. It is proposed that visible persistence depends upon both of these types of neural persistence; furthermore, there must be an additional neural locus, since a purely stereoscopic (and hence cortical) form of visible persistence exists. It is argued that informational persistence is defined by the use of the partial report methods introduced by Averbach and Coriell (1961) and Sperling (1960), and the term “iconic memory” is used to describe this form of persistence. Several studies of the effects of stimulus duration and stimulus intensity upon the duration of iconic memory have been carried out. Their results demonstrate that the duration of iconic memory is not inversely related to stimulus duration or stimulus intensity. It follows that informational persistence or iconic memory cannot be identified with visible persistence, since they have fundamentally different properties. One implication of this claim is that one cannot investigate iconic memory by tasks that require the subject to make phenomenological judgments about the duration of a visual display. In other words, the so-called “direct methods” for studying iconic memory do not provide information about iconic memory. Another implication is that iconic memory is not intimately tied to processes going on in the visual system (as visible persistence is); provided a stimulus is adequately legible, its physical parameters have little influence upon its iconic memory. The paper concludes by pointing out that there exists an alternative to the usual view of iconic memory as a precategorical sensory buffer. According to this alternative, iconic memory is post-categorical, occurring subsequent to stimulus identification. Here, stimulus identification is considered to be a rapid automatic process which does not require buffer storage, but which provides no information about episodic properties of a visual stimulus. Information about these physical stimulus properties must, in some way, be temporarily attached to a representation of the stimulus in semantic memory; and it is this temporarily attached physical information which constitutes iconic memory.

15.
The effect of orientation on visual and tactual braille recognition
Heller MA. Perception, 1987, 16(3): 291-298
Five experiments are reported in which subjects matched tangible or visible braille characters against either visual or tangible arrays. In both modalities recognition was impaired when the characters were tilted, but visual performance was superior to that for touch. Touch may be more sensitive than vision to tilt, since very small deviations from the upright decreased recognition accuracy. Orientation influenced pattern recognition with and without prior information about orientation. Tilting patterns slowed down recognition for tactual-visual matching, but only when orientation was studied with repeated measures. The results are consistent with the hypothesis that it is difficult to code braille patterns tactually as global outline shapes.

16.
Kok A. Acta Psychologica, 1990, 74(2-3): 203-236
The present paper critically examines the contributions of Event-Related Potential (ERP) measures in mental chronometry research. It is argued that amplitude variations in ERP components may provide valuable information regarding the intensity and timing of information processes, and that these amplitude changes are related to energetical rather than to computational processes. It is also suggested that amplitude variations of ERP components in visual discrimination and selective attention tasks are caused by two different processing modes, denoted as external and internal control, that are associated with different neural structures. It is further assumed that these two control systems converge upon thalamic neurons that regulate the sensory input to cortex, and that the direction of sustained ERP amplitude changes reflects which system is dominant. Recent ERP studies have shown that the effects of task variables related to motor control are manifested in a surprisingly early phase of the ERP waveform, and that these effects overlap in time with the effects of task variables related to input control. These findings suggest that at least in visual discrimination and selective attention tasks external and internal modes of processing may be activated in parallel.

17.
Traditional explanations of multistable visual phenomena (e.g. ambiguous figures, perceptual rivalry) suggest that the basis for spontaneous reversals in perception lies in antagonistic connectivity within the visual system. In this review, we suggest an alternative, albeit speculative, explanation for visual multistability – that spontaneous alternations reflect responses to active, programmed events initiated by brain areas that integrate sensory and non-sensory information to coordinate a diversity of behaviors. Much evidence suggests that perceptual reversals are themselves more closely related to the expression of a behavior than to passive sensory responses: (1) they are initiated spontaneously, often voluntarily, and are influenced by subjective variables such as attention and mood; (2) the alternation process is greatly facilitated with practice and compromised by lesions in non-visual cortical areas; (3) the alternation process has temporal dynamics similar to those of spontaneously initiated behaviors; (4) functional imaging reveals that brain areas associated with a variety of cognitive behaviors are specifically activated when vision becomes unstable. In this scheme, reorganizations of activity throughout the visual cortex, concurrent with perceptual reversals, are initiated by higher, largely non-sensory brain centers. Such direct intervention in the processing of the sensory input by brain structures associated with planning and motor programming might serve an important role in perceptual organization, particularly in aspects related to selective attention.

18.
Visual modules can be viewed as expressions of a marked analytic attitude in the study of vision. In vision psychology, this attitude is accompanied by hypotheses that characterize how visual modules are thought to operate in perceptual processes. Our thesis here is that there are what we call “intrinsic reasons” for the presence of such hypotheses in a vision theory, that is, reasons of a deductive kind, which are imposed by the partiality of the basic terms (input and output) in the definition of a module, and by peculiar characteristics of those terms. Specifically, we discuss three hypotheses of functional attributes: successive stages in the action of modules, residual indeterminacy of their effects, and the role of prior constraints. For each of the three, we indicate its occurrence in perceptual psychology, explain corresponding intrinsic reasons, and illustrate such reasons with examples.

19.
Blindsight and vision for action seem to be exemplars of unconscious visual processes. However, researchers have recently argued that blindsight is not really a kind of unconscious vision but is rather severely degraded conscious vision. Morten Overgaard and colleagues have recently developed new methods for measuring the visibility of visual stimuli. Studies using these methods show that reported clarity of visual stimuli correlates with accuracy in both normal individuals and blindsight patients. Vision for action has also come under scrutiny. Recent findings seem to show that information processed by the dorsal stream for online action contributes to visual awareness. Some interpret these results as showing that some dorsal stream processes are conscious visual processes (e.g., Gallese, 2007; Jacob & Jeannerod, 2003). The aim of this paper is to provide new support for the more traditional view that blindsight and vision for action are genuinely unconscious perceptual processes. I argue that individuals with blindsight do not have access to the kind of purely qualitative color and size information which normal individuals do. So, even though people with blindsight have a kind of cognitive consciousness, visual information processing in blindsight patients is not associated with a distinctly visual phenomenology. I argue further that while dorsal stream processing seems to contribute to visual awareness, only information processed by the early dorsal stream (V1, V2, and V3) is broadcast to working memory. Information processed by later parts of the dorsal stream (the parietal lobe) never reaches working memory and hence does not correlate with phenomenal awareness. I conclude that both blindsight and vision for action are genuinely unconscious visual processes.

20.
Multiresolution (or pyramid) approaches to computer vision provide the capability of rapidly detecting and extracting global structures (features, regions, patterns, etc.) from an image. The human visual system also is able to spontaneously (or preattentively) perceive various types of global structure in visual input; this process is sometimes called perceptual organization. This paper describes a set of pyramid-based algorithms that can detect and extract these types of structure; included are algorithms for inferring three-dimensional information from images and for processing time sequences of images. If implemented in parallel on cellular pyramid hardware, these algorithms require processing times on the order of the logarithm of the image diameter.
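As an illustration of why pyramid processing scales with the logarithm of the image diameter, here is a minimal sketch that builds a multiresolution stack by repeated 2x2 averaging. Real cellular pyramid algorithms apply richer per-level operations (oriented features, region linking), and a Gaussian pyramid would smooth before subsampling, so treat this only as a sizing demonstration under those simplifying assumptions.

```python
import numpy as np

def mean_pyramid(image, min_size=1):
    """Build a multiresolution (pyramid) stack by repeated 2x2 block averaging.
    The number of levels, and hence the depth of a cellular pyramid computation,
    grows as log2 of the image diameter."""
    levels = [image]
    while min(levels[-1].shape) > min_size:
        h, w = levels[-1].shape
        h, w = h - h % 2, w - w % 2                      # drop any odd edge row/column
        blocks = levels[-1][:h, :w].reshape(h // 2, 2, w // 2, 2)
        levels.append(blocks.mean(axis=(1, 3)))
    return levels

pyramid = mean_pyramid(np.random.default_rng(2).random((256, 256)))
print([level.shape for level in pyramid])    # 9 levels for a 256-pixel image: log2(256) + 1
```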
