Similar Articles
20 similar articles found.
1.
Multielement visual tracking and visual search are 2 tasks that are held to require visual-spatial attention. The authors used the attentional operating characteristic (AOC) method to determine whether both tasks draw continuously on the same attentional resource (i.e., whether the 2 tasks are mutually exclusive). The authors found that observers can search and track within the same trial significantly better than would be predicted if the 2 tasks were mutually exclusive. In fact, the AOC for tracking and search is similar to that for tracking and auditory monitoring. The results of additional experiments support an attention-switching account for this high level of dual-task performance in which a single attentional resource is efficiently switched between the tracking and search tasks. The results provide important constraints for architectures of visual selective attention and the mechanisms of multielement tracking.
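The mutual-exclusivity baseline in an AOC analysis can be made concrete with a short sketch. Under an all-or-none model, if a single resource is allocated to tracking on a proportion of trials and to search on the rest, the predicted dual-task accuracies fall on the straight line joining the two single-task points. All accuracy values below are hypothetical illustration values, not data from the study.

```python
# Sketch of the mutual-exclusivity prediction behind an attention
# operating characteristic (AOC).  All accuracies are hypothetical.
def aoc_prediction(p_track, single_track, single_search,
                   chance_track, chance_search):
    """If one resource goes to tracking with probability p_track,
    expected accuracies lie on the line joining the single-task points."""
    track = p_track * single_track + (1 - p_track) * chance_track
    search = (1 - p_track) * single_search + p_track * chance_search
    return track, search

# Hypothetical single-task accuracies of 0.9, chance level 0.5:
track, search = aoc_prediction(0.5, 0.9, 0.9, 0.5, 0.5)
assert track == search == 0.7  # midpoint of the exclusivity line
```

Observed dual-task points lying reliably above this line, as reported here, are what rule out strict mutual exclusivity.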

2.
Binocular rivalry is a phenomenon of visual competition in which perception alternates between two monocular images. When the two eyes’ images differ only in luminance, observers may perceive shininess, a form of rivalry called binocular luster. Does dichoptic information guide attention in visual search? Wolfe and Franzel (Perception & Psychophysics, 44(1), 81–93, 1988) reported that rivalry could guide attention only weakly, but that luster (shininess) “popped out,” producing very shallow Reaction Time (RT) × Set Size functions. In this study, we have revisited the topic with new and improved stimuli. By using a checkerboard pattern in rivalry experiments, we found that search for rivalry can be more efficient (16 ms/item) than for a standard rivalrous grating (30 ms/item). The checkerboard may reduce distracting orientation signals that masked the salience of rivalry between simple orthogonal gratings. Lustrous stimuli did not pop out when potential contrast and luminance artifacts were reduced. However, search efficiency was substantially improved when luster was added to the search target. Both rivalry and luster tasks can produce search asymmetries, as is characteristic of guiding features in search. These results suggest that interocular differences that produce rivalry or luster can guide attention, but these effects are relatively weak and can be hidden by other features like luminance and orientation in visual search tasks.
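The efficiency figures quoted above (16 vs. 30 ms/item) are slopes of the RT × Set Size function, conventionally estimated by least squares. A minimal sketch, using made-up RTs consistent with a 16 ms/item search:

```python
# Search efficiency (ms/item) as the least-squares slope of the
# RT x set-size function.  The RTs below are hypothetical values
# on a perfectly linear 500 ms + 16 ms/item function.
def search_slope(set_sizes, mean_rts):
    n = len(set_sizes)
    mx = sum(set_sizes) / n
    my = sum(mean_rts) / n
    num = sum((x - mx) * (y - my) for x, y in zip(set_sizes, mean_rts))
    den = sum((x - mx) ** 2 for x in set_sizes)
    return num / den

set_sizes = [4, 8, 12, 16]
mean_rts = [564, 628, 692, 756]          # hypothetical means
assert abs(search_slope(set_sizes, mean_rts) - 16.0) < 1e-9
```

Near-zero slopes indicate "pop-out"; the shallower the slope, the more strongly the feature is taken to guide attention.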

3.
Previous studies of tactile spatial perception focussed either on a single point of stimulation, on local patterns within a single skin region such as the fingertip, on tactile motion, or on active touch. It remains unclear whether we should speak of a tactile field, analogous to the visual field, and supporting spatial relations between stimulus locations. Here we investigate this question by studying perception of large-scale tactile spatial patterns on the hand, arm and back. Experiment 1 investigated the relation between perception of tactile patterns and the identification of subsets of those patterns. The results suggest that perception of tactile spatial patterns is based on representing the spatial relations between locations of individual stimuli. Experiment 2 investigated the spatial and temporal organising principles underlying these relations. Experiment 3 showed that tactile pattern perception makes reference to structural representations of the body, such as body parts separated by joints. Experiment 4 found that precision of pattern perception is poorer for tactile patterns that extend across the midline, compared to unilateral patterns. Overall, the results suggest that the human sense of touch involves a tactile field, analogous to the visual field. The tactile field supports computation of spatial relations between individual stimulus locations, and thus underlies tactile pattern perception.

4.
We investigated the effects of seen and unseen within-hemifield posture changes on crossmodal visual–tactile links in covert spatial attention. In all experiments, a spatially nonpredictive tactile cue was presented to the left or the right hand, with the two hands placed symmetrically across the midline. Shortly after a tactile cue, a visual target appeared at one of two eccentricities within either of the hemifields. For half of the trial blocks, the hands were aligned with the inner visual target locations, and for the remainder, the hands were aligned with the outer target locations. In Experiments 1 and 2, the inner and outer eccentricities were 17.5° and 52.5°, respectively. In Experiment 1, the arms were completely covered, and visual up–down judgments were better when on the same side as the preceding tactile cue. Cueing effects were not significantly affected by hand or target alignment. In Experiment 2, the arms were in view, and now some target responses were affected by cue alignment: Cueing for outer targets was only significant when the hands were aligned with them. In Experiment 3, we tested whether any unseen posture changes could alter the cueing effects, by widely separating the inner and outer target eccentricities (now 10° and 86°). In this case, hand alignment did affect some of the cueing effects: Cueing for outer targets was now only significant when the hands were in the outer position. Although these results confirm that proprioception can, in some cases, influence tactile–visual links in exogenous spatial attention, they also show that spatial precision is severely limited, especially when posture is unseen.

5.
The perceptual field is a cardinal concept of sensory psychology. 'Field' refers to a representation in which perceptual contents have spatial properties and relations which derive from the spatial properties and relations of corresponding stimuli. It is a matter of debate whether a perceptual field exists in touch analogous to the visual field. To study this issue, we investigated whether tactile stimuli on the palm can be perceived as complex stimulus patterns, according to basic spatial principles. Subjects judged the intensity of a target stimulus to the palm, ignoring two brief preceding touches at nearby flanker locations. We found that the judgements of the target intensity were boosted by flankers when the target lay on the line joining the flankers in comparison to when the target lay away from this line. Therefore, we suggest that a tactile spatial organisation, i.e. a tactile field, exists; the field supports the relation of collinearity; it is automatically and implicitly activated by touch, and it groups spatially coherent perceptual contents.

6.
Experiments conducted in two independent laboratories indicate that the correction of refractive errors does not improve peripheral visual acuity. This finding contrasts with previous results for motion detection and other visual functions in the periphery. The “two visual systems” hypothesis provides a heuristic means of interpreting this apparent discrepancy.

7.
8.
This study examined how spatial working memory and visual (object) working memory interact, focusing on two related questions: First, can these systems function independently from one another? Second, under what conditions do they operate together? In a dual-task paradigm, participants attempted to remember locations in a spatial working memory task and colored objects in a visual working memory task. Memory for the locations and objects was subject to independent working memory storage limits, which indicates that spatial and visual working memory can function independently from one another. However, additional experiments revealed that spatial working memory and visual working memory interact in three memory contexts: when retaining (1) shapes, (2) integrated color-shape objects, and (3) colored objects at specific locations. These results suggest that spatial working memory is needed to bind colors and shapes into integrated object representations in visual working memory. Further, this study reveals a set of conditions in which spatial and visual working memory can be isolated from one another.

9.
Two experiments tested humans on a memory for duration task based on the method of Wearden and Ferrara (1993), which had previously provided evidence for subjective shortening in memory for stimulus duration. Auditory stimuli were tones (filled) or click-defined intervals (unfilled). Filled visual stimuli were either squares or lines, with the unfilled interval being the time between two line presentations. In Experiment 1, good evidence for subjective shortening was found when filled and unfilled visual stimuli, or filled auditory stimuli, were used, but evidence for subjective shortening with unfilled auditory stimuli was more ambiguous. Experiment 2 used a simplified variant of the Wearden and Ferrara task, and evidence for subjective shortening was obtained from all four stimulus types.

10.
Tactile vertical, defined as the edge orientation that participants perceive to be vertical, was examined in four experiments. In Experiment 1, we touched the participants’ cheek, lips, or hand with an edge and asked them to judge its orientation with regard to gravitational vertical, both when the stimulated body part was upright (or, in the case of the lips, aligned), and when it was tilted (lips, distorted). We found that when the head or hand was tilted forward 30°, or when the lower lip was distorted approximately 38° to the left or right, the tactile vertical shifted in the same direction by only a fraction (8.7, 8.6, and 36.3% for the cheek, lips, and hand, respectively) of the change in orientation of the stimulated region. The results indicated considerable, but usually incomplete, orientation constancy. In Experiment 2, we measured tactile vertical on the hand for forward tilts from 0° to 45°. We found that as the hand was tilted, the tactile vertical increasingly shifted in the same direction as the hand (i.e., a tactile Aubert effect). In Experiment 3, the effect of attentional focus on tactile vertical was examined by comparing the tactile vertical of participants who attended to body-centered coordinates, and others who attended to gravitation-centered coordinates. We found that focusing on body-centered coordinates caused a decrease in orientation constancy. We sought to examine the role of attention further in Experiment 4, measuring tactile vertical on the cheek of persons with temporomandibular disorders. Compared with normal participants, these participants displayed significantly lower constancy. The results were accounted for by a narrowing of attention to painful signals, so that proprioceptive information was attended to less. In conclusion, the degree of tactile orientation constancy that participants demonstrate varies as a function of body site and attentional focus.

11.
Two experiments tested humans on a memory for duration task based on the method of Wearden and Ferrara (1993, Quarterly Journal of Experimental Psychology, 46B, 163–186), which had previously provided evidence for subjective shortening in memory for stimulus duration. Auditory stimuli were tones (filled) or click-defined intervals (unfilled). Filled visual stimuli were either squares or lines, with the unfilled interval being the time between two line presentations. In Experiment 1, good evidence for subjective shortening was found when filled and unfilled visual stimuli, or filled auditory stimuli, were used, but evidence for subjective shortening with unfilled auditory stimuli was more ambiguous. Experiment 2 used a simplified variant of the Wearden and Ferrara task, and evidence for subjective shortening was obtained from all four stimulus types.

12.
Lucia M. Vaina, Synthese, 1990, 83(1): 49–91
In this paper we focus on the modularity of visual functions in the human visual cortex, that is, the specific problems that the visual system must solve in order to achieve recognition of objects and visual space. The computational theory of early visual functions is briefly reviewed and is then used as a basis for suggesting computational constraints on the higher-level visual computations. The remainder of the paper presents neurological evidence for the existence of two visual systems in man, one specialized for spatial vision and the other for object vision. We show further clinical evidence for the computational hypothesis that these two systems consist of several visual modules, some of which can be isolated on the basis of specific visual deficits which occur after lesions to selected areas in the visually responsive brain. We will provide examples of visual modules which solve information processing tasks that are mediated by specific anatomic areas. We will show that the clinical data from behavioral studies of monkeys (Ungerleider and Mishkin 1984) support the distinction between two visual systems in monkeys: the what system, involved in object vision, and the where system, involved in spatial vision.

13.
In general, humans have impressive recognition memory for previously viewed pictures. Many people spend years becoming experts in highly specialized image sets. For example, cytologists are experts at searching micrographs filled with potentially cancerous cells and radiologists are expert at searching mammograms for indications of cancer. Do these experts develop robust visual long-term memory for their domain of expertise? If so, is this expertise specific to the trained image class, or do such experts possess generally superior visual memory? We tested recognition memory of cytologists, radiologists, and controls with no medical experience for three visual stimulus classes: isolated objects, scenes, and mammograms or micrographs. Experts were better than control observers at recognizing images from their domain, but their memory for those images was not particularly good (d′ ≈ 1.0) and was much worse than memory for objects or scenes (d′ > 2.0). Furthermore, experts were not better at recognizing scenes or isolated objects than control observers.
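The d′ values quoted above are signal-detection sensitivity scores, computed from hit and false-alarm rates via the inverse normal transform. A minimal sketch using only the standard library; the hit/false-alarm rates below are hypothetical values chosen to land near the d′ levels the abstract reports:

```python
from statistics import NormalDist

def d_prime(hit_rate, fa_rate):
    """Recognition sensitivity: z(hit rate) - z(false-alarm rate)."""
    z = NormalDist().inv_cdf
    return z(hit_rate) - z(fa_rate)

# Hypothetical rates illustrating the two sensitivity levels quoted:
assert 0.97 < d_prime(0.69, 0.31) < 1.01   # roughly d' = 1.0
assert d_prime(0.84, 0.16) > 1.9           # roughly d' = 2.0
```

In practice, hit and false-alarm rates of 0 or 1 are adjusted (e.g., by a 1/(2N) correction) before the transform, since z is undefined at the extremes.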

14.
Using fMRI we investigated the neural basis of audio–visual processing of speech and non-speech stimuli using physically similar auditory stimuli (speech and sinusoidal tones) and visual stimuli (animated circles and ellipses). Relative to uni-modal stimuli, the different multi-modal stimuli showed increased activation in largely non-overlapping areas. Ellipse-Speech, which most resembles naturalistic audio–visual speech, showed higher activation in the right inferior frontal gyrus, fusiform gyri, left posterior superior temporal sulcus, and lateral occipital cortex. Circle-Tone, an arbitrary audio–visual pairing with no speech association, activated middle temporal gyri and lateral occipital cortex. Circle-Speech showed activation in lateral occipital cortex, and Ellipse-Tone did not show increased activation relative to uni-modal stimuli. Further analysis revealed that middle temporal regions, although identified as multi-modal only in the Circle-Tone condition, were more strongly active to Ellipse-Speech or Circle-Speech, but regions that were identified as multi-modal for Ellipse-Speech were always strongest for Ellipse-Speech. Our results suggest that combinations of auditory and visual stimuli may together be processed by different cortical networks, depending on the extent to which multi-modal speech or non-speech percepts are evoked.

15.
We report a study designed to investigate the extent to which speeded behavioral responses following tactile stimulation are influenced by differences in neural conduction latencies at different body sites and/or by the characteristics of the compatibility between the cue and effector. The results showed that it may not be particularly desirable to present tactile cues (e.g., warning signals) to an interface operator's feet if a speeded foot response is required, for even though such an arrangement maximizes the set-level compatibility between the stimulus and the response, it turns out that response latencies are primarily determined by conduction latencies through the peripheral nervous system.

16.
In the present exploratory study based on 7 subjects, we examined the composition of magnetoencephalographic (MEG) brain oscillations induced by the presentation of an auditory, visual, and audio-visual stimulus (a talking face) using an oddball paradigm. The composition of brain oscillations was assessed here by analyzing the probability-classification of short-term MEG spectral patterns. The probability index for particular brain oscillations being elicited was dependent on the type and the modality of the sensory percept. The maintenance of the integrated audio-visual percept was accompanied by the unique composition of distributed brain oscillations typical of auditory and visual modality, and the contribution of brain oscillations characteristic for visual modality was dominant. Oscillations around 20 Hz were characteristic for the maintenance of the integrated audio-visual percept. Identifying the actual composition of brain oscillations allowed us (1) to distinguish two subjectively/consciously identical mental percepts, and (2) to characterize the types of brain functions involved in the maintenance of the multi-sensory percept.

17.
This article begins with reviews of parallel processing models in the areas of visual perception and memory, pointing out kinds of information purported to be processed in each, and the overlap in the physiological substrates involved. Next, some pertinent literature having to do with the linkage between perception and memory is reviewed (e.g., visual memory for what or where), concluding that there exists a serious lack of research and knowledge of how different perceptual processes may lead to facilitated, distorted or impaired memory in different forms of storage. Some possible scenarios are presented concerning how perceptual information might be interfaced with memorial mechanisms, and some working hypotheses are considered. Finally, a new paradigm is outlined that examines the linkage between local and global perceptual processing and explicit and implicit learning. This paradigm combines the global precedence paradigm of Navon (1977, 1981) and the sequence learning paradigm of Nissen and Bullemer (1987). Convincing arguments indicate that global stimuli are mediated more quickly via one perceptual stream (the M-cell pathway), but can be processed more slowly by another (the P-cell system). Local aspects of the stimuli are exclusively mediated by the P-cell system. The results of two experiments employing iterations of stimulus sequences, in which sequence learning is possible and measurable in terms of reaction-time changes over trials, are presented. The second experiment indicates that information thought to be mediated by the M-cell pathway results in incidental sequential learning, while other information thought to be mediated by the P-cell pathway does not. Spatial filtering of the visual stimuli reveals that low spatial frequencies are necessary for sequence learning to occur. The issue of whether this learning is implicit or explicit is also discussed. Ideas for future research, exploring this new area of interest, are proposed. Current knowledge of perceptual and memorial deficits in special populations is considered in an attempt to identify new areas of investigation.

18.
We investigated the neurobiological basis of visual processes involved in object enumeration. Subitizing, the ability to rapidly and accurately enumerate four or fewer objects, is thought to depend on preattentive processing of visual stimuli, whereas counting of more numerous objects is thought to require serial shifts of attention. We attempted to distinguish between the hypothesis that the magnocellular (M) visual pathway is the preferential route for subitizing, and the alternative hypothesis that there is no selectivity for the M pathway or its counterpart, the parvocellular (P) visual pathway, in visual object enumeration. Green rectangles were presented on an equiluminant red background to impair M pathway processing. This slowed enumeration performance relative to a control condition in which object/background luminance differed, especially when the rectangles were relatively large and widely spaced and had constant retinal eccentricity. When low luminance contrast was used to impair processing along the P pathway, enumeration performance was slowed relative to a high-contrast control condition, especially when the rectangles were small and closely spaced. Overall, our manipulations affected enumeration performance without selectivity for subitizing or counting ranges and without altering the slope of the functions relating reaction time to numerosity. Thus, our results favor the hypothesis that visual enumeration does not depend preferentially on either the M or the P pathway.
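The subitizing/counting distinction is conventionally diagnosed from the slope of the RT-numerosity function: shallow within the subitizing range (up to about 4 items), steep beyond it. A brief sketch with entirely hypothetical RTs showing how the two slopes are contrasted:

```python
# Contrasting RT-numerosity slopes in the subitizing (<= 4 items)
# and counting ranges.  All RT values are hypothetical.
def slope(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    return (sum((x - mx) * (y - my) for x, y in zip(xs, ys)) /
            sum((x - mx) ** 2 for x in xs))

numerosity = [1, 2, 3, 4, 5, 6, 7, 8]
rt = [500, 550, 600, 650, 950, 1250, 1550, 1850]  # hypothetical means

subitizing_slope = slope(numerosity[:4], rt[:4])   # 50 ms/item
counting_slope = slope(numerosity[4:], rt[4:])     # 300 ms/item
assert subitizing_slope == 50.0 and counting_slope == 300.0
```

The study's key manipulation check is whether impairing one pathway changes these slopes selectively; here it did not, motivating the no-preference conclusion.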

19.
Processing visually degraded stimuli is a common experience. We struggle to find house keys on dim front porches, to decipher slides projected in overly bright seminar rooms, and to read 10th-generation photocopies. In this research, we focus specifically on stimuli that are degraded via reduction of stimulus contrast and address two questions. First, why is it difficult to process low-contrast, as compared with high-contrast, stimuli? Second, is the effect of contrast fundamental in that its effect is independent of the stimulus being processed and the reason for processing the stimulus? We formally address and answer these questions within the context of a series of nested theories, each providing a successively stronger definition of what it means for contrast to affect perception and memory. To evaluate the theories, we carried out six experiments. Experiments 1 and 2 involved simple stimuli (randomly generated forms and digit strings), whereas Experiments 3–6 involved naturalistic pictures (faces, houses, and cityscapes). The stimuli were presented at two contrast levels and at varying exposure durations. The data from all the experiments allow the conclusion that some function of stimulus contrast combines multiplicatively with stimulus duration at a stage prior to that at which the nature of the stimulus and the reason for processing it are determined, and it is the result of this multiplicative combination that determines eventual memory performance. We describe a stronger version of this theory—the sensory response, information acquisition theory—which has at its core the strong Bloch’s-law-like assumption of a fundamental visual system response that is proportional to the product of stimulus contrast and stimulus duration. This theory was, as it has been in the past, highly successful in accounting for memory for simple stimuli shown at short (i.e., shorter than an eye fixation) durations. However, it was less successful in accounting for data from short-duration naturalistic pictures and was entirely unsuccessful in accounting for data from naturalistic pictures shown at longer durations. We discuss (1) processing differences between short- and long-duration stimuli, (2) processing differences between simple stimuli, such as digits, and complex stimuli, such as pictures, (3) processing differences between biluminant stimuli (such as line drawings with only two luminance levels) and multiluminant stimuli (such as grayscale pictures with multiple luminance levels), and (4) Bloch’s law and a proposed generalization of the concept of metamers.
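The Bloch's-law-like assumption at the core of the theory is simple arithmetic: the assumed sensory response is proportional to the product of contrast and duration, so different contrast/duration pairs with the same product should be indistinguishable (the "metamers" generalization). A sketch with hypothetical values:

```python
# Bloch's-law-style multiplicative combination: the assumed sensory
# response is proportional to contrast x duration, so pairs with the
# same product act as "metamers".  Values below are hypothetical.
def sensory_response(contrast, duration_ms, k=1.0):
    """Assumed fundamental response: k * contrast * duration."""
    return k * contrast * duration_ms

# Half the contrast at twice the duration yields the same response:
r1 = sensory_response(0.8, 50)
r2 = sensory_response(0.4, 100)
assert r1 == r2 == 40.0
```

The abstract's finding is that this trade-off holds for brief simple stimuli but breaks down for naturalistic pictures at longer durations, where factors beyond the contrast-duration product govern memory.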

20.
This study reports on the auditory and visual comprehension of Japanese idioms having both literal and figurative meanings. Experiment I conducted the rating of the semantic distance between the two meanings. Experiment II investigated the difference of comprehension between semantically far and close idioms. Here the materials are presented in isolation both auditorily and visually. Experiment III conducted the same investigation as Experiment II, except that idioms were presented embedded in literally and figuratively induced contexts. Experiment IV reinvestigated the findings obtained from the previous experiments. The results of these experiments show that in isolation visual presentation precedes auditory presentation, and that both in the auditory and visual presentations semantically far idioms are comprehended more accurately than semantically close idioms.
