Similar documents
20 similar documents found (search time: 31 ms)
1.
In general, humans have impressive recognition memory for previously viewed pictures. Many people spend years becoming experts in highly specialized image sets. For example, cytologists are experts at searching micrographs filled with potentially cancerous cells and radiologists are experts at searching mammograms for indications of cancer. Do these experts develop robust visual long-term memory for their domain of expertise? If so, is this expertise specific to the trained image class, or do such experts possess generally superior visual memory? We tested recognition memory of cytologists, radiologists, and controls with no medical experience for three visual stimulus classes: isolated objects, scenes, and mammograms or micrographs. Experts were better than control observers at recognizing images from their domain, but their memory for those images was not particularly good (D′ ~ 1.0) and was much worse than memory for objects or scenes (D′ > 2.0). Furthermore, experts were no better at recognizing scenes or isolated objects than control observers.
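The sensitivity values reported above (D′ ~ 1.0 vs. > 2.0) come from signal detection theory, where d′ is the difference between the z-transformed hit and false-alarm rates. A minimal sketch; the hit and false-alarm rates below are hypothetical, chosen only to illustrate the scale of the reported values:

```python
from statistics import NormalDist

def d_prime(hit_rate: float, fa_rate: float) -> float:
    """Signal-detection sensitivity: d' = z(hit rate) - z(false-alarm rate)."""
    z = NormalDist().inv_cdf  # inverse of the standard normal CDF
    return z(hit_rate) - z(fa_rate)

# Hypothetical rates bracketing the values reported in the abstract.
medical = d_prime(0.69, 0.31)   # ≈ 0.99, like experts' memory for medical images
scenes = d_prime(0.89, 0.11)    # ≈ 2.45, like memory for objects and scenes
```

The symmetric hit/false-alarm pairs are a convenience; any pair with the same z-difference yields the same d′.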

2.
Multielement visual tracking and visual search are 2 tasks that are held to require visual-spatial attention. The authors used the attentional operating characteristic (AOC) method to determine whether both tasks draw continuously on the same attentional resource (i.e., whether the 2 tasks are mutually exclusive). The authors found that observers can search and track within the same trial significantly better than would be predicted if the 2 tasks were mutually exclusive. In fact, the AOC for tracking and search is similar to that for tracking and auditory monitoring. The results of additional experiments support an attention-switching account of this high level of dual-task performance, in which a single attentional resource is efficiently switched between tracking and search. The results provide important constraints for architectures of visual selective attention and the mechanisms of multielement tracking.

3.
Crossmodal correspondences are a feature of human perception in which two or more sensory dimensions are linked together; for example, high-pitched noises may be more readily linked with small than with large objects. However, no study has yet systematically examined the interaction between different visual–auditory crossmodal correspondences. We investigated how the visual dimensions of luminance, saturation, size, and vertical position can influence decisions when matching particular visual stimuli with high-pitched or low-pitched auditory stimuli. For multidimensional stimuli, we found a general pattern of summation of the individual crossmodal correspondences, with some exceptions that may be explained by Garner interference. These findings have applications for the design of sensory substitution systems, which convert information from one sensory modality to another.

4.
Is visual imagery really visual? Overlooked evidence from neuropsychology

5.
Although many neuroimaging studies of visual mental imagery have revealed activation in early visual cortex (Areas 17 or 18), many others have not. The authors review this literature and compare how well 3 models explain the disparate results. Each study was coded 1 or 0, indicating whether activation in early visual cortex was observed, and sets of variables associated with each model were fit to the observed results using logistic regression analysis. Three variables predicted all of the systematic differences in the probability of activation across studies. Two of these variables were identified with a perceptual anticipation theory, and the other was identified with a methodological factors theory. Thus, the variability in the literature is not random.
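The study-coding approach described above (each study scored 1 or 0 for early-visual-cortex activation, candidate predictor variables fit by logistic regression) can be sketched as follows. The predictor variables and outcomes here are hypothetical stand-ins, not the variables or data from the actual review:

```python
import math

def fit_logistic(X, y, lr=0.1, iters=5000):
    """Plain gradient-ascent logistic regression:
    P(activation=1) = sigmoid(w0 + w . x). Returns [intercept, w1, w2, ...]."""
    w = [0.0] * (len(X[0]) + 1)           # w[0] is the intercept
    for _ in range(iters):
        grad = [0.0] * len(w)
        for xi, yi in zip(X, y):
            z = w[0] + sum(wj * xj for wj, xj in zip(w[1:], xi))
            err = yi - 1 / (1 + math.exp(-z))   # residual on the probability scale
            grad[0] += err
            for j, xj in enumerate(xi):
                grad[j + 1] += err * xj
        w = [wj + lr * gj / len(X) for wj, gj in zip(w, grad)]
    return w

# Hypothetical coding: each row is one study, with two binary predictors
# (e.g., "task required high-resolution detail", "method was sensitive enough");
# y marks whether early visual cortex activation was reported.
X = [(1, 1), (1, 0), (0, 1), (0, 0), (1, 1), (0, 0), (1, 0), (0, 1)]
y = [1, 1, 0, 0, 1, 0, 0, 1]
w = fit_logistic(X, y)
```

A fitted model of this form gives, for each study profile, a predicted probability of observing activation, which is how the review assesses which variables explain the disparate results.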

6.
Kaski D. Perception, 2002, 31(6), 717-731
Vision is the most highly developed sense in man and represents the doorway through which most of our knowledge of the external world arises. Visual imagery can be defined as the representation of perceptual information in the absence of visual input. Visual imagery has been shown to complement vision in the acquisition of knowledge--it is used in memory retrieval, problem solving, and the recognition of properties of objects. The processes underlying visual imagery have been assimilated to those of the visual system and are believed to share a neural substrate. However, results from studies in congenitally and cortically blind subjects have challenged this hypothesis. Here I review the currently available evidence.

7.
Research has shown that performing visual search while maintaining representations in visual working memory displaces up to one object's worth of information from memory. This memory displacement has previously been attributed to a nonspecific disruption of the memory representation by the mere presentation of the visual search array, and the goal of the present study was to determine whether it instead reflects the use of visual working memory in the actual search process. The first hypothesis tested was that working memory displacement occurs because observers preemptively discard about an object's worth of information from visual working memory in anticipation of performing visual search. Second, we tested the hypothesis that on target absent trials no information is displaced from visual working memory because no target is entered into memory when search is completed. Finally, we tested whether visual working memory displacement is due to the need to select a response to the search array. The findings rule out these alternative explanations. The present study supports the hypothesis that change-detection performance is impaired when a search array appears during the retention interval due to nonspecific disruption or masking.

8.
9.
A chimpanzee acquired an auditory–visual intermodal matching-to-sample (AVMTS) task, in which, following the presentation of a sample sound, the subject had to select from two alternatives a photograph that corresponded to the sample. The acquired AVMTS performance might shed light on chimpanzee intermodal cognition, which is one of the least understood aspects in chimpanzee cognition. The first aim of this paper was to describe the training process of the task. The second aim was to describe through a series of experiments the features of the chimpanzee AVMTS performance in comparison with results obtained in a visual intramodal matching task, in which a visual stimulus alone served as the sample. The results show that the acquisition of AVMTS was facilitated by the alternation of auditory presentation and audio-visual presentation (i.e., the sample sound together with a visual presentation of the object producing the particular sample sound). Once AVMTS performance was established for the limited number of stimulus sets, the subject showed rapid transfer of the performance to novel sets. However, the subject showed a steep decay of matching performance as a function of the delay interval between the sample and the choice alternative presentations when the sound alone, but not the visual stimulus alone, served as the sample. This might suggest a cognitive limitation for the chimpanzee in auditory-related tasks. Accepted after revision: 11 September 2001.

10.
The aim of this study was to explore whether the content of a simple concurrent verbal load task determines the extent of its interference on memory for coloured shapes. The task consisted of remembering four visual items while repeating aloud a pair of words that varied in terms of imageability and relatedness to the task set. At test, a cue appeared that was either the colour or the shape of one of the previously seen objects, with participants required to select the object's other feature from a visual array. During encoding and retention, there were four verbal load conditions: (a) a related, shape–colour pair (from outside the experimental set, i.e., “pink square”); (b) a pair of unrelated but visually imageable, concrete, words (i.e., “big elephant”); (c) a pair of unrelated and abstract words (i.e., “critical event”); and (d) no verbal load. Results showed differential effects of these verbal load conditions. In particular, imageable words (concrete and related conditions) interfered to a greater degree than abstract words. Possible implications for how visual working memory interacts with verbal memory and long-term memory are discussed.

11.
When two dot arrays are briefly presented, separated by a short interval of time, visual short-term memory of the first array is disrupted if the interval between arrays is shorter than 1300-1500 ms (Brockmole, Wang, & Irwin, 2002). Here we investigated whether such a time window was triggered by the necessity to integrate arrays. Using a probe task we removed the need for integration but retained the requirement to represent the images. We found that a long time window was needed for performance to reach asymptote even when integration across images was not required. Furthermore, this window was lengthened if subjects had to remember the locations of the second array, but not if they only conducted a visual search within it. We suggest that a temporal window is required for consolidation of the first array, which is vulnerable to disruption by subsequent images that also need to be memorized.

12.
In many theories of cognition, researchers propose that working memory and perception operate interactively. For example, in previous studies researchers have suggested that sensory inputs matching the contents of working memory will have an automatic advantage in the competition for processing resources. The authors tested this hypothesis by requiring observers to perform a visual search task while concurrently maintaining object representations in visual working memory. The hypothesis that working memory activation produces a simple but uncontrollable bias signal leads to the prediction that items matching the contents of working memory will automatically capture attention. However, no evidence for automatic attentional capture was obtained; instead, the participants avoided attending to these items. Thus, the contents of working memory can be used in a flexible manner for facilitation or inhibition of processing.

13.
Experiments conducted in two independent laboratories indicate that the correction of refractive errors does not improve peripheral visual acuity. This finding contrasts with previous results for motion detection and other visual functions in the periphery. The “two visual systems” hypothesis provides a heuristic means of interpreting this apparent discrepancy.

14.
Visual environments contain many cues to properties of an observed scene. To integrate information provided by multiple cues in an efficient manner, observers must assess the degree to which each cue provides reliable versus unreliable information. Two hypotheses are reviewed regarding how observers estimate cue reliabilities, namely that the estimated reliability of a cue is related to the ambiguity of the cue, and that people use correlations among cues to estimate cue reliabilities. Cue reliabilities are shown to be important both for cue combination and for aspects of visual learning.
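A common formalization of reliability-based cue combination (a standard model in this literature, not necessarily the one the authors adopt) weights each cue by its inverse variance, so that noisier cues contribute less. A sketch with hypothetical depth estimates:

```python
def combine_cues(estimates, sigmas):
    """Reliability-weighted linear cue combination.
    Each cue i gets weight r_i / sum(r), where reliability r_i = 1 / sigma_i^2.
    Returns the fused estimate and its standard deviation."""
    reliabilities = [1 / s**2 for s in sigmas]
    total = sum(reliabilities)
    weights = [r / total for r in reliabilities]
    fused = sum(w * e for w, e in zip(weights, estimates))
    fused_sigma = (1 / total) ** 0.5   # fused estimate is more reliable than any single cue
    return fused, fused_sigma

# Hypothetical depth estimates (cm): a reliable stereo cue and a noisier texture cue.
depth, sigma = combine_cues([50.0, 56.0], [1.0, 2.0])
# depth = 0.8*50 + 0.2*56 = 51.2; sigma = sqrt(0.8) ≈ 0.894
```

Note how the fused standard deviation falls below both input sigmas, which is why estimating reliabilities correctly matters: mis-estimated weights forfeit this benefit.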

15.
Binocular rivalry is a phenomenon of visual competition in which perception alternates between two monocular images. When the two eyes’ images differ only in luminance, observers may perceive shininess, a form of rivalry called binocular luster. Does dichoptic information guide attention in visual search? Wolfe and Franzel (Perception & Psychophysics, 44(1), 81–93, 1988) reported that rivalry could guide attention only weakly, but that luster (shininess) “popped out,” producing very shallow Reaction Time (RT) × Set Size functions. In this study, we revisited the topic with new and improved stimuli. By using a checkerboard pattern in rivalry experiments, we found that search for rivalry can be more efficient (16 ms/item) than search for a standard rivalrous grating (30 ms/item). The checkerboard may reduce distracting orientation signals that masked the salience of rivalry between simple orthogonal gratings. Lustrous stimuli did not pop out when potential contrast and luminance artifacts were reduced. However, search efficiency was substantially improved when luster was added to the search target. Both rivalry and luster tasks can produce search asymmetries, as is characteristic of guiding features in search. These results suggest that interocular differences that produce rivalry or luster can guide attention, but these effects are relatively weak and can be hidden by other features, such as luminance and orientation, in visual search tasks.
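Search efficiencies like 16 ms/item and 30 ms/item are the slopes of the RT × Set Size function, obtained by a least-squares fit of mean reaction time against display set size. A sketch with hypothetical mean RTs constructed to match the reported slopes:

```python
def search_slope(set_sizes, rts):
    """Least-squares slope of the RT x Set Size function, in ms/item."""
    n = len(set_sizes)
    mx = sum(set_sizes) / n
    my = sum(rts) / n
    num = sum((x - mx) * (y - my) for x, y in zip(set_sizes, rts))
    den = sum((x - mx) ** 2 for x in set_sizes)
    return num / den

# Hypothetical mean RTs (ms) consistent with the reported efficiencies.
sizes = [4, 8, 12, 16]
checkerboard_rts = [560, 624, 688, 752]   # slope = 16 ms/item
grating_rts = [600, 720, 840, 960]        # slope = 30 ms/item
```

Shallower slopes mean more efficient search; a slope near zero is the classic signature of "pop-out".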

16.
17.
A brief display that is clearly visible when shown alone can be rendered invisible by the subsequent presentation of a second visual stimulus. Several recently described backward masking effects are not predicted by current theories of visual masking, including masking by four small dots that surround (but do not touch) a target object and masking by a surrounding object that remains on display after the target object has been turned off. A crucial factor in both of these effects is attention: almost no masking occurs if attention can be rapidly focused on the target, whereas powerful masking ensues if attention directed at the target is delayed. A new theory of visual masking, inspired by developments in neuroscience, can account for these effects, as well as more traditional masking effects. In addition, the new theory sheds light on related research, such as the attentional blink, inattentional blindness and change blindness.

18.
It is widely accepted within contemporary philosophy of perception that the content of visual states cannot be characterized simply as a list of represented features. This is because such characterization leads to the so-called ‘Many Properties’ problem, that is, it does not allow us to explain how the visual system is able to distinguish between scenes containing different arrangements of the same features. The usual solution to the Many Properties problem is to characterize some basic elements of content as subjects, to which features are attributed by a predication-like relation. In this paper, I reconsider this solution and claim that the Many Properties problem can be solved without postulating such subjects. What is more, I argue that an alternative approach has stronger justification given the empirical data concerning human vision.

19.
In long jumping, athletes need to hit a take-off board with both high precision and high run-up velocity to leap as far as possible. It is commonly agreed that visual regulation plays a crucial role in long jumping. To identify visual regulation, researchers have typically relied on analyses of variability in step parameters (i.e., “gait-based visual regulation”). The aim of the current study was to examine whether gait-based visual regulation coincides with measures of actual gaze control, referred to as “gaze-based visual regulation”. To this end, 15 participants performed long jumps and run-throughs while wearing a mobile eye-tracker. To compare gait-based with gaze-based visual regulation, a digital camera recorded all trials for subsequent frame-by-frame analyses of step parameters. Results revealed that gait-based visual regulation coincided with the step of the longest gaze (i.e., dwell time) on the take-off board but not with the step of initial gaze on the take-off board. This finding supports the notion of visuomotor control of motor variability by means of longer gazing periods at the take-off board. In addition, our results provide initial insights to coaches and athletes on the particular requirements of visual regulation and the relationship between gait and gaze in the long jump approach.
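Gait-based visual regulation is typically inferred from step variability: toe-to-board distance SD across trials rises over the approach, peaks, and then drops once athletes begin visually adjusting their steps. A minimal sketch of that analysis; the footfall distances below are invented for illustration:

```python
from statistics import stdev

def regulation_onset(footfalls_by_trial):
    """footfalls_by_trial: one list per trial of toe-to-board distances (m),
    ordered along the approach. Returns the index of the step with peak
    between-trial variability (the conventional marker of regulation onset)
    and the per-step SDs."""
    n_steps = len(footfalls_by_trial[0])
    sds = [stdev(trial[i] for trial in footfalls_by_trial) for i in range(n_steps)]
    return sds.index(max(sds)), sds

# Three hypothetical trials, last six footfalls before the board:
# variability accumulates early, then collapses once steps are regulated.
trials = [
    [9.8, 7.9, 6.2, 4.05, 2.01, 0.01],
    [10.2, 8.3, 6.6, 4.15, 2.05, 0.03],
    [10.0, 8.0, 6.0, 4.00, 1.98, 0.00],
]
peak_step, sds = regulation_onset(trials)
```

In this toy data the SD peaks at the third-to-last steps shown and shrinks toward the board, the pattern the gait-based method reads as the start of visual regulation.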

20.
Does attention affect visual feature integration?
Two questions are investigated in this work: first, whether the integration of color and shape information is affected by attending to the stimulus location, and second, whether attending to a stimulus location enhances the perceptual representation of the stimulus or merely affects decision processes. In three experiments, subjects were briefly presented with colored letters. On most trials, subjects were precued to the stimulus location (valid cue); on some trials, a nonstimulus location was cued (invalid cue). Subjects were less likely to incorrectly combine colors and letter shapes following a valid cue. The attentional facilitation afforded by the cue was not limited to feature integration but also affected the registration of features. However, when the amount of feature information was strictly controlled, attention still affected feature integration. The results indicate that orienting attention to the location of the cue affects the quality of the perceptual representation for features and their integration.


Copyright © Beijing Qinyun Technology Development Co., Ltd. 京ICP备09084417号