Similar Articles
20 similar articles found (search time: 0 ms)
1.
Learning verbal semantic knowledge for objects has been shown to attenuate recognition costs incurred by changes in view from a learned viewpoint. Such findings were attributed to the semantic or meaningful nature of the learned verbal associations. However, recent findings demonstrate surprising benefits to visual perception after learning even noninformative verbal labels for stimuli. Here we test whether learning verbal information for novel objects, independent of its semantic nature, can facilitate a reduction in viewpoint-dependent recognition. To dissociate more general effects of verbal associations from those stemming from the semantic nature of the associations, participants learned to associate semantically meaningful (adjectives) or nonmeaningful (number codes) verbal information with novel objects. Consistent with a role of semantic representations in attenuating the viewpoint-dependent nature of object recognition, the costs incurred by a change in viewpoint were attenuated for stimuli with learned semantic associations relative to those associated with nonmeaningful verbal information. This finding is discussed in terms of its implications for understanding basic mechanisms of object perception as well as the classic viewpoint-dependent nature of object recognition.

2.
The ability to recognize the same image projected to different retinal locations is critical for visual object recognition in natural contexts. According to many theories, the translation invariance for objects extends only to trained retinal locations, so that a familiar object projected to a nontrained location should not be identified. In another approach, invariance is achieved “online,” such that learning to identify an object in one location immediately affords generalization to other locations. We trained participants to name novel objects at one retinal location using eyetracking technology and then tested their ability to name the same images presented at novel retinal locations. Across three experiments, we found robust generalization. These findings provide a strong constraint for theories of vision.

3.
An object's context may serve as a source of information for recognition when the object's image is degraded. The current study aimed to quantify this source of information. Stimuli were photographs of objects divided into quantized blocks. Participants decreased block size (increasing resolution) until identification. Critical resolution was compared across three conditions: (1) when the picture of the target object was shown in isolation, (2) in the object's contextual setting where that context was unfamiliar to the participant, and (3) where that context was familiar to the participant. A second experiment assessed the role of object familiarity without context. Results showed a profound effect of context: Participants identified objects in familiar contexts with minimal resolution. Unfamiliar contexts required higher-resolution images, but much less so than those without context. Experiment 2 found a much smaller effect of familiarity without context, suggesting that recognition in familiar contexts is primarily based on object-location memory.
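The block-quantization procedure described in this abstract, dividing a photograph into blocks and shrinking the block size until the object can be identified, can be sketched as follows. This is a minimal NumPy illustration; the function name and the particular block-size staircase are assumptions, not the study's actual stimuli.

```python
import numpy as np

def quantize_blocks(img: np.ndarray, block: int) -> np.ndarray:
    """Pixelate a grayscale image: replace every block x block region
    with its mean intensity. Larger blocks mean lower resolution."""
    out = img.astype(float).copy()
    h, w = img.shape
    for y in range(0, h, block):
        for x in range(0, w, block):
            out[y:y + block, x:x + block] = img[y:y + block, x:x + block].mean()
    return out

# A descending block-size staircase, coarse to fine, as a participant
# would step through it until identification (sizes are assumed).
img = np.random.rand(64, 64)
staircase = [quantize_blocks(img, b) for b in (32, 16, 8, 4, 2, 1)]
```

With block size 1 the original image is returned unchanged, so the final staircase level is the full-resolution photograph.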

4.
In a series of three experiments requiring selection of real objects for action, we investigated whether characteristics of the planned action and/or the “affordances” of target and distractor objects affected interference caused by distractors. In all of the experiments, the target object was selected on the basis of colour and was presented alone or with a distractor object. We examined the effect of type of response (button press, grasping, or pointing), object affordances (compatibility with the acting hand, affordances for grasping or pointing), and target/distractor positions (left or right) on distractor interference (reaction time differences between trials with and without distractors). Different patterns of distractor interference were associated with different motor responses. In the button-press conditions of each experiment, distractor interference was largely determined by perceptual salience (e.g., proximity to initial visual fixation). In contrast, in tasks requiring action upon the objects in the array, distractors with handles caused greater interference than those without handles, irrespective of whether the intended action was pointing or grasping. Additionally, handled distractors were relatively more salient when their affordances for grasping were strong (handle direction compatible with the acting hand) than when affordances were weak. These data suggest that attentional highlighting of specific target and distractor features is a function of intended actions.

5.
Three experiments investigated the effects of familiarity, practice, and stimulus variability on naming latencies for photographs of objects. Latencies for pictures of objects having the same name decreased most with practice when the same picture was always used to represent a given object (Condition Ps-Ns), less if different views of the same object were used (Condition Pv-Ns), and least if pictures of different objects having the same name were used (Condition Pd-Ns). In all cases, however, the effect of practice was significant. The savings in naming latency associated with practice on Conditions Ps-Ns and Pv-Ns showed almost no transfer to Condition Pd-Ns, even though the same responses were being given before and after transfer. However, practice on Condition Ps-Ns transferred completely to Condition Pv-Ns. Name frequency affected latency in all conditions. The frequency effect decreased slightly with practice. These results are related to several alternative models of the coding processes involved in naming objects. It is concluded that at least three types of representation may be necessary: visual codes, nonverbal semantic codes, and name codes. A distinction is made between visual codes that characterize two-dimensional stimuli and those that characterize three-dimensional objects.

6.
Balcetis E, Dale R. Perception, 2007, 36(4): 581-595
Four studies are reported which demonstrate that indirectly, loosely related information, otherwise known as conceptual set, modulates object identification. Studies 1A and 1B demonstrate the impact of indirect, nonspecific, non-perceptual, conceptual primes on the interpretation of ambiguous visual figures. Study 2 demonstrates that indirect, conceptual information (category of farm animals) biases identification without requiring the activation of direct perceptual information (here the image of a horse). Study 3 uses a non-linguistic dependent measure to address the alternative explanation that language and not perception mediates the relationship between incidental conceptual prime and biased object identification. These results suggest that conceptual set constrains object identification.

7.
Three experiments are reported that examined the relationship between covert visual attention and a viewer's ability to use extrafoveal visual information during object identification. Subjects looked at arrays of four objects while their eye movements were recorded. Their task was to identify the objects in the array for an immediate probe memory test. During viewing, the number and location of objects visible during given fixations were manipulated. In Experiments 1 and 2, we found that multiple extrafoveal previews of an object did not afford any more benefit than a single extrafoveal preview, as assessed by means of time of fixation on the objects. In Experiment 3, we found evidence for a model in which extrafoveal information acquired during a fixation derives primarily from the location toward which the eyes will move next. The results are discussed in terms of their implications for the relationship between covert visual attention and extrafoveal information use, and a sequential attention model is proposed.

8.
Object substitution masking (OSM) occurs when an initial display of a target and mask continues with the mask alone, creating a mismatch between the reentrant hypothesis, triggered by the initial display, and the ongoing low-level activity. We tested the proposition that the critical factor in OSM is not whether the mask remains in view after target offset, but whether the representation of the mask is sufficiently stronger than that of the target when the reentrant signal arrives. In Experiment 1, a variable interstimulus interval (ISI) was inserted between the initial display and the mask alone. The trailing mask was presumed to selectively boost the strength of the mask representation relative to that of the target. As predicted, OSM occurred at intermediate ISIs, at which the mask was presented before the arrival of the reentrant signal, creating a mismatch, but not at long ISIs, at which a comparison between the reentrant signal and the low-level activity had already been made. Experiment 2, conducted in dark-adapted viewing, ruled out the possibility that low-level inhibitory contour interactions (metacontrast masking) had played a significant role in Experiment 1. Metacontrast masking was further ruled out in Experiment 3, in which the masking contours were reduced to four small dots. We concluded that OSM does not depend on extended presentation of the mask alone, but on a mismatch between the reentrant signals and the ongoing activity at the lower level. The present results place constraints on estimates of the timing of reentrant signals involved in OSM.

9.
It has been demonstrated that the task-irrelevant left-right orientation of an object is capable of facilitating left-right-hand responses when the object is orientated towards the responding hand. We investigated the role of attention in this orientation effect. Experiment 1 showed that object orientation facilitates responses of the hand that is compatible with the object's orientation, despite the entire object being irrelevant. However, when a task-relevant fixation point was displayed over the prime object in Experiment 2, the effect was not observed. Together Experiments 1 and 2 suggest that the orientation information of viewed objects primes the action selection processes even when the object is irrelevant, but only when attention is not allocated to a competing stimulus during the prime presentation. Experiment 3 suggested that the elimination of the effect in Experiment 2 could not be attributed to the elimination of an attentional shift to the graspable part of the prime. Finally, Experiment 4 showed that object orientation can evoke an abstract response code, influencing the selection of finger responses.

11.
Object imagery refers to the ability to construct pictorial images of objects. Individuals with high object imagery (high-OI) produce more vivid mental images than individuals with low object imagery (low-OI), and they encode and process both mental images and visual stimuli in a more global and holistic way. In the present study, we investigated whether and how level of object imagery may affect the way in which individuals identify visual objects. High-OI and low-OI participants were asked to perform a visual identification task with spatially-filtered pictures of real objects. Each picture was presented at nine levels of filtering, starting from the most blurred (level 1: only low spatial frequencies—global configuration) and gradually adding high spatial frequencies up to the complete version (level 9: global configuration plus local and internal details). Our data showed that high-OI participants identified stimuli at a lower level of filtering than participants with low-OI, indicating that they were better able than low-OI participants to identify visual objects at lower spatial frequencies. Implications of the results and future developments are discussed.
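The nine-level filtering manipulation, starting from only low spatial frequencies and progressively adding high ones, can be sketched with an FFT low-pass filter. This is a hedged illustration: the geometric cutoff schedule and the function names are assumptions, not the filter actually used in the study.

```python
import numpy as np

def lowpass(img: np.ndarray, cutoff: float) -> np.ndarray:
    """Keep only spatial frequencies within `cutoff` (cycles/image)."""
    f = np.fft.fftshift(np.fft.fft2(img))
    h, w = img.shape
    yy, xx = np.mgrid[-(h // 2):h - h // 2, -(w // 2):w - w // 2]
    mask = np.hypot(yy, xx) <= cutoff
    return np.real(np.fft.ifft2(np.fft.ifftshift(f * mask)))

def filtering_levels(img: np.ndarray, n_levels: int = 9,
                     min_cutoff: float = 2.0) -> list:
    """Level 1: most blurred (low frequencies only); level n_levels:
    all frequencies retained (the complete image). The cutoffs rise
    geometrically, which is an assumed schedule, not the study's."""
    max_cutoff = float(max(img.shape))  # large enough to pass everything
    cutoffs = np.geomspace(min_cutoff, max_cutoff, n_levels)
    return [lowpass(img, c) for c in cutoffs]
```

Because the top cutoff exceeds the largest frequency radius in the image, the last level reproduces the unfiltered picture, matching the "complete version" at level 9.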

12.
Studies using functional imaging show reliable activation of premotor cortex when observers view manipulable objects. This result has led to the view that knowledge of object function, particularly the actions associated with the typical use of objects, may play a causal role in object identification. To obtain relevant evidence regarding this causal role, we asked subjects to learn gesture-color associations and then attempt to identify objects presented in colors denoting functional gestures that were congruent or incongruent with the objects' use. A strong congruency effect was observed when subjects gestured the use of an object, but not when they named an object. We conclude that our procedure constitutes a sensitive measure of the recruitment and causal role of functional knowledge and that this recruitment is not present during object naming. Preliminary evidence, however, indicates that gestures evoked by the volumetric shape of an object do contribute to object naming.

13.
The basis for the category-specific living-things advantage in object recognition (i.e., faster and more accurate identification of living compared to nonliving things) was investigated in two experiments. It was hypothesised that the global shape of living things on average provides more information about their basic level identity than the global shape of nonliving things. In two experiments subjects performed name-picture or picture-name verification tasks, in which blurred or clear images of living and nonliving things were presented in either the right or the left visual hemifield. With blurred images, recognition performance was worst for nonliving things presented to the right visual field/left hemisphere, indicating that the lack of visual detail in the stimulus combined with a left hemisphere bias toward processing high frequency visual elements proved detrimental for processing nonliving stimuli in this condition. In addition, an overall living-things advantage was observed in both experiments. This advantage was considerably larger with blurred images than with clear. These results are compatible with the global shape hypothesis and converge with evidence using other paradigms.

14.
This study contrasted the role of surfaces and volumetric shape primitives in three-dimensional object recognition. Observers (N = 50) matched subsets of closed contour fragments, surfaces, or volumetric parts to whole novel objects during a whole–part matching task. Three factors were further manipulated: part viewpoint (either same or different between component parts and whole objects), surface occlusion (comparison parts contained either visible surfaces only, or a surface that was fully or partially occluded in the whole object), and target–distractor similarity. Similarity was varied in terms of systematic variation in nonaccidental (NAP) or metric (MP) properties of individual parts. Analysis of sensitivity (d′) showed a whole–part matching advantage for surface-based parts and volumes over closed contour fragments—but no benefit for volumetric parts over surfaces. We also found a performance cost in matching volumetric parts to wholes when the volumes showed surfaces that were occluded in the whole object. The same pattern was found for both same and different viewpoints, and regardless of target–distractor similarity. These findings challenge models in which recognition is mediated by volumetric part-based shape representations. Instead, we argue that the results are consistent with a surface-based model of high-level shape representation for recognition.

15.
The present study investigated whether sensitivity to object violations in perception as well as in action would vary with age. Five-, 6-, and 11-yr.-old children and adults solved tasks which involved perception only, motoric indication of parts, actual assembly of parts, and drawing of a violated figure. In perception, object violation was the only factor showing change across age groups, with violations being increasingly noticed. In composition tasks involving motor components, object violation was just one factor besides quantity of parts and type of segmentation contributing to task difficulty and showing increase in performance across age groups. Analysis of object violations in visual structure required abilities similar to those needed when analysing shape interference. Improved visual detection and graphic construction of object violation seemed not to occur because segmentation increased quantitatively but more likely because fast perceptual processes came under scrutiny.

16.
A theory of visual interpolation in object perception
We describe a new theory explaining the perception of partly occluded objects and illusory figures, from both static and kinematic information, in a unified framework. Three ideas guide our approach. First, perception of partly occluded objects, perception of illusory figures, and some other object perception phenomena derive from a single boundary interpolation process. These phenomena differ only in respects that are not part of the unit formation process, such as the depth placement of units formed. Second, unit formation from static and kinematic information can be treated in the same general framework. Third, spatial and spatiotemporal discontinuities in the boundaries of optically projected areas are fundamental to the unit formation process. Consistent with these ideas, we develop a detailed theory of unit formation that accounts for most cases of boundary perception in the absence of local physical specification. According to this theory, discontinuities in the first derivative of projected edges are initiating conditions for unit formation. A formal notion of relatability is defined, specifying which physically given edges leading into discontinuities can be connected to others by interpolated edges. Intuitively, relatability requires that two edges be connectable by a smooth, monotonic curve. The roots of the discontinuity and relatability notions in ecological constraints on object perception are discussed. Finally, we elaborate our approach by discussing related issues, some new phenomena, connections to other approaches, and issues for future research.
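The relatability condition can be made concrete with a small geometric check. The sketch below is a simplified, assumed reading of the abstract's intuition: two edge endpoints with unit tangents heading into the gap count as relatable when their linear extensions meet ahead of both endpoints at an angle of 90 degrees or more, so a smooth, monotonic curve can join them. It is illustrative only, not the paper's formal definition.

```python
import numpy as np

def relatable(p1, d1, p2, d2, eps=1e-9):
    """Simplified relatability check: edge 1 ends at point p1 with unit
    tangent d1 heading into the gap; edge 2 ends at p2 with tangent d2.
    Relatable here means the linear extensions intersect ahead of both
    endpoints and turn by at most 90 degrees."""
    p1, d1, p2, d2 = (np.asarray(v, dtype=float) for v in (p1, d1, p2, d2))
    gap = p2 - p1
    det = d1[0] * (-d2[1]) - (-d2[0]) * d1[1]
    if abs(det) < eps:
        # Parallel extensions: relatable only if collinear and facing.
        collinear = abs(d1[0] * gap[1] - d1[1] * gap[0]) < eps
        return bool(collinear and d1 @ d2 < 0 and d1 @ gap > 0)
    # Solve p1 + t*d1 = p2 + s*d2 for the extension parameters t, s.
    t, s = np.linalg.solve(np.column_stack([d1, -d2]), gap)
    if t < -eps or s < -eps:
        return False                    # extensions diverge
    return bool(d1 @ -d2 >= -eps)       # turn of at most 90 degrees

# A straight continuation and a right-angle corner pass the check;
# parallel edges pointing the same way do not.
assert relatable((0, 0), (1, 0), (3, 0), (-1, 0))
assert relatable((0, 0), (1, 0), (2, 2), (0, -1))
assert not relatable((0, 0), (1, 0), (2, 1), (1, 0))
```

The 90-degree limit mirrors the requirement that the interpolated curve be smooth and monotonic: a sharper turn would force the connecting curve to double back.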

17.
We contrasted visual search for targets presented in prototypical views and targets presented in nonprototypical views, when targets were defined by their names and when they were defined by the action that would normally be performed on them. The likelihood of the first fixation falling on the target was increased for prototypical-view targets falling in the lower visual field. When targets were defined by actions, the durations of fixations were reduced for targets in the lower field. The results are consistent with eye movements in search being affected by representations within the dorsal visual stream, where there is strong representation of the lower visual field. These representations are sensitive to the familiarity or the affordance offered by objects in prototypical views, and they are influenced by action-based templates for targets.

18.
Adventitiously blinded, congenitally blind, and sighted adults made relative distance judgments in a familiar environment under three sets of instructions—neutral with respect to the metric of comparison, Euclidean (straight-line distance between landmarks), and functional (walking distance between landmarks). Analysis of error scores and multidimensional scaling procedures indicated that, although there were no significant differences among groups under functional instructions, all three groups differed from one another under Euclidean instructions. Specifically, the sighted group performed best and the congenitally blind group worst, with the adventitiously blind group in between. The results are discussed in the context of the role of visual experience in spatial representation and the application of these methods for evaluating orientation and mobility training for the blind.
