Similar Documents
20 similar documents found (search time: 15 ms)
1.
Objects are rarely viewed in isolation, and so how they are perceived is influenced by the context in which they are viewed and their interaction with other objects (e.g., whether objects are colocated for action). We investigated the combined effects of action relations and scene context on an object decision task. Experiment 1 investigated whether the benefit for positioning objects so that they interact is enhanced when objects are viewed within contextually congruent scenes. The results indicated that scene context influenced perception of nonaction-related objects (e.g., monitor and keyboard), but had no effect on responses to action-related objects (e.g., bottle and glass) that were processed more rapidly. In Experiment 2, we reduced the saliency of the object stimuli and found that, under these circumstances, scene context influenced responses to action-related objects. We discuss the data in terms of relatively late effects of scene processing on object perception.

2.
Configural coding is known to take place between the parts of individual objects but has never been shown between separate objects. We provide novel evidence here for configural coding between separate objects through a study of the effects of action relations between objects on extinction. Patients showing visual extinction were presented with pairs of objects that were or were not co-located for action. We first confirmed the reduced extinction effect for objects co-located for action. Consistent with prior results showing that inversion disrupts configural coding, we found that inversion disrupted the benefit for action-related object pairs. This occurred both for objects with a standard canonical orientation (e.g., teapot and teacup) and those without, but where grasping and using the objects was made more difficult by inversion (e.g., spanner and nut). The data suggest that part of the affordance effect may reflect a visuo-motor response to the configural relations between stimuli. Experiment 2 showed that distorting the relative sizes of the objects also reduced the advantage for action-related pairs. We conclude that action-related pairs are processed as configurations.

3.
Previous studies have shown that attention is drawn to the location of manipulable objects and is distributed across pairs of objects that are positioned for action. Here, we investigate whether central, action-related objects can cue attention to peripheral targets. Experiment 1 compared the effect of uninformative arrow and object cues on a letter discrimination task. Arrow cues led to spatial-cueing benefits across a range of stimulus onset asynchronies (SOAs: 0 ms, 120 ms, 400 ms), but object-cueing benefits were slow to build and were only significant at the 400-ms SOA. Similar results were found in Experiment 2, in which the targets were objects that could be either congruent or incongruent with the cue (e.g., screwdriver and screw versus screwdriver and glass). Cueing benefits were not influenced by the congruence between the cue and target, suggesting that the cueing effects reflected the action implied by the central object, not the interaction between the objects. For Experiment 3 participants decided whether the cue and target objects were related. Here, the interaction between congruent (but not incongruent) targets led to significant cueing/positioning benefits at all three SOAs. Reduced cueing benefits were obtained in all three experiments when the object cue did not portray a legitimate action (e.g., a bottle pointing towards an upper location, since a bottle cannot pour upwards), suggesting that it is the perceived action that is critical, rather than the structural properties of individual objects. The data suggest that affordance for action modulates the allocation of visual attention.

4.
Previous studies have shown that attention is drawn to the location of manipulable objects and is distributed across pairs of objects that are positioned for action. Here, we investigate whether central, action-related objects can cue attention to peripheral targets. Experiment 1 compared the effect of uninformative arrow and object cues on a letter discrimination task. Arrow cues led to spatial-cueing benefits across a range of stimulus onset asynchronies (SOAs: 0 ms, 120 ms, 400 ms), but object-cueing benefits were slow to build and were only significant at the 400-ms SOA. Similar results were found in Experiment 2, in which the targets were objects that could be either congruent or incongruent with the cue (e.g., screwdriver and screw versus screwdriver and glass). Cueing benefits were not influenced by the congruence between the cue and target, suggesting that the cueing effects reflected the action implied by the central object, not the interaction between the objects. For Experiment 3 participants decided whether the cue and target objects were related. Here, the interaction between congruent (but not incongruent) targets led to significant cueing/positioning benefits at all three SOAs. Reduced cueing benefits were obtained in all three experiments when the object cue did not portray a legitimate action (e.g., a bottle pointing towards an upper location, since a bottle cannot pour upwards), suggesting that it is the perceived action that is critical, rather than the structural properties of individual objects. The data suggest that affordance for action modulates the allocation of visual attention.

5.
Research has illustrated dissociations between "cognitive" and "action" systems, suggesting that different representations may underlie phenomenal experience and visuomotor behavior. However, these systems also interact. The present studies show a necessary interaction when semantic processing of an object is required for an appropriate action. Experiment 1 demonstrated that a semantic task interfered with grasping objects appropriately by their handles, but a visuospatial task did not. Experiment 2 assessed performance on a visuomotor task that had no semantic component and showed a reversal of the effects of the concurrent tasks. In Experiment 3, variations on concurrent word tasks suggested that retrieval of semantic information was necessary for appropriate grasping. In all, without semantic processing, the visuomotor system can direct the effective grasp of an object, but not in a manner that is appropriate for its use.

6.
By examining visual working memory for individual objects within object pairs arranged in typical spatial relations, this study investigated the decoding characteristics of spatial-position relations and the memory advantage for active objects. The results showed that: (1) memory was more accurate when objects were presented in a configuration consistent with their typical spatial-position relations; (2) under the consistent spatial-position condition, retrieval of a single object took longer, while the upper object was retrieved faster and remembered more accurately; (3) in object pairs combining spatial-position and action relations, memory accuracy was higher for the active object. These results indicate that decoding and order effects occur when a single object is retrieved from a spatially grouped pair in a realistic scene, and that there is a processing preference for upper and active objects in the real world.

7.
Vainio L, Symes E, Ellis R, Tucker M, Ottoboni G. Cognition, 2008, 108(2): 444-465
Recent evidence suggests that viewing a static prime object (a hand grasp) can activate action representations that affect the subsequent identification of graspable target objects. The present study explored whether stronger effects on target object identification would occur when the prime object (a hand grasp) was made more action-rich and dynamic. Of additional interest was whether this type of action prime would affect the generation of motor activity normally elicited by the target object. Three experiments demonstrated that grasp observation improved the identification of grasp-congruent target objects relative to grasp-incongruent target objects. We argue from these data that identifying a graspable object includes the processing of its action-related attributes. In addition, grasp observation was shown to influence the motor activity elicited by the target object, demonstrating interplay between action-based and object-based motor coding.

8.
9.
Spatial information is assumed to play a central, organizing role in object perception and to be an important ingredient of object representations. Here, evidence is provided to show that automatically integrated spatial object information is also functional in guiding spatial action. In particular, retrieving nonspatial information about a previewed object facilitates responses that spatially correspond to this object. This is true whether the object is still in sight or has already disappeared. So, forming an object representation entails the integration and storage of action-related information concerning the action that the object affords.

10.
In the present study, the authors examined the effect of tool use in a patient, MP, with neglect of peripersonal space. They found that target detection improved when the patient searched with his arm outstretched, when both visual and motor cues were present. Motor cues (arm outstretched but hidden from view) and visual cues alone (shining a torch on the objects) were less effective. In a final experiment, the authors reported that MP established a better memory for the objects that were searched for when a combined visual and motor cue was present. The authors argue that search was improved by combined visuomotor cuing, which was effective when the action could affect the objects present. Visuomotor cuing also led to stronger memories for searched locations, which reduced any tendency to reexamine positions that had been searched previously. The data are discussed in terms of the interaction between perception and action.

11.
Infants' intermodal perception of two levels of temporal structure uniting the visual and acoustic stimulation from natural, complex events was investigated in four experiments. Films depicting a single object (single, large marble) and a compound object (group of smaller marbles) colliding against a surface in an erratic pattern were presented to infants between 3 and months of age using an intermodal preference and search method. These stimulus events portrayed two levels of invariant temporal structure: (a) temporal synchrony united the sights and sounds of object impact, and (b) temporal microstructure, the internal temporal structure of each impact sound and motion, specified the composition of the object (single vs. compound). Experiment 1 demonstrated that by 6 months infants detected a relation between the audible and visible stimulation from these events when both levels of invariant temporal structure guided their intermodal exploration. Experiment 2 revealed that by 6 months infants detected the bimodal temporal microstructure specifying object composition. They looked predominantly to the film whose natural soundtrack was played even though the motions of objects in both films were synchronized with the soundtrack. Experiment 3 assessed infants' sensitivity to temporal synchrony relations. Two films depicting objects of the same composition were presented while the motions of only one of them were synchronized with the appropriate soundtrack. Both 6-month-olds showed evidence of detecting temporal synchrony relations under some conditions. Experiment 4 examined how temporal synchrony and temporal microstructure interact in directing intermodal exploration. The natural soundtrack to one of the objects was played out-of-synchrony with the motions of both. In contrast with the results of Experiment 2, infants at 6 months showed no evidence of detecting a relationship between the film and its appropriate soundtrack. 
This suggests that the temporal asynchrony disrupted their detection of the temporal microstructure specifying object composition. Results of these studies support an invariant-detection view of the development of intermodal perception.

12.
Five classes of relations between an object and its setting can characterize the organization of objects into real-world scenes. The relations are (1) Interposition (objects interrupt their background), (2) Support (objects tend to rest on surfaces), (3) Probability (objects tend to be found in some scenes but not others), (4) Position (given an object is probable in a scene, it often is found in some positions and not others), and (5) familiar Size (objects have a limited set of size relations with other objects). In two experiments subjects viewed brief (150 msec) presentations of slides of scenes in which an object in a cued location in the scene was either in a normal relation to its background or violated from one to three of the relations. Such objects appear to (1) have the background pass through them, (2) float in air, (3) be unlikely in that particular scene, (4) be in an inappropriate position, and (5) be too large or too small relative to the other objects in the scene. In Experiment I, subjects attempted to determine whether the cued object corresponded to a target object which had been specified in advance by name. With the exception of the Interposition violation, violation costs were incurred in that the detection of objects undergoing violations was less accurate and slower than when those same objects were in normal relations to their setting. However, the detection of objects in normal relations to their setting (innocent bystanders) was unaffected by the presence of another object undergoing a violation in that same setting. This indicates that the violation costs were incurred not because of an unsuccessful elicitation of a frame or schema for the scene but because properly formed frames interfered with (or did not facilitate) the perceptibility of objects undergoing violations. As the number of violations increased, target detectability generally decreased. 
Thus, the relations were accessed from the results of a single fixation and were available sufficiently early during the time course of scene perception to affect the perception of the objects in the scene. Contrary to expectations from a bottom-up account of scene perception, violations of the pervasive physical relations of Support and Interposition were not more disruptive on object detection than the semantic violations of Probability, Position and Size. These are termed semantic because they require access to the referential meaning of the object. In Experiment II, subjects attempted to detect the presence of the violations themselves. Violations of the semantic relations were detected more accurately than violations of Interposition and at least as accurately as violations of Support. As the number of violations increased, the detectability of the incongruities between an object and its setting increased. These results provide converging evidence that semantic relations can be accessed from the results of a single fixation. In both experiments information about Position was accessed at least as quickly as information on Probability. Thus in Experiment I, the interference that resulted from placing a fire hydrant in a kitchen was not greater than the interference from placing it on top of a mailbox in a street scene. Similarly, violations of Probability in Experiment II were not more detectable than violations of Position. Thus, the semantic relations which were accessed included information about the detailed interactions among the objects—information which is more specific than what can be inferred from the general setting. Access to the semantic relations among the entities in a scene is not deferred until the completion of spatial and depth processing and object identification. Instead, an object's semantic relations are accessed simultaneously with its physical relations as well as with its own identification.

13.
Consistency effects between objects in scenes
How does context influence the perception of objects in scenes? Objects appear in a given setting with surrounding objects. Do objects in scenes exert contextual influences on each other? Do these influences interact with background consistency? In three experiments, we investigated the role of object-to-object context on object and scene perception. Objects (Experiments 1 and 3) and backgrounds (Experiment 2) were reported more accurately when the objects and their settings were consistent than when they were inconsistent, regardless of the number of foreground objects. In Experiment 3, related objects (from the same setting) were reported more accurately than were unrelated objects (from different settings), independently of consistency with the background. Consistent with an interactive model of scene processing, both object-to-object context and object-background context affect object perception.

14.
Research on visuospatial memory has shown that egocentric (subject-to-object) and allocentric (object-to-object) reference frames are connected to categorical (non-metric) and coordinate (metric) spatial relations, and that motor resources are recruited especially when processing spatial information in peripersonal (within arm reaching) than extrapersonal (outside arm reaching) space. In order to perform our daily-life activities, these spatial components cooperate along a continuum from recognition-related (e.g., recognizing stimuli) to action-related (e.g., reaching stimuli) purposes. Therefore, it is possible that some types of spatial representations rely more on action/motor processes than others. Here, we explored the role of motor resources in the combinations of these visuospatial memory components. A motor interference paradigm was adopted in which participants had their arms bent behind their back or free during a spatial memory task. This task consisted in memorizing triads of objects and then verbally judging what was the object: (1) closest to/farthest from the participant (egocentric coordinate); (2) to the right/left of the participant (egocentric categorical); (3) closest to/farthest from a target object (allocentric coordinate); and (4) on the right/left of a target object (allocentric categorical). The triads appeared in participants' peripersonal (Experiment 1) or extrapersonal (Experiment 2) space. The results of Experiment 1 showed that motor interference selectively damaged egocentric-coordinate judgements but not the other spatial combinations. The results of Experiment 2 showed that the interference effect disappeared when the objects were in the extrapersonal space. A third follow-up study using a within-subject design confirmed the overall pattern of results. Our findings provide evidence that motor resources play an important role in the combination of coordinate spatial relations and egocentric representations in peripersonal space.  

15.
We investigated whether the impact of an object's orientation on a perceiver's actions (an orientation effect) is moderated by the perceiver's ability to act on the object in question. To do this, we manipulated the physical location of presented objects (Experiment 1) and the perceiver's action capacity (Experiment 2). Regardless of the physical distance of the object, manual responses were sensitive to the object's orientation (the orientation effect) when the object was within the participant's action range but not when the object was outside of the action range. These results support an embodied view of object perception and shed light on peripersonal space representation.

16.
We present neuropsychological evidence indicating that action influences spatial perception. First, we review evidence indicating that actions using a tool can modulate unilateral visual neglect and extinction, where patients are unaware of stimuli presented on one side of space. We show that, at least for some patients, modulation comes about through a combination of visual and motor cueing of attention to the affected side (Experiment 1). Subsequently, we review evidence that action‐relations between stimuli reduce visual extinction; there is less extinction when stimuli fall in the correct colocations for action relative to when they fall in the incorrect relations for action and relative to when stimuli are just associatively related. Finally, we demonstrate that action relations between stimuli can also influence the binding of objects to space, in a patient with Balint's syndrome (Experiment 2). These neuropsychological data indicate that perception–action couplings can be crucial to our conscious representation of space.

17.
Memory for objects helps us to determine how we can most effectively and appropriately interact with them. This suggests a tightly coupled interplay between action and background knowledge. Three experiments demonstrate that grasping circumference can be affected by the size of a visual stimulus (Experiment 1), whether that stimulus appears to be graspable (Experiment 2), and the presence of a label that renders that object ungraspable (Experiment 3). The results are taken to inform theories on conceptual representation and the functional distinction that has been drawn between the visual systems for perception and action.

18.
Across many areas of study in cognition, the capacity of working memory (WM) is widely agreed to be roughly three to five items: three to five objects (i.e., bound collections of object features) in the literature on visual WM or three to five role bindings (i.e., objects in specific relational roles) in the literature on memory and reasoning. Three experiments investigated the capacity of observers’ WM for the spatial relations among objects in a visual display, and the results suggest that the “items” in WM are neither simply objects nor simply role bindings. The results of Experiment 1 are most consistent with a model that treats an “item” in visual WM as an object, along with the roles of all its relations to one other object. Experiment 2 compared observers’ WM for object size with their memory for relative size and provided evidence that observers compute and store objects’ relations per se (rather than just absolute size) in WM. Experiment 3 tested and confirmed several more nuanced predictions of the model supported by Experiment 1. Together, these findings suggest that objects are stored in visual WM in pairs (along with all the relations between the objects in a pair) and that, from the perspective of WM, a given object in one pair is not the same “item” as that same object in a different pair.

19.
The action-specific account of spatial perception asserts that a perceiver’s ability to perform an action, such as hitting a softball or walking up a hill, impacts the visual perception of the target object. Although much evidence is consistent with this claim, the evidence has been challenged as to whether perception is truly impacted, as opposed to the responses themselves. These challenges have recently been organized as six pitfalls that provide a framework with which to evaluate the empirical evidence. Four case studies of action-specific effects are offered as evidence that meets the framework’s high bar, and thus that demonstrates genuine perceptual effects. That action influences spatial perception is evidence that perceptual and action-related processes are intricately and bidirectionally linked.

20.
The repetition blindness (RB) effect demonstrates that people often fail to detect the second presentation of an identical object (e.g., Kanwisher, 1987). Grouping of identical items is a well-documented perceptual phenomenon, and this grouping generally facilitates perception. These two effects pose a puzzle: RB impairs perception, while perceptual grouping improves it. Here, we combined these two effects and studied how they interact. In a series of three experiments, we presented repeated items in a simultaneous string, while manipulating the organization of the repeated items in groups within a string. We observed an interaction between RB and grouping that we summarize with a rule that we call “the survival of the grouped”: In essence, the ability to group repeated elements protects them from RB. These findings are discussed within the framework of the object file theory.

