Similar Literature
20 similar records were retrieved.
1.
Five experiments were conducted to examine how the perceived direction of motion is influenced by aspects of the shape of a moving object, such as symmetry and elongation. Random polygons moving obliquely were presented on a computer screen, and the perceived direction of motion was measured. Experiments 1 and 2 showed that a symmetric object moving off its axis of symmetry caused motion to be perceived as more aligned with the axis than it actually was. However, Experiment 3 showed that motion did not influence the perceived orientation of the symmetry axis. Experiment 4 revealed that symmetric shapes yielded faster judgments of the direction of motion than asymmetric shapes only when the motion was along the axis. Experiment 5 showed that elongation causes a bias in the perceived direction of motion similar to the effect of symmetry. The existence of such biases is consistent with the hypothesis that, in the course of evolution, the visual system has been adapted to regularities of motion in the animate world.

2.
Two experiments examined developmental changes in children's visual recognition of common objects between 18 and 24 months. Experiment 1 examined children's ability to recognize common category instances presenting three different kinds of information: (1) richly detailed and prototypical instances presenting local and global shape information together with color, texture, and featural information; (2) the same rich and prototypical shapes but with no color, texture, or surface featural information; or (3) only abstract and global representations of object shape in terms of geometric volumes. Significant developmental differences were observed only for the abstract shape representations in terms of geometric volumes, the kind of shape representation that has been hypothesized to underlie mature object recognition. Further, these differences were strongly linked in individual children to the number of object names in their productive vocabulary. Experiment 2 replicated these results and showed further that the less advanced children's object recognition was based on the piecemeal use of individual features and parts rather than on overall shape. The results provide further evidence for significant and rapid developmental changes in object recognition during the same period in which children first learn object names. The implications of the results for theories of visual object recognition, the relation of object recognition to category learning, and the underlying developmental processes are discussed.

3.
Words have been shown to influence many cognitive tasks, including category learning. Most demonstrations of these effects have focused on instances in which words facilitate performance. One possibility is that words augment representations, predicting an across-the-board benefit of words during category learning. We propose that words shift attention to dimensions that have been historically predictive in similar contexts. Under this account, there should be cases in which words are detrimental to performance. The results from two experiments show that words impair learning of object categories under some conditions. Experiment 1 shows that words hurt performance when learning to categorize by texture. Experiment 2 shows that words also hurt when learning to categorize by brightness, leading learners to attend selectively to shape when both shape and hue could be used to categorize the stimuli correctly. We suggest that both the positive and negative effects of words have developmental origins in the history of word usage while learning categories.

4.
This study used the object-reviewing paradigm to examine the role of the continuity of feature change in maintaining continuous object representations. Experiments 1 and 2 explored, respectively, how the manner of change along the shape dimension (no change, gradual change, abrupt change) and along the luminance dimension (no change, gradual change, random change) affected the object-specific preview benefit. Under feature-continuous conditions (no change or gradual change), both experiments obtained the object-specific preview benefit, whereas under feature-discontinuous conditions (abrupt or random change) the effect disappeared. These results indicate that the continuity of feature change likewise influences the maintenance of continuous object representations.

5.
Converging evidence supports a distributed-plus-hub view of semantic processing, in which there are distributed modular semantic sub-systems (e.g., for shape, colour, and action) connected to an amodal semantic hub. Furthermore, object semantic processing of colour and shape, and lexical reading and identification, are processed mainly along the ventral stream, while action semantic processing occurs mainly along the dorsal stream. In Experiment 1, participants read a prime word that required imagining either the object or action referent, and then named a lexical word target. In Experiments 2 and 3, participants performed a lexical decision task (LDT) with the same targets as in Experiment 1, in the presence of foils that were legal nonwords (NW; Experiment 2) or pseudohomophones (PH; Experiment 3). Semantic priming was similar in effect size regardless of prime type for naming, but was greater for object primes than action primes for the LDT with PH foils, suggesting a shared-stream advantage when the task demands focus on orthographic lexical processing. These experiments extend the distributed-plus-hub model, and provide a novel paradigm for further research.

6.
7.
We investigated the role of global (body) and local (parts) motion in the recognition of unfamiliar objects. Participants were trained to categorise moving objects and were then tested on their recognition of static images of these targets using a priming paradigm. Each static target shape was primed by a moving object that had either the same body and parts motion as the learned target; the same body motion but different parts motion; a different body motion but the same parts motion; or no motion at all. Only the same body motion, but not the same parts motion, facilitated shape recognition (Experiment 1), even when either motion was diagnostic of object identity (Experiment 2). When the parts motion was more closely related to the object's body motion, it facilitated recognition of the static target (Experiment 3). Our results suggest that global and local motions are accessed independently during object recognition; these findings have important implications for how objects are represented in memory.

8.
Adults’ concurrent processing of numerical and action information yields bidirectional interference effects consistent with a cognitive link between these two systems of representation. This link is in place early in life: infants create expectations of congruency across numerical and action-related stimuli (i.e., a small [large] hand aperture associated with a smaller [larger] numerosity). Although these studies point to a developmental continuity of this mapping, little is known about its later development and thus about how experience shapes such relationships. We explored how number–action intuitions develop across early and later childhood using the same methodology as in adults. We asked 3-, 6-, and 8-year-old children, as well as adults, to relate the magnitude of an observed action (a static hand shape, open vs. closed, in Experiment 1; a dynamic hand movement, opening vs. closing, in Experiment 2) to either a small or a large nonsymbolic quantity (numerosity in Experiment 1; numerosity and/or object size in Experiment 2). From 6 years of age, children started to perform in a systematically congruent way in some conditions, but only 8-year-olds (added in Experiment 2) and adults performed reliably above chance in this task. We provide initial evidence that the early intuitions guiding infants’ mapping of magnitude between nonsymbolic number and observed action are used explicitly only from late childhood, with the mapping between action and size possibly being the most intuitive. An initial coarse mapping between number and action is likely modulated by extensive experience with grasping and related actions directed at both arrays and individual objects.

9.
Five experiments investigated the importance of shape and object manipulation when 12-month-olds were given the task of individuating objects representing exemplars of kinds in an event-mapping design. In Experiments 1 and 2, results of the study from Xu, Carey, and Quint (2004, Experiment 4) were partially replicated, showing that infants were able to individuate two natural-looking exemplars from different categories, but not two exemplars from the same category. In Experiment 3, infants failed to individuate two shape-similar exemplars (from Pauen, 2002a) from different categories. However, Experiment 4 revealed that allowing infants to manipulate objects shortly before the individuation task enabled them to individuate shape-similar objects from different categories. In Experiment 5, allowing object manipulation did not induce infants to individuate natural-looking objects from the same category. These findings suggest that object manipulation facilitates kind-based individuation of shape-similar objects by 12-month-olds.

10.
An observer's memory for the final position of a moving object is shifted forward in the direction of that object's motion; this shift is called representational momentum (RM). This study addressed stimulus-specific effects on RM. In Experiment 1, participants showed a larger memory shift for an object moving in its typical direction of motion than when it moved in a nontypical direction. In Experiment 2, participants showed a larger memory shift for a pointed pattern moving in the direction of its point than when it moved in the opposite direction. In Experiment 3, we again examined the influences of knowledge about objects' typical motions and of the pointedness of objects, because we had not controlled the shape (pointedness) of the objects in Experiment 1. The results showed that only pointedness affected the magnitude of the memory shift and that this effect was smaller than the momentum effect.

11.
In three experiments, we independently manipulated the angular disparity between objects to be compared and the angular distance between the central axis of the objects and the vertical axis in a mental rotation paradigm. There was a linear increase in reaction times that was attributable to both factors. This result held whether the objects were rotated (with respect to each other and to the upright) within the frontal-parallel plane (Experiment 1) or in depth (Experiment 2), although the effects of both factors were greater for objects rotated in depth than for objects rotated within the frontal-parallel plane (Experiment 3). In addition, the factors interacted when the subjects had to search for matching ends of the figures (Experiments 1 and 2), but they were additive when the ends that matched were evident (Experiment 3). These data may be interpreted to mean that subjects normalize or reference an object with respect to the vertical upright as well as compute the rotational transformations used to determine shape identity.
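As a hedged illustration of the additive versus interactive reaction-time patterns described in this abstract (the linear-model form and the symbols below are ours, not the authors'): writing \(\theta_d\) for the angular disparity between the objects and \(\theta_v\) for the angular distance of their central axis from the vertical, the additive pattern corresponds to \(\mathrm{RT} = \beta_0 + \beta_1\,\theta_d + \beta_2\,\theta_v\), whereas the interactive pattern (found when matching ends had to be searched for) adds a cross term, \(\mathrm{RT} = \beta_0 + \beta_1\,\theta_d + \beta_2\,\theta_v + \beta_3\,(\theta_d \times \theta_v)\).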

12.
Object knowledge refers to the understanding that all objects share certain properties. Various components of object knowledge (e.g., object occlusion, object causality) have been examined in human infants to determine its developmental origins. Viewpoint invariance--the understanding that an object viewed from different viewpoints is still the same object--is one area of object knowledge, however, that has received less attention. To this end, infants' capacity for viewpoint-invariant perception of multi-part objects was investigated. Three-month-old infants were tested for generalization to an object displayed on a mobile that differed only in orientation (i.e., viewpoint) from a training object. Infants were given experience with a wide range of object views (Experiment 1) or a more restricted range during training (Experiment 2). The results showed that infants generalized between a horizontal and vertical viewpoint (Experiment 1) that they could clearly discriminate between in other contexts (i.e., with restricted view experience, Experiment 2). Overall, the outcome shows that training experience with multiple viewpoints plays an important role in infants' ability to develop a general percept of an object's 3D structure and promotes viewpoint-invariant perception of multi-part objects; in contrast, restricting training experience impedes viewpoint-invariant recognition of multi-part objects.

13.
In three experiments with infants and one with adults we explored the generality, limitations, and informational bases of early form perception. In the infant studies we used a habituation-of-looking-time procedure and the method of Kellman (1984), in which responses to three-dimensional (3-D) form were isolated by habituating 16-week-old subjects to a single object in two different axes of rotation in depth, and testing afterward for dishabituation to the same object and to a different object in a novel axis of rotation. In Experiment 1, continuous optical transformations given by moving 16-week-old observers around a stationary 3-D object specified 3-D form to infants. In Experiment 2 we found no evidence of 3-D form perception from multiple, stationary, binocular views of objects by 16- and 24-week-olds. Experiment 3A indicated that perspective transformations of the bounding contours of an object, apart from surface information, can specify form at 16 weeks. Experiment 3B provided a methodological check, showing that adult subjects could neither perceive 3-D forms from the static views of the objects in Experiment 3A nor match views of either object across different rotations by proximal stimulus similarities. The results identify continuous perspective transformations, given by object or observer movement, as the informational bases of early 3-D form perception. Detecting form in stationary views appears to be a later developmental acquisition.

14.
15.
Intuitive physics: the straight-down belief and its origin
This study examines the nature and origin of a common misconception about moving objects. We first show through the use of pencil-and-paper problems that many people erroneously believe that an object that is carried by another moving object (e.g., a ball carried by a walking person) will, if dropped, fall to the ground in a straight vertical line. (In fact, such an object will fall forward in a parabolic arc.) We then demonstrate that this "straight-down belief" turns up not only on pencil-and-paper problems but also on a problem presented in a concrete, dynamic fashion (Experiment 1) and in a situation in which a subject drops a ball while walking (Experiment 2). We next consider the origin of the straight-down belief and propose that the belief may stem from a perceptual illusion. Specifically, we suggest that objects dropped from a moving carrier may be perceived as falling straight down or even backward, when in fact they move forward as they fall. Experiment 3, in which subjects view computer-generated displays simulating situations in which a carried object is dropped, and Experiment 4, in which subjects view a videotape of a walking person dropping an object, provide data consistent with this "seeing is believing" hypothesis.
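A minimal kinematic sketch of why the dropped object falls forward in a parabolic arc rather than straight down, assuming ideal projectile motion with no air resistance; the symbols \(v\) (carrier speed), \(h\) (release height), \(g\) (gravitational acceleration), and \(t\) (time) are introduced here for illustration and do not appear in the original abstract. At release the object retains the carrier's horizontal speed, so \(x(t) = v\,t\) and \(y(t) = h - \tfrac{1}{2} g t^{2}\); eliminating \(t\) gives \(y = h - \dfrac{g}{2v^{2}}\,x^{2}\), a forward-opening parabola in ground coordinates. Relative to the moving carrier, however, the object does descend straight down, which fits the intuition the abstract describes.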

16.
Prominent theories of action recognition suggest that during the recognition of actions the physical pattern of an action is associated with only one action interpretation (e.g., a person waving his arm is recognized as waving). In contrast to this view, studies examining the visual categorization of objects show that objects are recognized in multiple ways (e.g., a VW Beetle can be recognized as a car or as a beetle) and that categorization performance is based on the visual and motor-movement similarity between objects. Here, we studied whether there is evidence for multiple levels of categorization for social interactions (physical interactions with another person, e.g., handshakes). To do so, we compared the visual categorization of objects and social interactions (Experiments 1 and 2) in a grouping task and assessed the usefulness of motor and visual cues (Experiments 3, 4, and 5) for object and social-interaction categorization. Additionally, we measured recognition performance associated with recognizing objects and social interactions at different categorization levels (Experiment 6). We found that basic-level object categories were associated with a clear recognition advantage over subordinate recognition, whereas basic-level social-interaction categories provided only a small recognition advantage. Moreover, basic-level object categories were more strongly associated with similar visual and motor cues than basic-level social-interaction categories. The results suggest that the cognitive categories underlying the recognition of objects and social interactions are associated with different recognition performance. These results are in line with the idea that the same action can be associated with several action interpretations (e.g., a person waving his arm can be recognized as waving or as greeting).

17.
The present paper reports two experiments that investigate the critical features of an object's shape that automatically elicit recognition. Silhouettes of real objects (targets) and meaningless patterns (fillers), in both canonical and non-canonical formats, were presented to subjects in order to test whether information about the global shape of an object is sufficient for automatic object identification. In Experiment 1, target-filler discriminability was evaluated by means of a reality-decision task. In Experiment 2, subjects performed an elongation-decision task, previously shown to be sensitive to the influence of automatically activated object identities (Dell'Acqua & Job, 1998). Contrary to the previous findings, the present study shows that, although silhouettes were identified with surprisingly good accuracy in the reality-decision task, effects of object identity on the elongation-decision task were negligible.

18.
Recent investigations into how action affects perception have revealed an interesting “action effect”—that is, simply acting upon an object enhances its processing in subsequent tasks. Previous studies, however, relied only on manual responses, allowing an alternative stimulus-response binding account of the effect. The current study examined whether the action effect persists when the response modality changes. In Experiment 1, participants completed a modified action-effect paradigm in which they first produced an arbitrary manual response to a shape and then performed a visual search task in which the previous shape was either a valid or an invalid cue, responding with either a manual or a saccadic response. In line with previous studies, visual search was faster when the shape was a valid cue, but only if the shape had been acted upon. Critically, this action effect emerged similarly in both the manual and the ocular response conditions. This cross-modality action effect was successfully replicated in Experiment 2, and analysis of eye-movement trajectories further revealed similar action-effect patterns in direction and numerosity. These results rule out the stimulus-response binding account of the action effect and suggest that it indeed operates at an attentional level.

19.
The surface and boundaries of an object generally move in unison, so the motion of a surface could provide information about the motion of its boundaries. Here we report the results of three experiments on spatiotemporal boundary formation that indicate that information about the motion of a surface does influence the formation of its boundaries. In Experiment 1, shape identification at low texture densities was poorer for moving forms in which stationary texture was visible inside than for forms in which the stationary texture was visible only outside. In Experiment 2, the disruption found in Experiment 1 was removed by adding a second external boundary. We hypothesized that the disruption was caused by boundary assignment that perceptually grouped the moving boundary with the static texture. Experiment 3 revealed that accurate information about the motion of the surface facilitated boundary formation only when the motion was seen as coming from the surface of the moving form. Potential mechanisms for surface motion effects in dynamic boundary formation are discussed.

20.