Similar articles
20 similar articles found (search time: 15 ms)
1.
Five experiments investigated the importance of shape and object manipulation when 12-month-olds were given the task of individuating objects representing exemplars of kinds in an event-mapping design. In Experiments 1 and 2, results of the study from Xu, Carey, and Quint (2004, Experiment 4) were partially replicated, showing that infants were able to individuate two natural-looking exemplars from different categories, but not two exemplars from the same category. In Experiment 3, infants failed to individuate two shape-similar exemplars (from Pauen, 2002a) from different categories. However, Experiment 4 revealed that allowing infants to manipulate objects shortly before the individuation task enabled them to individuate shape-similar objects from different categories. In Experiment 5, allowing object manipulation did not induce infants to individuate natural-looking objects from the same category. These findings suggest that object manipulation facilitates kind-based individuation of shape-similar objects by 12-month-olds.

2.
An important task of perceptual processing is to parse incoming information into distinct units and to keep track of those units over time as the same, persisting representations. Within the study of visual perception, maintaining such persisting object representations is helped by “object files”—episodic representations that store (and update) information about objects' properties and track objects over time and motion via spatiotemporal information. Although object files are typically discussed as visual, here we demonstrate that object–file correspondence can be computed across sensory modalities. An object file can be initially formed with visual input and later accessed with corresponding auditory information, suggesting that object files may be able to operate at a multimodal level of perceptual processing.

3.
Learning verbal semantic knowledge for objects has been shown to attenuate recognition costs incurred by changes in view from a learned viewpoint. Such findings were attributed to the semantic or meaningful nature of the learned verbal associations. However, recent findings demonstrate surprising benefits to visual perception after learning even noninformative verbal labels for stimuli. Here we test whether learning verbal information for novel objects, independent of its semantic nature, can facilitate a reduction in viewpoint-dependent recognition. To dissociate more general effects of verbal associations from those stemming from the semantic nature of the associations, participants learned to associate semantically meaningful (adjectives) or nonmeaningful (number codes) verbal information with novel objects. Consistent with a role of semantic representations in attenuating the viewpoint-dependent nature of object recognition, the costs incurred by a change in viewpoint were attenuated for stimuli with learned semantic associations relative to those associated with nonmeaningful verbal information. This finding is discussed in terms of its implications for understanding basic mechanisms of object perception as well as the classic viewpoint-dependent nature of object recognition.

4.
This study advances the hypothesis that, in the course of object recognition, attention is directed to distinguishing features: visual information that is diagnostic of object identity in a specific context. In five experiments, observers performed an object categorization task involving drawings of fish (Experiments 1–4) and photographs of natural sea animals (Experiment 5). Allocation of attention to distinguishing and non-distinguishing features was examined using primed-matching (Experiment 1) and visual probe (Experiments 2, 4, 5) methods, and manipulated by spatial precuing (Experiment 3). Converging results indicated that in performing the object categorization task, attention was allocated to the distinguishing features in a context-dependent manner, and that such allocation facilitated performance. Based on the view that object recognition, like categorization, is essentially a process of discrimination between probable alternatives, the implications of the findings for the role of attention to distinguishing features in object recognition are discussed.

5.
Harris, I. M., & Dux, P. E. (2005). Cognition, 95(1), 73–93.
The question of whether object recognition is orientation-invariant or orientation-dependent was investigated using a repetition blindness (RB) paradigm. In RB, the second occurrence of a repeated stimulus is less likely to be reported, compared to the occurrence of a different stimulus, if it occurs within a short time of the first presentation. This failure is usually interpreted as a difficulty in assigning two separate episodic tokens to the same visual type. Thus, RB can provide useful information about which representations are treated as the same by the visual system. Two experiments tested whether RB occurs for repeated objects that were either in identical orientations, or differed by 30, 60, 90, or 180 degrees. Significant RB was found for all orientation differences, consistent with the existence of orientation-invariant object representations. However, under some circumstances, RB was reduced or even eliminated when the repeated object was rotated by 180 degrees, suggesting easier individuation of the repeated objects in this case. A third experiment confirmed that the upside-down orientation is processed more easily than other rotated orientations. The results indicate that, although object identity can be determined independently of orientation, orientation plays an important role in establishing distinct episodic representations of a repeated object, thus enabling one to report them as separate events.

6.
The appearance and disappearance of an object in the visual field is accompanied by changes to multiple visual features at the object's location. When features at a location change asynchronously, the cue of common onset and offset becomes unreliable, with observers tending to report the most recent pairing of features. Here, we use these last feature reports to study the conditions that lead to a new object representation rather than an update to an existing representation. Experiments 1 and 2 establish that last feature reports predominate in asynchronous displays when feature durations are brief. Experiments 3 and 4 demonstrate that these reports also are critically influenced by whether features can be grouped using nontemporal cues such as common shape or location. The results are interpreted within the object-updating framework (Enns, Lleras, & Moore, 2010), which proposes that human vision is biased to represent a rapid image sequence as one or more objects changing over time.

7.
This study contrasted the role of surfaces and volumetric shape primitives in three-dimensional object recognition. Observers (N = 50) matched subsets of closed contour fragments, surfaces, or volumetric parts to whole novel objects during a whole–part matching task. Three factors were further manipulated: part viewpoint (either same or different between component parts and whole objects), surface occlusion (comparison parts contained either visible surfaces only, or a surface that was fully or partially occluded in the whole object), and target–distractor similarity. Similarity was varied in terms of systematic variation in nonaccidental (NAP) or metric (MP) properties of individual parts. Analysis of sensitivity (d′) showed a whole–part matching advantage for surface-based parts and volumes over closed contour fragments—but no benefit for volumetric parts over surfaces. We also found a performance cost in matching volumetric parts to wholes when the volumes showed surfaces that were occluded in the whole object. The same pattern was found for both same and different viewpoints, and regardless of target–distractor similarity. These findings challenge models in which recognition is mediated by volumetric part-based shape representations. Instead, we argue that the results are consistent with a surface-based model of high-level shape representation for recognition.

8.
Mendes, N., Rakoczy, H., & Call, J. (2008). Cognition, 106(2), 730–749.
Developmental research suggests that whereas very young infants individuate objects purely on spatiotemporal grounds, from (at latest) around 1 year of age children are capable of individuating objects according to the kind they belong to and the properties they instantiate. As the latter ability has been found to correlate with language, some have speculated whether it might be essentially language dependent and therefore uniquely human. Existing studies with non-human primates seem to speak against this hypothesis, but fail to present conclusive evidence due to methodological shortcomings. In the present experiments we set out to test non-linguistic object individuation in three great ape species with a refined manual search methodology. Experiment 1 tested for spatiotemporal object individuation: Subjects saw 1 or 2 objects simultaneously being placed inside a box into which they could reach, and then in both conditions only found 1 object. After retrieval of the 1 object, subjects reached again significantly more often when they had seen 2 than when they had seen 1 object. Experiment 2 tested for object individuation according to property/kind information only: Subjects saw 1 object being placed inside the box, and then either found that object (expected) or an object of a different kind (unexpected). Analogously to Experiment 1, after retrieval of the 1 object, subjects reached again significantly more often in the unexpected than in the expected condition. These results thus confirm previous findings suggesting that individuating objects according to their property/kind is neither uniquely human nor essentially language dependent. It remains to be seen, however, whether this kind of object individuation requires sortal concepts as human linguistic thinkers use them, or whether some simpler form of tracking properties is sufficient.

9.
How does the brain learn to recognize an object from multiple viewpoints while scanning a scene with eye movements? How does the brain avoid the problem of erroneously classifying parts of different objects together? How are attention and eye movements intelligently coordinated to facilitate object learning? A neural model provides a unified mechanistic explanation of how spatial and object attention work together to search a scene and learn what is in it. The ARTSCAN model predicts how an object's surface representation generates a form-fitting distribution of spatial attention, or "attentional shroud". All surface representations dynamically compete for spatial attention to form a shroud. The winning shroud persists during active scanning of the object. The shroud maintains sustained activity of an emerging view-invariant category representation while multiple view-specific category representations are learned and are linked through associative learning to the view-invariant object category. The shroud also helps to restrict scanning eye movements to salient features on the attended object. Object attention plays a role in controlling and stabilizing the learning of view-specific object categories. Spatial attention hereby coordinates the deployment of object attention during object category learning. Shroud collapse releases a reset signal that inhibits the active view-invariant category in the What cortical processing stream. Then a new shroud, corresponding to a different object, forms in the Where cortical processing stream, and search using attention shifts and eye movements continues to learn new objects throughout a scene. The model mechanistically clarifies basic properties of attention shifts (engage, move, disengage) and inhibition of return. It simulates human reaction time data about object-based spatial attention shifts, and learns with 98.1% accuracy and a compression of 430 on a letter database whose letters vary in size, position, and orientation. The model provides a powerful framework for unifying many data about spatial and object attention, and their interactions during perception, cognition, and action.
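The shroud competition described in this abstract can be caricatured in a few lines of code. The sketch below is emphatically not the ARTSCAN implementation: it is a toy winner-take-all dynamic with invented parameters (`inhibition`, `rate`), meant only to illustrate how mutually inhibiting surface activations can let one surface's activity come to dominate and persist as an attentional "shroud".

```python
import numpy as np

def compete_for_shroud(surface_activity, steps=50, inhibition=0.2, rate=0.1):
    """Toy winner-take-all competition among object-surface activations.

    Each surface excites itself (a shunting term a*(1 - a)) and is
    inhibited by the total activity of the competing surfaces. The
    surface with the highest initial activity ends up dominant and
    plays the role of the winning 'shroud'. All parameters are
    illustrative, not taken from the ARTSCAN model.
    """
    a = np.array(surface_activity, dtype=float)
    for _ in range(steps):
        total = a.sum()
        # self-excitation minus inhibition from all competing surfaces
        a += rate * (a * (1.0 - a) - inhibition * (total - a))
        a = np.clip(a, 0.0, 1.0)  # keep activations in a bounded range
    return a

# Three hypothetical surfaces; the first is slightly more salient.
activity = compete_for_shroud([0.55, 0.50, 0.30])
winner = int(np.argmax(activity))  # index 0: the most active surface wins
```

In the full model the winning shroud then gates category learning and eye movements; here the point is only that a simple competitive dynamic preserves and amplifies an initial salience advantage.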

10.
Human beings effortlessly perceive stimuli through their sensory systems in order to learn about, understand, recognize, and act on their environment. Over the years, efforts have been made to bring cybernetic entities closer to performing human perception tasks and, more generally, to bring artificial intelligence closer to human intelligence. Neuroscience and the other cognitive sciences provide evidence about, and explanations of, certain aspects of visual perception in the human brain. Visual perception is a complex process that has been divided into several parts; object classification is one of those parts, and it is necessary for the declarative interpretation of the environment. This article addresses the object classification problem. We propose a neuroscience-based computational model of visual object classification consisting of two modular systems: a visual processing system, in charge of feature extraction, and a perception subsystem, which classifies objects based on the features extracted by the visual processing system. The results obtained are analyzed using similarity and dissimilarity matrices. Based on the neuroscientific evidence and on these results, we suggest directions for future work that would bring the model closer to performing visual classification as humans do.
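As a rough illustration of the two-module decomposition this abstract describes (and not the authors' actual model), the sketch below pairs a toy feature extractor with a nearest-prototype classifier. The feature choices and the `smooth`/`textured` stimuli are invented for the example.

```python
import numpy as np

def extract_features(image):
    """Toy stand-in for the visual processing module: summarize an
    image array with a few coarse statistics (mean, variance, and
    gradient-based edge energy)."""
    img = np.asarray(image, dtype=float)
    gx, gy = np.gradient(img)
    return np.array([img.mean(), img.var(),
                     np.abs(gx).mean() + np.abs(gy).mean()])

def classify(features, prototypes):
    """Toy stand-in for the perception subsystem: nearest-prototype
    classification over the extracted feature vectors."""
    labels = list(prototypes)
    dists = [np.linalg.norm(features - prototypes[label]) for label in labels]
    return labels[int(np.argmin(dists))]

# Hypothetical two-class setup: a uniform patch vs. a checkerboard patch.
smooth = np.ones((8, 8)) * 0.5
textured = np.indices((8, 8)).sum(axis=0) % 2
prototypes = {
    "smooth": extract_features(smooth),
    "textured": extract_features(textured),
}
print(classify(extract_features(textured), prototypes))  # prints: textured
```

The design point is the clean interface between the modules: the classifier sees only the feature vector, so either module can be replaced (e.g., by a biologically motivated feature hierarchy) without touching the other.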

11.
There is evidence for developmental hierarchies in the type of information to which infants attend when reasoning about objects. Investigators have questioned the origin of these hierarchies and how infants come to identify new sources of information when reasoning about objects. The goal of the present experiments was to shed light on this debate by identifying conditions under which infants’ sensitivity to color information, which is slow to emerge, could be enhanced in an object individuation task. The outcome of Experiment 1 confirmed and extended previous reports that 9.5-month-olds can be primed, through exposure to events in which the color of an object predicts its function, to attend to color differences in a subsequent individuation task. The outcomes of Experiments 2-4 revealed age-related changes in the nature of the representations that support color priming. This is exemplified by three main findings. First, the representations that are formed during the color-function events are relatively specific. That is, infants are primed to use the color difference seen in the color-function events to individuate objects in the test events, but not other color differences. Second, 9.5-month-olds can be led to form more abstract event representations, and then generalize to other colors in the test events if they are shown multiple pairs of colors in the color-function events. Third, slightly younger 9-month-olds also can be led to form more inclusive categories with multiple color pairs, but only when they are allowed to directly compare the exemplars in each color pair during the present events. These results shed light on the development of categorization abilities, cognitive mechanisms that support color-function priming, and the kinds of experiences that can increase infants’ sensitivity to color information.

12.
We investigated the influence of size on identification, priming, and explicit memory for color photos of common objects. Participants studied objects displayed in small, medium, and large sizes, and memory was assessed with both implicit identification and explicit recognition tests. Overall, large objects were easier to identify than small objects, and study-to-test changes in object size impeded performance on explicit but not implicit memory tests. In contrast to previous findings with line-drawings of objects, but consistent with predictions from the distance-as-filtering hypothesis, we found that study-test size manipulations had large effects on the old/new recognition memory test for objects displayed in a large size at test, but not for objects displayed in small or medium sizes at test. Our findings add to the growing body of literature showing that findings obtained using line-drawings of objects do not necessarily generalize to color photos of common objects. We discuss implications of our findings for theories of object perception, memory, and eyewitness identification accuracy for objects.

13.
Infants’ ability to accurately represent and later recognize previously viewed objects, and conversely, to discriminate novel objects from those previously seen improves remarkably over the first two years of life. During this time, infants acquire extensive experience viewing and manipulating objects and these experiences influence their physical reasoning. Here we posited that infants’ observations of object feature stability (rigid versus malleable) can influence the use of those features to individuate two successively viewed objects. We showed 8.5-month-olds a series of objects that could or could not change shape, then assessed their use of shape as a basis for object individuation. Infants who explored rigid objects later used shape differences to individuate objects; however, infants who explored malleable objects did not. This outcome suggests that the latter infants did not take into account shape differences during the physical reasoning task and provides further evidence that infants’ attention to object features can be readily modified based on recent experiences.

14.
Nontemporal information processing involving short-term memory requirements disturbs time estimation. Previous studies mostly used letters or digits, which are maintained in working memory by phonological loops. Since verbal and nonverbal information are processed by separate working-memory subsystems, how do nonverbal, object-based memory tasks affect time estimation? We manipulated visual object memory load using the magic cube materials. Participants were divided into three groups, who completed a reaction-time task (control task), a memory-recognition task interposed by an attempt to produce a 2500-ms time interval (active processing), and a memory-recognition task following time interval production (passive retention). The produced time increased with increasing memory-object size under both the active processing and passive retention conditions; mean produced time interval did not significantly differ between the two experimental conditions. By comparing the reaction times and error rates of a relevant task, we excluded any speed–accuracy tradeoff during timing. This result suggests that when the working-memory information to be processed includes objects requiring attention for retention, the production of time intervals is also affected by memory item maintenance.

15.
16.
Four experiments investigated whether 12-month-old infants use perceptual property information in a complex object individuation task, using the violation-of-expectancy looking time method (Xu, 2002; Xu & Carey, 1996). Infants were shown two objects with different properties emerge and return behind an occluder, one at a time. The occluder was then removed, revealing either two objects (expected outcome, if property differences support individuation) or one object (unexpected outcome). In Experiments 1-3, infants failed to use color, size, or a combination of color, size, and pattern differences to establish a representation of two distinct objects behind an occluder. In Experiment 4, infants succeeded in using cross-basic-level-kind shape differences to establish a representation of two objects but failed to do so using within-basic-level-kind shape differences. Control conditions found that the methods were sensitive. Infants succeeded when provided unambiguous spatiotemporal information for two objects, and they encoded the property differences during these experiments. These findings suggest that by 12 months, different properties play different roles in a complex object individuation task. Certain salient shape differences enter into the computation of numerical distinctness of objects before other property differences such as color or size. Since shape differences are often correlated with object kind differences, these results converge with others in the literature that suggest that by the end of the first year of life, infants' representational systems begin to distinguish kinds and properties.

17.
This study describes infants’ behaviors with objects in relation to age, body position, and object properties. Object behaviors were assessed longitudinally in 22 healthy infants supine, prone, and sitting from birth through 2 years. Results reveal: (1) infants learn to become intense and sophisticated explorers within the first 6 months of life; (2) young infants dynamically and rapidly shift among a variety of behavioral combinations to gather information; (3) behaviors on objects develop along different trajectories so that behavioral profiles vary across time; (4) object behaviors are generally similar in supine and sitting but diminished in prone; and (5) infants begin matching certain behaviors to object properties as newborns. These data demonstrate how infants learn to match their emerging behaviors with changing positional constraints and object affordances.

18.
4.5-month-old infants can use information learned from prior experience with objects to help determine the boundaries of objects in a complex visual scene (Needham, 1998; Needham, Dueker, & Lockhead, 2002). The present studies investigate the effect of delay (between prior experience and test) on infant use of such experiential knowledge. Results indicate that infants can use experience with an object to help them to parse a scene containing that object 24 h later (Experiment 1). Experiment 2 suggests that after 24 h infants have begun to forget some object attributes, and that this forgetting promotes generalization from one similar object to another. After a 72-h delay, infants did not show any beneficial effect of prior experience with one of the objects in the scene (Experiments 3A and B). However, prior experience with multiple objects, similar to an object in the scene, facilitated infant segregation of the scene 72 h later, suggesting that category information remains available in infant memory longer than experience with a single object. The results are discussed in terms of optimal infant benefit from prior experiences with objects.

19.
This research examined whether 4-month-old infants use a discontinuity in an object's front surface to visually segregate a display into two separate objects, and whether object shape enables its use. In Experiment 1, infants saw a three-dimensional display composed of two parts with distinctly different shapes. Two groups of infants saw a display in which these two shapes were divided by a visible discontinuity in the front surface (i.e., a boundary between the two objects). One of these groups saw the display move apart at the discontinuity when a gloved hand pulled one object; the second group saw the two objects move together as a single unit. A third group saw a modified version of this display that had no discontinuity present. The results suggested that infants regarded the discontinuity as an indication that the display could be composed of more than one object. In Experiment 2, infants saw the same display, but with a shape that did not highlight the discontinuity. The infants in this study showed no evidence of using the discontinuity. Together, the findings suggest that 4-month-old infants use the surface discontinuity between two objects as an indication that multiple objects could be present in a display, but only when scanning the outer edges of the display leads them to attend to it.

20.
Cotton top tamarins were tested in visible and invisible displacement tasks in a method similar to that used elsewhere to test squirrel monkeys and orangutans. All subjects performed at levels significantly above chance on visible (n=8) and invisible (n=7) displacements, wherein the tasks included tests of the perseverance error, tests of memory in double and triple displacements, and "catch" trials that tested for the use of the experimenter's hand as a cue for the correct cup. Performance on all nine tasks was significantly higher than chance level selection of cups, and tasks using visible displacements generated more accurate performance than tasks using invisible displacements. Performance was not accounted for by a practice effect based on exposure to successive tasks. Results suggest that tamarins possess stage 6 object permanence capabilities, and that in a situation involving brief exposure to tasks and foraging opportunities, tracking objects' movements and responding more flexibly are abilities expressed readily by the tamarins.

