Similar documents
20 similar documents retrieved.
1.
Can object names and functions act as cues to categories for infants? In Study 1, 14- and 18-month-old infants were shown novel category exemplars along with a function, a name, or no cues. Infants were then asked to "find another one," choosing between 2 novel objects (1 from the familiar category and the other not). Infants at both ages were more likely to select the category match in the function condition than in the no-cue condition. However, only at 18 months did naming the objects enhance categorization. Study 2 shows that names can facilitate categorization for 14-month-olds as well when a hint regarding the core meaning of the objects (the function of a single familiarization object) is provided.

2.
Three experiments are reported on the development of object categorization skills during the second year of life. Experiment 1 examined whether 14- and 18-month-old infants were capable of performing categorization at the animate/inanimate (A/I) level, the highest level of inclusiveness ever tested in infancy, using a sequential touching task. The 18-month-olds performed significantly above chance, and the 14-month-olds approached above-chance performance. In Experiments 2 and 3, 14-month-old infants participated in a sequential touching task in which the part features of animate and inanimate objects were modified, allowing for a test of partonomic (i.e., legs and wheels) vs. taxonomic (i.e., animates and inanimates) categorization. Infants did not favor partonomic categorization, suggesting that A/I categories are not formed solely based on object parts such as legs and wheels.
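The "above chance" comparison in sequential touching studies is typically evaluated against the run lengths of same-category touches that random touching would produce. The Python sketch below simulates such a chance baseline; the set sizes, the mean-run-length statistic, and the observed value are illustrative assumptions, not the analysis reported in this abstract.

```python
import random

def mean_run_length(sequence):
    """Average length of consecutive same-category touches in a touch sequence."""
    runs, current = [], 1
    for prev, nxt in zip(sequence, sequence[1:]):
        if nxt == prev:
            current += 1
        else:
            runs.append(current)
            current = 1
    runs.append(current)
    return sum(runs) / len(runs)

def chance_run_lengths(n_per_category=4, n_touches=8, n_sims=10_000, seed=0):
    """Simulate mean run lengths when an infant touches objects at random."""
    rng = random.Random(seed)
    categories = ["animate"] * n_per_category + ["inanimate"] * n_per_category
    return [
        mean_run_length([rng.choice(categories) for _ in range(n_touches)])
        for _ in range(n_sims)
    ]

if __name__ == "__main__":
    sims = chance_run_lengths()
    observed = 2.5  # hypothetical mean run length from one infant
    p = sum(s >= observed for s in sims) / len(sims)
    print(f"chance mean = {sum(sims) / len(sims):.2f}, p(run >= observed) = {p:.3f}")
```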

3.
In three experiments, we examined 17-month-olds' acquisition of novel symbols (words and gestures) as names for object categories. Experiment 1 compares infants' extension of novel symbols when they are presented within a familiar naming phrase (e.g., "Look at this [symbol]!") versus presented alone (e.g., "Look! ... [symbol]!"). Infants mapped novel gestures successfully in both naming contexts. However, infants mapped novel words only within the context of familiar naming phrases. Thus, although infants can learn both words and gestures, they have divergent expectations about the circumstances under which the 2 symbolic forms name objects. Experiments 2 and 3 test the hypothesis that infants' expectations about the circumstances under which words name objects are acquired by monitoring how adults indicate their intention to name. By employing a training paradigm, these two experiments demonstrated that infants can infer how an experimenter signals his or her intention to name an object on the basis of a very brief training experience.

4.
Children often extend names to novel artifacts on the basis of overall shape rather than core properties (e.g., function). This bias is claimed to reflect the fact that nonrandom structure is a reliable cue to an object having a specific designed function. In this article, we show that information about an object's design (i.e., about its creator's intentions) is neither necessary nor sufficient for children to override the shape bias. Children extend names on the basis of any information specifying the artifact's function (e.g., information about design, current use, or possible use), especially when this information is made salient when candidate objects for extension are introduced. Possible mechanisms via which children come to rely less on easily observable cues (e.g., shape) and more on core properties (e.g., function) are discussed.

5.
6.
Two experiments addressed the influence of secondary task performance at encoding on recall of different features of subject-performed tasks (SPTs) involving objects (e.g., turn the wallet). In Experiment 1, memory for verbs and colors of objects was assessed, with object names serving as cues. In Experiment 2, object and color memory were assessed, with verbs serving as cues. Results from both experiments indicated a greater deterioration of memory performance under divided attention for verbal features than for colors. In addition, intention to remember did not affect performance for any feature in either experiment. The overall pattern of outcome is discussed relative to the view that encoding of verbal features of SPTs is more attention-demanding than encoding of physical task features, such as color.

7.
Objects can control the focus of attention, allowing features on the same object to be selected more easily than features on different objects. In the present experiments, we investigated the perceptual processes that contribute to such object-based attentional effects. Previous research has demonstrated that object-based effects occur for single-region objects but not for multiple-region objects under some conditions (Experiment 1, Watson & Kramer, 1999). Such results are surprising, because most objects in natural scenes are composed of multiple regions. Previous findings could therefore limit the usefulness of an object-based selection mechanism. We explored the generality of these single-region selection results by manipulating the extent to which different (i.e., multiple) regions of a single object perceptually grouped together. Object-based attentional effects were attenuated when multiple regions did not group into a single perceptual object (Experiment 1). However, when multiple regions grouped together based on (1) edge continuation (Experiments 2 and 3) or (2) part and occlusion cues (Experiment 4), we observed object-based effects. Our results suggest that object-based attention is a robust process that can select multiple-region objects, provided the regions of such objects cohere on the basis of perceptual grouping cues.

8.
Abstract categories (i.e., groups of objects that do not share perceptual features, such as food) abound in everyday situations. The present looking time study investigated whether infants are able to distinguish between two abstract categories (food and toys), and how this ability may extend beyond perceived information by manipulating object familiarity in several ways. Test trials displayed 1) the exact familiarized objects paired as they were during familiarization, 2) a cross-pairing of these same familiar objects, 3) novel objects in the same category as the familiarized items, or 4) novel objects in a different category. Compared to the most familiar test trial (i.e., Familiar Category, Familiar Objects, Familiar Pairings), infants looked longer at all other test trials. Although there was a linear increase in looking time with increased novelty of the test trials (i.e., Novel Category as the most novel test trial), the looking times did not differ significantly between the Novel Category and Familiar Category, Unfamiliar Objects trials. This study contributes to our understanding of how infants form object categories based on object familiarity, object co-occurrence, and information abstraction.

9.
Although working memory has a highly constrained capacity limit of three or four items, both adults and toddlers can increase the total amount of stored information by "chunking" object representations in memory. To examine the developmental origins of chunking, we used a violation-of-expectation procedure to ask whether 7-month-old infants, whose working memory capacity is still maturing, also can chunk items in memory. In Experiment 1, we found that in the absence of chunking cues, infants failed to remember three identical hidden objects. In Experiments 2 and 3, we found that infants successfully remembered three hidden objects when provided with overlapping spatial and featural chunking cues. In Experiment 4, we found that infants did not chunk when provided with either spatial or featural chunking cues alone. Finally, in Experiment 5, we found that infants also failed to chunk when spatial and featural cues specified different chunks (i.e., were pitted against each other). Taken together, these results suggest that chunking is available before working memory capacity has matured but still may undergo important development over the first year of life.

10.
Prominent theories of action recognition suggest that during the recognition of actions the physical pattern of the action is associated with only one action interpretation (e.g., a person waving his arm is recognized as waving). In contrast to this view, studies examining the visual categorization of objects show that objects are recognized in multiple ways (e.g., a VW Beetle can be recognized as a car or a beetle) and that categorization performance is based on the visual and motor movement similarity between objects. Here, we studied whether there is evidence for multiple levels of categorization for social interactions (physical interactions with another person, e.g., handshakes). To do so, we compared visual categorization of objects and social interactions (Experiments 1 and 2) in a grouping task and assessed the usefulness of motor and visual cues (Experiments 3, 4, and 5) for object and social interaction categorization. Additionally, we measured recognition performance associated with recognizing objects and social interactions at different categorization levels (Experiment 6). We found that basic level object categories were associated with a clear recognition advantage compared to subordinate recognition, but basic level social interaction categories provided only a small recognition advantage. Moreover, basic level object categories were more strongly associated with similar visual and motor cues than basic level social interaction categories. The results suggest that cognitive categories underlying the recognition of objects and social interactions are associated with different performances. These results are in line with the idea that the same action can be associated with several action interpretations (e.g., a person waving his arm can be recognized as waving or greeting).

11.
We investigated the effects of semantic priming on initial encoding of briefly presented pictures of objects and scenes. Pictures in four experiments were presented for varying durations and were followed immediately by a mask. In Experiments 1 and 2, pictures of simple objects were either preceded or not preceded by the object's category name (e.g., dog). In Experiment 1 we measured immediate object identification; in Experiment 2 we measured delayed old/new recognition in which targets and distractors were from the same categories. In Experiment 3 naturalistic scenes were either preceded or not preceded by the scene's category name (e.g., supermarket). We measured delayed recognition in which targets and distractors were described by the same category names. In Experiments 1-3, performance was better for primed than for unprimed pictures. Experiment 4 was similar to Experiment 2 in that we measured delayed recognition for simple objects. As in Experiments 1-3, a prime that preceded the object improved subsequent memory performance for the object. However, a prime that followed the object did not affect subsequent performance. Together, these results imply that priming leads to more efficient information acquisition. We offer a picture-processing model that accounts for these results. The model's central assumption is that knowledge of a picture's category (gist) increases the rate at which visual information is acquired from the picture.
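The gist-based account described here can be pictured as an information-accumulation curve whose rate is higher when the category is primed. The sketch below is only an illustration of that assumption, with made-up rate parameters; it is not the authors' fitted model.

```python
import math

def information_acquired(exposure_ms, rate_per_ms):
    """Exponential approach to complete information: I(t) = 1 - exp(-r * t)."""
    return 1.0 - math.exp(-rate_per_ms * exposure_ms)

# Hypothetical rates: knowing the gist (the category prime) raises the acquisition rate.
RATE_UNPRIMED = 0.010  # per millisecond, illustrative only
RATE_PRIMED = 0.020    # per millisecond, illustrative only

for exposure in (25, 50, 100, 200):  # masked presentation durations in ms
    unprimed = information_acquired(exposure, RATE_UNPRIMED)
    primed = information_acquired(exposure, RATE_PRIMED)
    print(f"{exposure:>4} ms  unprimed={unprimed:.2f}  primed={primed:.2f}")
```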

12.
We investigated the ability of people to retrieve information about objects as they moved through rooms in a virtual space. People were probed with object names that were either associated with the person (i.e., carried) or dissociated from the person (i.e., just set down). Also, people either did or did not shift spatial regions (i.e., go to a new room). Information about objects was less accessible when the objects were dissociated from the person. Furthermore, information about an object was also less available when there was a spatial shift. However, the spatial shift had a larger effect on memory for the currently associated object. These data are interpreted as being more supportive of a situation model explanation, following on work using narratives and film. Simpler memory-based accounts that do not take into account the context in which a person is embedded cannot adequately account for the results.

13.
Self-propelled motion is a powerful cue that conveys information that an object is animate. In this case, animate refers to an entity's capacity to initiate motion without an applied external force. Sensitivity to this motion cue is present in infants that are a few months old, but whether this sensitivity is experience-dependent or is already present at birth is unknown. Here, we tested newborns to examine whether predispositions to process self-produced motion cues underlying animacy perception were present soon after birth. We systematically manipulated the onset of motion by self-propulsion (Experiment 1) and the change in trajectory direction in the presence or absence of direct contact with an external object (Experiments 2 and 3) to investigate how these motion cues determine preference in newborns. Overall, data demonstrated that, at least at birth, the self-propelled onset of motion is a crucial visual cue that allowed newborns to differentiate between self- and non-self-propelled objects (Experiment 1) because when this cue was removed, newborns did not manifest any visual preference (Experiment 2), even if they were able to discriminate between the stimuli (Experiment 3). To our knowledge, this is the first study aimed at identifying sensitivity in human newborns to the most basic and rudimentary motion cues that reliably trigger perceptions of animacy in adults. Our findings are compatible with the hypothesis of the existence of inborn predispositions to visual cues of motion that trigger animacy perception in adults.

14.
One prestudy based on a corpus analysis and four experiments in which participants had to invent novel names for persons or objects (N = 336 participants in total) investigated how the valence of a face or an object affects the phonological characteristics of the respective novel name. Based on the articulatory feedback hypothesis, we predicted that /i:/ is included more frequently in fictional names for faces or objects with a positive valence than for those with a negative valence. For /o:/, the pattern should reverse. An analysis of the Berlin Affective Word List – Reloaded (BAWL-R) yielded a higher number of occurrences of /o:/ in German words with negative valence than in words with positive valence; with /i:/ the situation is less clear. In Experiments 1 and 2, participants named persons showing a positive or a negative facial expression. Names for smiling persons included more /i:/s and fewer /o:/s than names for persons with a negative facial expression. In Experiments 3 and 4, participants heard a Swahili narration and invented pseudo-Swahili names for objects with positive, neutral, or negative valence. Names for positive objects included more /i:/s than names for neutral or negative objects, and names for negative objects included more /o:/s than names for neutral or positive objects. These findings indicate a stable vowel-emotion link.
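The corpus part of this work comes down to counting how often the target vowels occur in words of each valence class. The sketch below shows such a count on placeholder entries; the transcriptions and valence labels are invented, not taken from the BAWL-R, and the field names are assumptions for illustration.

```python
from collections import Counter

# Placeholder entries: (phonemic transcription, valence label).
# A real analysis would use transcriptions and valence ratings from a corpus such as the BAWL-R.
lexicon = [
    ("li:bə",  "positive"),   # hypothetical transcription
    ("zi:k",   "positive"),
    ("to:t",   "negative"),
    ("gro:l",  "negative"),
    ("mo:nt",  "neutral"),
]

def vowel_counts(entries, vowels=("i:", "o:")):
    """Count occurrences of each target vowel per valence class."""
    counts = {vowel: Counter() for vowel in vowels}
    for transcription, valence in entries:
        for vowel in vowels:
            counts[vowel][valence] += transcription.count(vowel)
    return counts

for vowel, by_valence in vowel_counts(lexicon).items():
    print(vowel, dict(by_valence))
```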

15.
Waxman SR, Braun I. Cognition, 2005, 95(3): B59–B68.
Recent research documents that for infants just beginning to produce words on their own, novel words highlight commonalities among named objects and, in this way, serve as invitations to form categories. The current experiment identifies more precisely the source of this invitation. We asked whether applying a consistent name to a set of distinct objects is crucial to categorization, or whether variable names might serve the same conceptual function. The evidence suggests that for 12-month-old infants, consistency in naming is critical. Infants hearing a single consistent novel noun for a set of distinct objects successfully formed object categories. Infants hearing different novel nouns for the same set of objects did not. These results lend strength and greater precision to the argument that naming has powerful and rather nuanced conceptual consequences for infants as well as for mature speakers.

16.
In laboratory experiments, infants are sensitive to patterns of visual features that co-occur (e.g., Fiser & Aslin, 2002). Once infants learn the statistical regularities, however, what do they do with that knowledge? Moreover, which patterns do infants learn in the cluttered world outside of the laboratory? Across 4 experiments, we show that 9-month-olds use this sensitivity to make inferences about object properties. In Experiment 1, 9-month-old infants expected co-occurring visual features to remain fused (i.e., infants looked longer when co-occurring features split apart than when they stayed together). Forming such expectations can help identify integral object parts for object individuation, recognition, and categorization. In Experiment 2, we increased the task difficulty by presenting the test stimuli simultaneously with a different spatial layout from the familiarization trials to provide a more ecologically valid condition. Infants did not make similar inferences in this more distracting test condition. However, Experiment 3 showed that a social cue did allow inferences in this more difficult test condition, and Experiment 4 showed that social cues helped infants choose patterns among distractor patterns during learning as well as during test. These findings suggest that infants can use feature co-occurrence to learn about objects and that social cues shape such foundational learning in distraction-filled environments.
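The co-occurrence regularities these infants pick up can be described as how often feature pairs appear together across familiarization scenes. The sketch below tallies such pair frequencies for invented scenes; the feature labels and scene sets are hypothetical, not the study's stimuli.

```python
from collections import Counter
from itertools import combinations

# Hypothetical familiarization scenes, each a set of visual features shown together.
scenes = [
    {"red_circle", "blue_square", "green_triangle"},
    {"red_circle", "blue_square", "yellow_star"},
    {"red_circle", "blue_square", "purple_cross"},
]

pair_counts = Counter()
for scene in scenes:
    for pair in combinations(sorted(scene), 2):
        pair_counts[pair] += 1

# Feature pairs seen in every scene are the regularities an infant could learn
# and later expect to stay fused as parts of a single object.
for pair, count in pair_counts.most_common():
    print(pair, f"seen in {count}/{len(scenes)} scenes")
```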

17.
Words from different grammatical categories (e.g., nouns and adjectives) highlight different aspects of the same objects (e.g., object categories and object properties). Two experiments examine the acquisition of this phenomenon in 14-month-olds, asking whether infants can construe the very same set of objects (e.g., four purple animals) either as members of an object category (e.g., animals) or as embodying a salient object property (e.g., four purple things) and whether naming (with either count nouns or adjectives) influences infants' construals. Results suggest (1) that infants have begun to distinguish count nouns from adjectives, (2) that infants share with mature language-users an expectation that different grammatical forms highlight different aspects, and (3) that infants recruit these expectations when extending novel words. Further, these results suggest that an expectation linking count nouns to object categories emerges early in acquisition and supports the emergence of other word-to-world mappings.

18.
Considerable evidence indicates that shape similarity plays a major role in object recognition, identification and categorization. However, little is known about shape processing and its development. Across four experiments, we addressed two related questions. First, what makes objects similar in shape? Second, how does the processing of shape similarity develop? We specifically asked whether children and adults determine shape similarity by using categories (e.g., straight vs. curved), as proposed by Biederman (1987), or whether they treat all shape variability uniformly, as proposed by Ullman (1998). Findings from Experiments 1 and 2 suggest that adults and 7-year-olds generally engage in a process in which they impose categories on shape variation and judge objects that fall within those categories as being similar in shape. Four-year-olds are far less likely to engage in such a process. Experiments 3 and 4 address whether 4-year-olds are more likely to treat shape similarity categorically (as older children and adults do) when the objects are given familiar names, functions, and internal properties. Naming did lead to more advanced treatment of shape similarity in some cases. Overall, these findings provide evidence of developmental differences in shape processing and suggest that knowledge of abstract properties of objects may affect the calculation of shape similarity.

19.
This paper investigates the role of static and dynamic attributes for the animate-inanimate distinction in category-based reasoning of 7-month-olds. Three experiments tested infants' responses to movement events involving an unfamiliar animal and a ball. When either the animal or the ball showed self-initiated irregular movements (Experiment 1), infants expected the previously active object to start moving again. When both objects were moving together in an ambiguous motion event (Experiment 2), infants expected only the animal to start moving again. Initial looking preferences for each object did not influence results. When either the facial features of the animal were removed, or its furry body was replaced by a plastic spiral in an ambiguous motion event (Experiment 3), infants formed no clear expectation regarding future movements. Based on this set of findings we conclude that 7-month-olds flexibly combine information about the static and dynamic properties of objects in order to reason about motion events.

20.
Three experiments investigated (a) the development of infants' use of features to find a boundary between 2 adjacent objects and (b) the possible connection between this ability and the development of object exploration skills. In Experiments 1 and 2, it was shown that 3 1/2-month-old infants are beginning to use object features to determine the composition of a display, interpreting a display composed of different-looking parts as 2 separate objects and a display of similar-looking parts as a single object. In Experiment 3, exploration and segregation abilities were assessed in the same infants. The results of this study were that the more actively exploring infants perceived the display used in Experiment 1 as 2 separate objects, whereas the less actively exploring infants did not. One hypothesis consistent with these findings is that infants may learn how object features can be used to find object boundaries as a result of new observations made possible by their more active exploration skills.
