Similar Literature
20 similar documents found
1.
From infancy, we recognize that labels denote category membership and help us to identify the critical features that objects within a category share. Labels not only reflect how we categorize, but also allow us to communicate and share categories with others. Given the special status of labels as markers of category membership, do novel labels (i.e., non-words) affect the way in which adults select dimensions for categorization in unsupervised settings? Additionally, is the purpose of this effect primarily coordinative (i.e., do labels promote shared understanding of how we categorize objects)? To address this, we conducted two experiments in which participants individually categorized images of mountains with or without novel labels, and with or without a goal of coordination, within a non-communicative paradigm. People who sorted items with novel labels had more similar categories than people who sorted without labels only when they were told that their categories should make sense to other people, and not otherwise. We argue that sorters' goals determine whether novel labels promote the development of socially coherent categories.

2.
Labels can override perceptual categories in early infancy
Plunkett K, Hu JF, Cohen LB. Cognition, 2008, 106(2): 665-681.
An extensive body of research claims that labels facilitate categorisation, highlight the commonalities between objects and act as invitations to form categories for young infants before their first birthday. While this may indeed be a reasonable claim, we argue that it is not justified by the experiments described in the research. We report on a series of experiments that demonstrate that labels can play a causal role in category formation during infancy. Ten-month-old infants were taught to group computer-displayed, novel cartoon drawings into two categories under tightly controlled experimental conditions. Infants were given the opportunity to learn the two categories under four conditions: without any labels, with two labels that correlated with category membership, with two labels assigned randomly to objects, and with one label assigned to all objects. Category formation was assessed identically in all conditions using a novelty preference procedure conducted in the absence of any labels. The labelling condition had a decisive impact on the way infants formed categories: when two labels correlated with the visual category information, infants learned two categories, just as if there had been no labels presented. However, uncorrelated labels completely disrupted the formation of any categories. Finally, consistent use of a single label across objects led infants to learn one broad category that included all the objects. These findings demonstrate that even before infants start to produce their first words, the labels they hear can override the manner in which they categorise objects.

3.
We address the problem of predicting how people will spontaneously divide into groups a set of novel items. This is a process akin to perceptual organization. We therefore employ the simplicity principle from perceptual organization to propose a simplicity model of unconstrained spontaneous grouping. The simplicity model predicts that people would prefer the categories for a set of novel items that provide the simplest encoding of these items. Classification predictions are derived from the model without information either about the number of categories sought or about the distributional properties of the objects to be classified. These features of the simplicity model distinguish it from other models in unsupervised categorization (where, for example, the number of categories sought is determined via a free parameter), and we discuss how these computational differences are related to differences in modeling objectives. The predictions of the simplicity model are validated in four experiments. We also discuss the significance of simplicity in cognitive modeling more generally.
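The abstract does not give the model's equations, but its core idea admits a worked sketch: under a minimum-description-length reading of the simplicity principle, a candidate grouping is preferred when stating the categories plus their constraint violations costs fewer bits than stating the pairwise similarity relations outright. The scoring below is an illustrative simplification, not Pothos and Chater's exact formulation; the partition-encoding cost and the one-bit saving per satisfied constraint are assumptions.

```python
from itertools import combinations
from math import comb, log2

def description_length(n_items, sim, partition):
    """Illustrative MDL-style score for a candidate grouping (lower = simpler).

    sim[(i, j)] holds the judged similarity of items i < j. A grouping asserts
    that every within-category pair is more similar than every between-category
    pair; violated assertions must be flagged as "errors"."""
    label = {i: k for k, group in enumerate(partition) for i in group}
    pairs = list(combinations(range(n_items), 2))
    within = [p for p in pairs if label[p[0]] == label[p[1]]]
    between = [p for p in pairs if label[p[0]] != label[p[1]]]
    n_constraints = len(within) * len(between)
    errors = sum(1 for w in within for b in between if sim[w] <= sim[b])
    cost_partition = n_items * log2(max(len(partition), 2))  # assumed encoding
    cost_errors = log2(comb(n_constraints, errors)) if n_constraints else 0.0
    savings = n_constraints - errors  # assumed: ~1 bit per satisfied constraint
    return cost_partition + cost_errors - savings
```

Comparing this score across candidate partitions requires no free parameter for the number of categories, which mirrors the parameter-free character the abstract emphasizes.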

4.
Category learning is the process by which humans assign items to different categories. How category information is represented and how classification strategies are deployed have long been focal questions in category learning research. Unsupervised category learning can be divided into direct and indirect forms. In direct unsupervised category learning (both unconstrained and constrained tasks), participants' classification strategies show a characteristic "unidimensional bias," and the degree of within-category variability affects category representation; indirect unsupervised category learning tends to yield similarity-based representations, whereas direct unsupervised category learning yields rule-based representations. Existing theories of unsupervised category learning still offer only weak accounts of classification strategies and representations, and research on category transfer and knowledge effects across different learning tasks remains limited. Future research should further test how knowledge effects shape the cognitive processing involved in unsupervised category learning and explore the factors that influence the formation of category representations.

5.
Even though human perceptual development relies on combining multiple modalities, most categorization studies so far have focused on the visual modality. To better understand the mechanisms underlying multisensory categorization, we analyzed visual and haptic perceptual spaces and compared them with human categorization behavior. As stimuli we used a three-dimensional object space of complex, parametrically defined objects. First, we gathered similarity ratings for all objects and analyzed the perceptual spaces of both modalities using multidimensional scaling analysis. Next, we performed three different categorization tasks representative of everyday learning scenarios: in a fully unconstrained task, objects were freely categorized; in a semi-constrained task, exactly three groups had to be created; and in a constrained task, participants received three prototype objects and had to assign all other objects accordingly. We found that the haptic modality was on par with the visual modality both in recovering the topology of the physical space and in solving the categorization tasks. We also found that within-category similarity was consistently higher than across-category similarity for all categorization tasks, and we thus show how perceptual spaces based on similarity can explain visual and haptic object categorization. Our results suggest that both modalities employ similar processes in forming categories of complex objects.
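The multidimensional scaling step translates directly into code. Below is a minimal sketch assuming averaged pairwise similarity ratings on a 1-7 scale and using scikit-learn's MDS; the object count, rating scale, and random data are placeholders, not the study's actual materials.

```python
import numpy as np
from sklearn.manifold import MDS

rng = np.random.default_rng(0)
n_objects = 21                                  # hypothetical object count
sim = rng.uniform(1, 7, size=(n_objects, n_objects))
sim = (sim + sim.T) / 2                         # average ratings over presentation order
np.fill_diagonal(sim, 7.0)                      # an object is maximally similar to itself

dissim = sim.max() - sim                        # convert similarity to dissimilarity
mds = MDS(n_components=3, dissimilarity="precomputed", random_state=0)
coords = mds.fit_transform(dissim)              # recovered perceptual space
print(coords.shape, round(mds.stress_, 2))
```

The same embedding can be computed separately from visual and from haptic ratings, and the two configurations compared (e.g., via Procrustes alignment) to ask how well each modality recovers the physical parameter space.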

6.
Prominent theories of action recognition suggest that during the recognition of actions the physical pattern of an action is associated with only one action interpretation (e.g., a person waving his arm is recognized as waving). In contrast to this view, studies examining the visual categorization of objects show that objects are recognized in multiple ways (e.g., a VW Beetle can be recognized as a car or a beetle) and that categorization performance is based on the visual and motor-movement similarity between objects. Here, we studied whether there is evidence for multiple levels of categorization for social interactions (physical interactions with another person, e.g., handshakes). To do so, we compared visual categorization of objects and social interactions (Experiments 1 and 2) in a grouping task and assessed the usefulness of motor and visual cues (Experiments 3, 4, and 5) for object and social interaction categorization. Additionally, we measured recognition performance associated with recognizing objects and social interactions at different categorization levels (Experiment 6). We found that basic-level object categories were associated with a clear recognition advantage compared to subordinate recognition, but basic-level social interaction categories provided only a small recognition advantage. Moreover, basic-level object categories were more strongly associated with similar visual and motor cues than basic-level social interaction categories. The results suggest that the cognitive categories underlying the recognition of objects and social interactions are associated with different performance profiles. These results are in line with the idea that the same action can be associated with several interpretations (e.g., a person waving his arm can be recognized as waving or greeting).

7.
In three experiments, the authors provide evidence for a distinct category-invention process in unsupervised (discovery) learning and set forth a method for observing and investigating that process. In the first two experiments, the sequencing of unlabeled training instances strongly affected participants' ability to discover patterns (categories) across those instances. In the third experiment, providing diagnostic labels helped participants discover categories and improved learning even for instance sequences that were unlearnable in the earlier experiments. These results are incompatible with models that assume that people learn by incrementally tracking correlations between individual features; instead, they suggest that learners in this study used expectation failure as a trigger to invent distinct categories to represent patterns in the stimuli. The results are explained in terms of J. R. Anderson's (1990, 1991) rational model of categorization, and extensions of this analysis to real-world learning are discussed.
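Anderson's rational model, which the authors use to explain these results, can be sketched compactly: each incoming stimulus is assigned to the existing category that maximizes prior times likelihood, or to a brand-new category when that option scores higher. The coupling parameter c and the assumption of binary features below are illustrative choices, not the paper's fitted values.

```python
def rational_categorize(stimuli, c=0.5):
    """Sketch of J. R. Anderson's (1991) rational model: incremental MAP
    assignment of stimuli (tuples of binary features) to categories, with
    the option of inventing a new category at every step."""
    categories, assignments = [], []
    for n, stim in enumerate(stimuli):
        scores = []
        for members in categories:
            prior = c * len(members) / ((1 - c) + c * n)
            likelihood = 1.0
            for d, value in enumerate(stim):
                matches = sum(1 for m in members if m[d] == value)
                likelihood *= (matches + 1) / (len(members) + 2)  # Laplace smoothing
            scores.append(prior * likelihood)
        # score for inventing a new category (uniform over 2 values per dimension)
        scores.append(((1 - c) / ((1 - c) + c * n)) * 0.5 ** len(stim))
        best = max(range(len(scores)), key=scores.__getitem__)
        if best == len(categories):
            categories.append([stim])
        else:
            categories[best].append(stim)
        assignments.append(best)
    return assignments
```

Because assignment is incremental and greedy, the order of training instances changes which categories get invented, which is exactly the kind of sequencing effect the first two experiments exploit.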

8.
Categorization of seen objects is often determined by the shapes of objects. However, shape is not exclusive to the visual modality: The haptic system also is expert at identifying shapes. Hence, an important question for understanding shape processing is whether humans store separate modality-dependent shape representations, or whether information is integrated into one multisensory representation. To answer this question, we created a metric space of computer-generated novel objects varying in shape. These objects were then printed using a 3-D printer, to generate tangible stimuli. In a categorization experiment, participants first explored the objects visually and haptically. We found that both modalities led to highly similar categorization behavior. Next, participants were trained either visually or haptically on shape categories within the metric space. As expected, visual training increased visual performance, and haptic training increased haptic performance. Importantly, however, we found that visual training also improved haptic performance, and vice versa. Two additional experiments showed that the location of the categorical boundary in the metric space also transferred across modalities, as did heightened discriminability of objects adjacent to the boundary. This observed transfer of metric category knowledge across modalities indicates that visual and haptic forms of shape information are integrated into a shared multisensory representation.

9.
The shape bias, a preference for mapping new word labels onto the shape rather than the color or texture of referents, has been postulated as a word-learning mechanism. Previous research has shown deficits in the shape bias in children with autism even though they acquire sizeable lexicons. While previous explanations have suggested the atypical use of color for label extension in individuals with autism, we hypothesize an atypical mapping of novel labels to novel objects, regardless of the physical properties of the objects. In Experiment 1, we demonstrate this phenomenon in some individuals with autism, but the novelty of objects only partially explains their lack of shape bias. In a second experiment, we present a computational model that provides a developmental account of the shape bias in typically developing children and in those with autism. This model is based on theories of neurological dysfunctions in autism, and it integrates theoretical and empirical findings in the literature of categorization, word learning, and the shape bias. The model replicates the pattern of results of our first experiment and shows how individuals with autism are more likely to categorize experimental objects together on the basis of their novelty. It also provides insights into possible mechanisms by which children with autism learn new words, and why their word referents may be idiosyncratic. Our model highlights a developmental approach to autism that emphasizes deficient representations of categories underlying an impaired shape bias.

10.
Blinded by the accent! The minor role of looks in ethnic categorization
The categories that social targets belong to are often activated automatically. Most studies investigating social categorization have used visual stimuli or verbal labels, whereas ethnolinguistic identity theory posits that language is an essential dimension of ethnic identity; language should therefore be used for social categorization. In two experiments using the "Who Said What?" paradigm, the authors investigated social categorization by using accents (auditory stimuli) and looks (visual stimuli) to indicate ethnicity, either separately or in combination. Given either looks or accents alone, the authors demonstrated that ethnic categorization can be based on accents, and they found a similar degree of ethnic categorization by accents and by looks. When ethnic cues of looks and accents were combined to create crossed categories, accents clearly predominated as meaningful cues for categorization, as shown in the respective parameters of a multinomial model. The present findings are discussed with regard to the generalizability of results obtained through a single channel of presentation (e.g., visual) and the asymmetry found across presentation channels for the category of ethnicity.

11.
How do we learn to recognize visual categories, such as dogs and cats? Somehow, the brain uses limited variable examples to extract the essential characteristics of new visual categories. Here, I describe an approach to category learning and recognition that is based on recent computational advances. In this approach, objects are represented by a hierarchy of fragments that are extracted during learning from observed examples. The fragments are class-specific features and are selected to deliver a high amount of information for categorization. The same fragments hierarchy is then used for general categorization, individual object recognition and object-parts identification. Recognition is also combined with object segmentation, using stored fragments, to provide a top-down process that delineates object boundaries in complex cluttered scenes. The approach is computationally effective and provides a possible framework for categorization, recognition and segmentation in human vision.
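The fragment-selection criterion, choosing fragments that "deliver a high amount of information for categorization," is naturally read as mutual information between a binary fragment-detection variable and the class label. A minimal sketch of that criterion follows; the candidate fragments and detection data are hypothetical placeholders.

```python
from math import log2

def fragment_information(detected, in_class):
    """Mutual information (bits) between fragment presence and class label,
    estimated from co-occurrence counts over a set of example images.
    detected[i] and in_class[i] are 0/1 flags for image i."""
    n = len(detected)
    mi = 0.0
    for f in (0, 1):
        for c in (0, 1):
            p_fc = sum(d == f and y == c for d, y in zip(detected, in_class)) / n
            p_f = sum(d == f for d in detected) / n
            p_c = sum(y == c for y in in_class) / n
            if p_fc > 0:
                mi += p_fc * log2(p_fc / (p_f * p_c))
    return mi

# Rank candidate fragments by informativeness (hypothetical detections):
candidates = {"eye_patch": ([1, 1, 0, 1, 0, 0], [1, 1, 1, 0, 0, 0]),
              "corner":    ([1, 0, 1, 0, 1, 0], [1, 1, 1, 0, 0, 0])}
ranked = sorted(candidates, key=lambda k: -fragment_information(*candidates[k]))
```

Fragments that fire on most class members and few non-members score highest, which is why the selected fragments end up class-specific.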

12.
Most theories of categorization posit feature-based representations. Markman and Stilwell (2001) argued that many natural categories name roles in relational systems and are therefore role-governed categories. There is little extant empirical evidence to support the existence of role-governed categories. Three experiments examine predictions about ways in which role-governed categories should differ from feature-based categories. Experiment 1 shows that our knowledge of role-governed categories, in contrast to feature-based categories, is largely about properties extrinsic to category members. Experiment 2 shows that role-governed categories have more prominent ideals than feature-based categories. Experiment 3 demonstrates that novel role-governed categories are licensed by the instantiation of novel relational structures. We then discuss broader implications for the study of categories and concepts.

13.
14.
When searching for information in menu-based retrieval systems, users are given a choice among a set of categories. Successful retrieval of information depends critically on the user's understanding of the system's division of objects into categories and of the system's labels for those categories. This research compares several different ways of describing ill-defined categories of objects using combinations of names and examples. Examples provide a promising possibility, both as a means of flexibly naming new or difficult menu categories and as a methodological tool for studying certain categorization problems.

15.
Most natural domains can be represented in multiple ways: we can categorize foods in terms of their nutritional content or social role, animals in terms of their taxonomic groupings or their ecological niches, and musical instruments in terms of their taxonomic categories or social uses. Previous approaches to modeling human categorization have largely ignored the problem of cross-categorization, focusing on learning just a single system of categories that explains all of the features. Cross-categorization presents a difficult problem: how can we infer categories without first knowing which features the categories are meant to explain? We present a novel model that suggests that human cross-categorization is a result of joint inference about multiple systems of categories and the features that they explain. We also formalize two commonly proposed alternative explanations for cross-categorization behavior: a features-first and an objects-first approach. The features-first approach suggests that cross-categorization is a consequence of attentional processes, where features are selected by an attentional mechanism first and categories are derived second. The objects-first approach suggests that cross-categorization is a consequence of repeated, sequential attempts to explain features, where categories are derived first, then features that are poorly explained are recategorized. We present two sets of simulations and experiments testing the models' predictions about human categorization. We find that an approach based on joint inference provides the best fit to human categorization behavior, and we suggest that a full account of human category learning will need to incorporate something akin to these capabilities.
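A joint-inference account of this kind can be given a concrete scoring function. The sketch below assumes binary feature data, Chinese-restaurant-process priors over both partitions, and Beta-Bernoulli likelihoods; it scores a candidate cross-categorization (a division of features into systems, each with its own partition of the objects). It is a generic construction in the spirit of the model, not the authors' published implementation.

```python
import math

def crp_logp(block_sizes, alpha=1.0):
    """Log prior of a partition under a Chinese restaurant process."""
    n = sum(block_sizes)
    return (len(block_sizes) * math.log(alpha)
            + sum(math.lgamma(s) for s in block_sizes)   # (s-1)! terms
            - sum(math.log(alpha + i) for i in range(n)))

def beta_bernoulli_logp(values, a=1.0, b=1.0):
    """Log marginal likelihood of 0/1 values under a Beta-Bernoulli model."""
    ones = sum(values)
    zeros = len(values) - ones
    return (math.lgamma(a + b) - math.lgamma(a) - math.lgamma(b)
            + math.lgamma(a + ones) + math.lgamma(b + zeros)
            - math.lgamma(a + b + len(values)))

def cross_categorization_score(data, feature_systems, object_partitions, alpha=1.0):
    """Joint log score: features are split into systems, and each system
    explains the objects with its own partition. data[o][f] is 0 or 1."""
    score = crp_logp([len(fs) for fs in feature_systems], alpha)
    for features, obj_partition in zip(feature_systems, object_partitions):
        score += crp_logp([len(c) for c in obj_partition], alpha)
        for f in features:
            for category in obj_partition:
                score += beta_bernoulli_logp([data[o][f] for o in category])
    return score
```

Searching over feature systems and object partitions to maximize this score makes the contrast with the alternatives concrete: a features-first model would fix the feature grouping before scoring objects, and an objects-first model would fix a single object partition before recategorizing poorly explained features.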

16.
Research has shown that observers store surprisingly highly detailed long-term memory representations of visual objects after only a single viewing. However, the nature of these representations is currently not well understood. In particular, it may be that the nature of such memory representations is not unitary but reflects the flexible operating of two separate memory subsystems: a feature-based subsystem that stores visual experiences in the form of independent features, and an object-based subsystem that stores visual experiences in the form of coherent objects. Such an assumption is usually difficult to test, because overt memory responses reflect the joint output of both systems. Therefore, to disentangle the two systems, we (1) manipulated the affective state of observers (negative vs. positive) during initial object perception, to introduce systematic variance in the way that visual experiences are stored, and (2) measured both the electrophysiological activity at encoding (via electroencephalography) and later feature memory performance for the objects. The results showed that the nature of stored memory representations varied qualitatively as a function of affective state. Negative affect promoted the independent storage of object features, driven by preattentive brain activities (feature-based memory representations), whereas positive affect promoted the dependent storage of object features, driven by attention-related brain activities (object-based memory representations). Taken together, these findings suggest that visual long-term memory is not a unitary phenomenon. Instead, incoming information can be stored flexibly by means of two qualitatively different long-term memory subsystems, based on the requirements of the current situation.

17.
There is growing evidence that individuation experience is necessary for the development of expert object discrimination that transfers to new exemplars. Individuation training in human studies has primarily used label-association tasks in which labels are learned at both the individual and the more abstract (basic) level, and the expertise criterion requires that individual-level judgments become as fast as basic-level judgments. However, there are training situations in which the use of labels is not practical (e.g., with animals or some clinical populations). Moreover, labeling itself can facilitate object discrimination; thus, it is unclear what role labels play in the acquisition of expertise in such training paradigms. Here, participants completed an online game that did not require labels, in which they interacted with novel objects (Greebles) or control objects (Yufos). Games required either individuation or categorization. We then assessed the impact of this exposure on an abridged Greeble training paradigm. As expected, participants who played Yufo games or Greeble categorization games showed a significant basic-level advantage for Greebles in the abridged training paradigm, typical of novices. However, participants who played the Greeble identity game showed a reduced basic-level advantage, suggesting that individuation without labels may be sufficient to acquire perceptual expertise.

18.
Comparing Exemplar- and Rule-Based Theories of Categorization
We address whether human categorization behavior is based on abstracted rules or stored exemplars. Although the predictions of the two theories often mimic each other in many designs, they can be differentiated. The experimental data we review do not support either theory exclusively: participants use rules when the stimuli are confusable and exemplars when they are distinct. By drawing on the distinction between simple stimuli (such as lines of various lengths) and complex ones (such as words and objects), we offer a dynamic view of category learning. Initially, categorization is based on rules. During learning, suitable features for discriminating stimuli may be gradually learned. Then stimuli can be stored as exemplars and used to categorize novel stimuli without recourse to rules.
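The two theories being compared have standard formal versions, and a small sketch makes the contrast concrete. The exemplar side is illustrated here with Nosofsky's generalized context model and the rule side with a one-dimensional criterion; both are generic textbook forms, not the specific implementations of the studies reviewed, and the parameter values are illustrative.

```python
from math import exp

def exemplar_prob_a(probe, exemplars, labels, c=2.0):
    """Exemplar theory (generalized context model): P(category A) from summed
    similarity to every stored exemplar, with similarity decaying
    exponentially in city-block distance."""
    def sim(x, y):
        return exp(-c * sum(abs(a - b) for a, b in zip(x, y)))
    s_a = sum(sim(probe, e) for e, l in zip(exemplars, labels) if l == "A")
    s_b = sum(sim(probe, e) for e, l in zip(exemplars, labels) if l == "B")
    return s_a / (s_a + s_b)

def rule_prob_a(probe, dim=0, criterion=0.5):
    """Rule theory: categorize on a single abstracted dimension and ignore
    the rest, e.g. 'call it A if the first dimension exceeds 0.5'."""
    return 1.0 if probe[dim] > criterion else 0.0
```

On the dynamic view sketched in the abstract, responding would start out rule-like (the second function) and migrate toward exemplar-based generalization (the first) as discriminating features are learned.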

19.
Linguistic labels affect inductive generalization; however, the mechanism underlying these effects remains unclear. According to one similarity-based model, SINC (similarity, induction, naming, and categorization), early in development labels are features of objects contributing to the overall similarity of compared entities, with early induction being similarity based. If this is the case, then not only identical but also phonologically similar labels may contribute to the overall similarity and thus to induction. These predictions were tested in a series of experiments with 5-year-olds and adults. In Experiments 1-5 participants performed a label extension task, whereas in Experiment 6 they performed a feature induction task. Results indicate that phonological similarity contributes to early induction and support the notion that for young children labels are features of objects.
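SINC's key commitment, that labels enter the similarity computation as just another feature of the object, is easy to sketch. In the toy function below the attention weights are hypothetical, and label similarity can itself be graded by phonological overlap, which is what lets phonologically similar (not just identical) labels boost induction.

```python
def sinc_similarity(visual_a, visual_b, label_sim, w_visual=1.0, w_label=1.5):
    """SINC-style overall similarity sketch: a weighted blend of visual
    feature overlap and label similarity (0..1, e.g. phonological overlap
    between the two labels). The weights are illustrative, not fitted."""
    overlap = sum(a == b for a, b in zip(visual_a, visual_b)) / len(visual_a)
    return (w_visual * overlap + w_label * label_sim) / (w_visual + w_label)

# Identical labels (label_sim = 1.0) and merely similar labels ("fep" vs.
# "dep", say label_sim = 0.7) both raise overall similarity relative to
# dissimilar labels, predicting graded induction in young children.
```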

20.
We explore different ways in which the human visual system can adapt for perceiving and categorizing the environment. There are various accounts of supervised (categorical) and unsupervised perceptual learning, and different perspectives on the functional relationship between perception and categorization. We suggest that common experimental designs are insufficient to differentiate between hypothesized perceptual learning mechanisms and reveal their possible interplay. We propose a relatively underutilized way of studying potential categorical effects on perception, and we test the predictions of different perceptual learning models using a two-dimensional, interleaved categorization-plus-reconstruction task. We find evidence that the human visual system adapts its encodings to the feature structure of the environment, uses categorical expectations for robust reconstruction, allocates encoding resources with respect to categorization utility, and adapts to prevent miscategorizations.
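The claim that observers "use categorical expectations for robust reconstruction" has a standard Bayesian reading: the reported stimulus value is a reliability-weighted average of the noisy memory trace and the learned category mean. Here is a minimal sketch under Gaussian assumptions; the noise parameters are illustrative, not the paper's fitted values.

```python
def reconstruct(trace, category_mean, sigma_trace=1.0, sigma_category=2.0):
    """Bayesian reconstruction sketch: posterior mean of the stimulus given a
    noisy memory trace and a Gaussian category prior. The noisier the trace,
    the more the estimate is pulled toward the category mean."""
    w = sigma_category**2 / (sigma_category**2 + sigma_trace**2)
    return w * trace + (1 - w) * category_mean
```

On this reading, miscategorizing a stimulus would pull its reconstruction toward the wrong category mean, which is one reason allocating encoding resources by categorization utility pays off in this task.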

