Similar articles
Found 20 similar articles (search took 15 ms)
1.
It is unclear how children learn labels for multiple overlapping categories such as “Labrador,” “dog,” and “animal.” Xu and Tenenbaum (2007a) suggested that learners infer correct meanings with the help of Bayesian inference. They instantiated these claims in a Bayesian model, which they tested with preschoolers and adults. Here, we report data testing a developmental prediction of the Bayesian model—that more knowledge should lead to narrower category inferences when presented with multiple subordinate exemplars. Two experiments did not support this prediction. Children with more category knowledge showed broader generalization when presented with multiple subordinate exemplars, compared to less knowledgeable children and adults. This implies a U‐shaped developmental trend. The Bayesian model was not able to account for these data, even with inputs that reflected the similarity judgments of children. We discuss implications for the Bayesian model, including a combined Bayesian/morphological knowledge account that could explain the demonstrated U‐shaped trend.
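For concreteness, here is a minimal sketch of the size-principle likelihood that drives Bayesian word learners of this kind: the probability of n independent exemplars under a hypothesis of size |h| is (1/|h|)^n, so multiple subordinate exemplars rapidly favor the narrowest consistent category. The category sizes and priors below are invented placeholders, not values from Xu and Tenenbaum's model.

```python
# Size-principle sketch: posterior over nested category hypotheses
# after observing n exemplars, all drawn from the subordinate category.
# Hypothesis sizes and priors are illustrative, not the paper's values.

def posterior(hypotheses, n_exemplars):
    # Likelihood of n independent exemplars under hypothesis h: (1/|h|)^n,
    # assuming every exemplar is consistent with every hypothesis listed.
    scores = {name: prior * (1.0 / size) ** n_exemplars
              for name, (size, prior) in hypotheses.items()}
    z = sum(scores.values())
    return {name: s / z for name, s in scores.items()}

# Nested hypotheses: name -> (category size, prior probability).
hyps = {"Labrador": (2, 1/3), "dog": (10, 1/3), "animal": (100, 1/3)}

print(posterior(hyps, 1))  # one exemplar: broader categories keep some mass
print(posterior(hyps, 3))  # three subordinate exemplars: "Labrador" dominates
```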

2.
In the Appearance/Reality (AR) task some 3‐ and 4‐year‐old children make perseverative errors: they choose the same word for the appearance and the function of a deceptive object. Are these errors specific to the AR task, or signs of a general question‐answering problem? Preschoolers completed five tasks: AR; simple successive forced‐choice question pairs (QP); flexible naming of objects (FN); working memory (WM) span; and indeterminacy detection (ID). AR errors correlated with QP errors. Insensitivity to indeterminacy predicted perseveration in both tasks. Neither WM span nor flexible naming predicted other measures. Age predicted sensitivity to indeterminacy. These findings suggest that AR tests measure a pragmatic understanding; specifically, different questions about a topic usually call for different answers. This understanding is related to the ability to detect indeterminacy of each question in a series. AR errors are unrelated to the ability to represent an object as belonging to multiple categories, to working memory span, or to inhibiting previously activated words.

3.
Ambady, Krabbenhoft, Hogan, and Rosenthal (2006) demonstrated that “thin slices,” or very brief observations of behavior, are not only sufficient for drawing accurate automatic trait inferences but can actually improve accuracy relative to inferences based on larger amounts of information. Too much information, too much knowledge, or too much analysis can reduce the accuracy of intuitive judgment. Who benefits most, and what types of judgments benefit most, from thin‐slice data? When should people trust their intuition? The answers to these questions depend on informational variables, such as feedback quality and the consequences of inferential errors (Hogarth, 2001). Evidence is reviewed suggesting that consumers and managers should trust their intuition only when high‐quality (frequent, prompt, and diagnostic) feedback is available and when inferential errors are consequential and therefore easy to detect.

4.
Computerized assessment of knowledge is one of the most promising applications of knowledge space theory. The first requirement of any such assessment is to produce as accurate a picture as possible of the organization of the knowledge. So far, all procedures designed for this purpose rely exclusively on the query of an expert. Several experiments have shown the limitations of that approach in realistic conditions. One source of difficulty is the very high sensitivity of the querying algorithms to an expert's mistakes. Another source of difficulty concerns the validity of the expert: his or her knowledge structure may diverge greatly from the knowledge structure of the actual population. To solve these difficulties, the present paper proposes and simulates a two-step procedure. The first step implements a modification of an existing querying procedure, adding an error-handling mechanism that lowers the incidence of an expert's careless errors. The second step consists of a refinement mechanism that relies on the knowledge assessments of many subjects to refine the very structure used by those assessments. For each step, it is shown that the underlying knowledge structure can be recovered.
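As background on how such subject-based assessments work, the sketch below picks the knowledge state in a small toy structure that best explains a student's noisy answers, under a standard local-independence assumption with fixed careless-error and lucky-guess rates. The structure and rates are invented for illustration and are not the paper's procedure.

```python
# Toy knowledge-space assessment: choose the feasible knowledge state
# that maximizes the likelihood of an observed answer pattern under
# fixed careless-error (beta) and lucky-guess (eta) probabilities.

items = "abcde"
structure = [set(), {"a"}, {"b"}, {"a", "b"}, {"a", "b", "c"},
             {"a", "b", "d"}, {"a", "b", "c", "d"}, set(items)]

beta, eta = 0.1, 0.1  # careless-error and lucky-guess rates (invented)

def likelihood(state, answers):
    p = 1.0
    for q, correct in answers.items():
        if q in state:
            p *= (1 - beta) if correct else beta  # mastered item
        else:
            p *= eta if correct else (1 - eta)    # unmastered item
    return p

answers = {"a": True, "b": True, "c": False, "d": True, "e": False}
best = max(structure, key=lambda s: likelihood(s, answers))
print("most likely knowledge state:", sorted(best))  # ['a', 'b', 'd']
```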

5.
There has been a recent increase in interest in Bayesian analysis. However, little effort has been made thus far to directly incorporate background knowledge into analyses via the prior distribution. This process might be especially useful in the context of latent growth mixture modeling when one or more of the latent groups are expected to be relatively small, a situation we refer to as limited data. We argue that the use of Bayesian statistics has great advantages in limited-data situations, but only if background knowledge can be incorporated into the analysis via prior distributions. We highlight these advantages using a data set of patients with burn injuries, analyzing trajectories of posttraumatic stress symptoms within the Bayesian framework following the steps of the WAMBS checklist. In the included example, we illustrate how to obtain background information from previous literature via a systematic literature search and from expert knowledge. Finally, we show how to translate this knowledge into prior distributions, and we illustrate the importance of conducting a prior sensitivity analysis. Although our example is from the trauma field, the techniques we illustrate can be applied to any field.
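A minimal grid-approximation sketch of the kind of prior sensitivity check the authors recommend is given below: the same small data set is analyzed under an agreeing and a conflicting informative prior on a group mean, and the resulting posterior means are compared. All numbers are invented for illustration.

```python
# Prior sensitivity sketch: same data, two different priors on a mean.
import numpy as np
from scipy import stats

data = np.array([42.0, 55.0, 47.0, 60.0, 51.0])  # hypothetical symptom scores
grid = np.linspace(20, 80, 601)                  # candidate group means
sigma = 10.0                                     # residual SD, assumed known

def posterior_mean(prior_mean, prior_sd):
    prior = stats.norm.pdf(grid, prior_mean, prior_sd)
    like = np.prod(stats.norm.pdf(data[:, None], grid, sigma), axis=0)
    post = prior * like
    post /= post.sum()                           # normalize on the grid
    return np.sum(grid * post)

# An agreeing and a conflicting informative prior: the shift in the
# posterior mean shows how strongly the prior drives the result.
for pm, psd in [(50.0, 5.0), (30.0, 5.0)]:
    print(f"prior N({pm}, {psd}): posterior mean = {posterior_mean(pm, psd):.1f}")
```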

6.
In-group favoritism is ubiquitous and associated with intergroup conflict, yet is little understood from a biological perspective. A fundamental question regarding the structure of favoritism is whether it is inflexibly directed toward distinct, "essentialist" categories, such as ethnicity and race, or is deployed in a context-sensitive manner. In this article, we report the first study (to our knowledge) of the genetic and environmental structure of in-group favoritism in the religious, ethnic, and racial domains. We contrasted a model of favoritism based on a single domain-general central affiliation mechanism (CAM) with a model in which each domain was influenced by specific mechanisms. In a series of multivariate analyses, utilizing a large, representative sample of twins, models containing only the CAM or essentialist domains fit the data poorly. The best-fitting model revealed that a biological mechanism facilitates affiliation with arbitrary groups and exists alongside essentialist systems that evolved to process salient cues, such as shared beliefs and ancestry.

7.
Multiple‐choice tests are frequently used in personnel selection contexts to measure knowledge and abilities. Option weighting is an alternative multiple‐choice scoring procedure that awards partial credit for incomplete knowledge reflected in applicants’ distractor choices. We investigated whether option weights should be based on expert judgment or on empirical data when trying to outperform conventional number‐right scoring in terms of reliability and validity. To obtain generalizable results, we used repeated random sub‐sampling validation and found that empirical option weighting, but not expert option weighting, increased the reliability of a knowledge test. Neither option weighting procedure improved test validity. We recommend improving the reliability of existing ability and knowledge tests used for personnel selection by computing and publishing empirical option weights.
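One common way to compute empirical option weights, sketched below on simulated data, is to weight each option by the mean rest-score of the examinees who chose it. This is an assumption-laden illustration, not necessarily the weighting scheme used in the article.

```python
# Empirical option weighting sketch on simulated multiple-choice data.
import numpy as np

rng = np.random.default_rng(0)
n_persons, n_items, n_options = 200, 20, 4
key = rng.integers(n_options, size=n_items)                # correct option per item
resp = rng.integers(n_options, size=(n_persons, n_items))  # simulated answer sheet

number_right = (resp == key).sum(axis=1)                   # conventional scores

# Weight each option by the mean rest-score (total score excluding the
# item itself) of the examinees who chose that option.
weights = np.zeros((n_items, n_options))
for i in range(n_items):
    rest = number_right - (resp[:, i] == key[i])
    for o in range(n_options):
        chose = resp[:, i] == o
        if chose.any():
            weights[i, o] = rest[chose].mean()

# An examinee's option-weighted score sums the weights of chosen options.
weighted = weights[np.arange(n_items), resp].sum(axis=1)
print(weighted[:5])
```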

8.
The appeal to expert opinion is an argument form that uses the verdict of an expert to support a position or hypothesis. A previous scheme‐based treatment of the argument form is formalized within a Bayesian network that captures the critical aspects of the argument form, including the central considerations of the expert's expertise and trustworthiness. We propose this as an appropriate normative framework for the argument form, enabling the development and testing of quantitative predictions about how people evaluate this argument, and we suggest that such an approach might benefit argumentation research generally. We subsequently present two experiments as an example of the potential for future research in this vein, demonstrating that participants' quantitative ratings of the convincingness of a proposition supported by an appeal to expert opinion were broadly consistent with the predictions of the Bayesian model.
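The sketch below illustrates the general idea with a toy three-node network: the posterior probability of a claim given that an expert asserts it is obtained by marginalizing over the expert's expertise and trustworthiness. The structure loosely follows the description above; all conditional probabilities are invented.

```python
# Toy Bayesian network for the appeal to expert opinion, solved by
# brute-force enumeration; all probabilities are invented.
from itertools import product

p_true, p_expert, p_trust = 0.5, 0.8, 0.9  # priors on claim, expertise, trust

def p_assert(claim_true, expertise, trust):
    """P(expert asserts the claim | claim, expertise, trustworthiness)."""
    if not trust:
        return 0.5                       # untrustworthy: assertion uninformative
    acc = 0.9 if expertise else 0.6      # experts track the truth more closely
    return acc if claim_true else 1 - acc

num = den = 0.0
for t, e, r in product([True, False], repeat=3):
    p = ((p_true if t else 1 - p_true)
         * (p_expert if e else 1 - p_expert)
         * (p_trust if r else 1 - p_trust)
         * p_assert(t, e, r))
    den += p
    if t:
        num += p
print(f"P(claim | expert asserts it) = {num / den:.3f}")
```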

9.
In many learning or inference tasks human behavior approximates that of a Bayesian ideal observer, suggesting that, at some level, cognition can be described as Bayesian inference. However, a number of findings have highlighted an intriguing mismatch between human behavior and standard assumptions about optimality: people often appear to make decisions based on just one or a few samples from the appropriate posterior probability distribution, rather than using the full distribution. Although sampling‐based approximations are a common way to implement Bayesian inference, the very limited numbers of samples often used by humans seem insufficient to approximate the required probability distributions very accurately. Here, we consider this discrepancy in the broader framework of statistical decision theory, and ask: if people are making decisions based on samples, and samples are costly, how many samples should people use to optimize their total expected or worst‐case reward over a large number of decisions? We find that under reasonable assumptions about the time costs of sampling, making many quick but locally suboptimal decisions based on very few samples may be the globally optimal strategy over long periods. These results help to reconcile a large body of work showing sampling‐based or probability matching behavior with the hypothesis that human cognition can be understood in Bayesian terms, and they suggest promising future directions for studies of resource‐constrained cognition.
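The core trade-off can be reproduced in a few lines: if each posterior sample costs time, the reward rate (probability correct per unit time) can peak at very few samples. The cost parameters below are invented; the paper's analysis is more general.

```python
# Reward rate of a two-alternative choice decided by the majority of
# k posterior samples, when each sample carries a time cost.
import numpy as np
from scipy import stats

p = 0.7            # posterior probability that option A is correct
action_cost = 1.0  # fixed time cost per decision
sample_cost = 0.1  # time cost per posterior sample (invented)

def p_correct(k):
    """P(majority of k samples picks A); ties are split at random."""
    wins = np.arange(k + 1)
    pmf = stats.binom.pmf(wins, k, p)
    return pmf[wins > k / 2].sum() + 0.5 * pmf[wins == k / 2].sum()

for k in [1, 2, 5, 10, 100]:
    rate = p_correct(k) / (action_cost + k * sample_cost)
    print(f"k={k:3d}  P(correct)={p_correct(k):.3f}  reward rate={rate:.3f}")
# With these costs, k=1 maximizes the reward rate despite lower accuracy.
```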

10.
Two key research issues in the field of causal learning are how people acquire causal knowledge when observing data that are presented sequentially, and the level of abstraction at which learning takes place. Does sequential causal learning solely involve the acquisition of specific cause‐effect links, or do learners also acquire knowledge about abstract causal constraints? Recent empirical studies have revealed that experience with one set of causal cues can dramatically alter subsequent learning and performance with entirely different cues, suggesting that learning involves abstract transfer, and such transfer effects involve sequential presentation of distinct sets of causal cues. It has been demonstrated that pre‐training (or even post‐training) can modulate classic causal learning phenomena such as forward and backward blocking. To account for these effects, we propose a Bayesian theory of sequential causal learning. The theory assumes that humans are able to consider and use several alternative causal generative models, each instantiating a different causal integration rule. Model selection is used to decide which integration rule to use in a given learning environment in order to infer causal knowledge from sequential data. Detailed computer simulations demonstrate that humans rely on the abstract characteristics of outcome variables (e.g., binary vs. continuous) to select a causal integration rule, which in turn alters causal learning in a variety of blocking and overshadowing paradigms. When the nature of the outcome variable is ambiguous, humans select the model that yields the best fit with the recent environment, and then apply it to subsequent learning tasks. Based on sequential patterns of cue‐outcome co‐occurrence, the theory can account for a range of phenomena in sequential causal learning, including various blocking effects, primacy effects in some experimental conditions, and apparently abstract transfer of causal knowledge.
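A toy version of the model-selection step is sketched below: two candidate integration rules (noisy-OR versus additive) are scored by log-likelihood on the same binary cue-outcome data, and the better-fitting rule would be selected. Parameters and data are invented, and the paper's Bayesian treatment is considerably richer.

```python
# Selecting a causal integration rule by fit to binary outcome data.
import numpy as np

rng = np.random.default_rng(1)
cues = rng.integers(2, size=(500, 2))        # two binary cues per trial
w = np.array([0.6, 0.4])                     # invented causal strengths
p_or = 1 - np.prod(1 - w * cues, axis=1)     # noisy-OR generative rule
outcome = rng.random(500) < p_or             # simulated binary outcomes

def loglik(p):
    p = np.clip(p, 1e-9, 1 - 1e-9)
    return np.sum(np.where(outcome, np.log(p), np.log(1 - p)))

p_add = np.clip(cues @ w, 0.0, 1.0)          # additive integration rule
print("noisy-OR log-likelihood:", round(float(loglik(p_or)), 1))
print("additive log-likelihood:", round(float(loglik(p_add)), 1))  # fits worse here
```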

11.
12.
How do food companies use package design to communicate healthfulness? The present study addresses this question by investigating the most typical implicit package design elements used by food companies for their health‐positioned food products. Using a content analysis on the packaging design of 12 food product categories across two countries (Denmark and the United States), our findings indicate that (a) implicit package design elements (colors, imagery, material, and shape) differ between health‐positioned and regular products, and (b) these differences are product category specific rather than universal. Our results contribute to knowledge on how package design is used as a health communication tool.

13.
The main purpose of this research is to identify the underlying cognitive structure of brand equity. Existing research on brand equity is used to identify 4 cognitive “components” of customer‐based brand equity. These are labeled as global brand attitude, strength of preference, brand knowledge, and brand heuristic. A conceptual framework of how these components (or subconstructs) are interrelated is proposed and empirically tested using data from 2 frequently purchased product categories. Covariance structure modeling is used as the analysis methodology. The results indicate that all the identified cognitive components are important determinants of customer‐based brand equity. Specifically, the brand heuristic component serves as an important mediator in 2 “cognitive chains” that link global brand attitude to brand knowledge and global brand attitude to strength of preference, respectively. The study findings have important implications for designing equity maintenance strategies for frequently purchased products.

14.
How do children use informant niceness, meanness, and expertise when choosing between informant claims and crediting informants with knowledge? In Experiment 1, preschoolers met two experts providing conflicting claims for which only one had relevant expertise. Five‐year‐olds endorsed the relevant expert's claim and credited him with knowledge more often than 3‐year‐olds. In Experiment 2, niceness/meanness information was added. Although children most strongly preferred the nice relevant expert, the children often chose the nice irrelevant expert when the relevant one was mean. In Experiment 3, a mean expert was paired with a nice non‐expert. Although this nice informant had no expertise, preschoolers continued to endorse his claims and credit him with knowledge. Also noteworthy, children in all three experiments seemed to struggle more to choose the relevant expert's claim than to credit him with knowledge. Together, these experiments demonstrate that niceness/meanness information can powerfully influence how children evaluate informants.

15.
According to Crain and Nakayama (1987), when forming complex yes/no questions, children do not make errors such as Is the boy who smoking is crazy? because they have innate knowledge of structure dependence and so will not move the auxiliary from the relative clause. However, simple recurrent networks are also able to avoid such errors, on the basis of surface distributional properties of the input (Lewis & Elman, 2001; Reali & Christiansen, 2005). Two new elicited production studies revealed that (a) children occasionally produce structure‐dependence errors and (b) the pattern of children's auxiliary‐doubling errors (Is the boy who is smoking is crazy?) suggests a sensitivity to surface co‐occurrence patterns in the input. This article concludes that current data do not provide any support for the claim that structure dependence is an innate constraint, and that it is possible that children form a structure‐dependent grammar on the basis of exposure to input that exhibits this property.

16.
Statistical inference (including interval estimation and model selection) is increasingly used in the analysis of behavioral data. As with many other fields, statistical approaches for these analyses traditionally use classical (i.e., frequentist) methods. Interpreting classical intervals and p‐values correctly can be burdensome and counterintuitive. By contrast, Bayesian methods treat data, parameters, and hypotheses as random quantities and use rules of conditional probability to produce direct probabilistic statements about models and parameters given observed study data. In this work, we reanalyze two data sets using Bayesian procedures. We precede the analyses with an overview of the Bayesian paradigm. The first study reanalyzes data from a recent study of controls, heavy smokers, and individuals with alcohol and/or cocaine substance use disorder, and focuses on Bayesian hypothesis testing for covariates and interval estimation for discounting rates among various substance use disorder profiles. The second example analyzes hypothetical environmental delay‐discounting data. This example focuses on using historical data to establish prior distributions for parameters while allowing subjective expert opinion to govern the prior distribution on model preference. We review the subjective nature of specifying Bayesian prior distributions but also review established methods to standardize the generation of priors and remove subjective influence while still taking advantage of the interpretive advantages of Bayesian analyses. We present the Bayesian approach as an alternative paradigm for statistical inference and discuss its strengths and weaknesses.
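As a minimal illustration of Bayesian estimation in this setting, the grid-approximation sketch below computes a posterior for a hyperbolic discounting rate k (V = A/(1 + kD)) under a weakly informative gamma prior. The indifference-point data and prior are hypothetical, not the article's.

```python
# Grid-approximation posterior for a hyperbolic discounting rate k.
import numpy as np
from scipy import stats

A = 100.0                                          # delayed reward amount
delays = np.array([1.0, 7.0, 30.0, 90.0, 180.0])   # delays in days
indiff = np.array([95.0, 80.0, 60.0, 40.0, 30.0])  # hypothetical indifference points

k_grid = np.linspace(0.001, 0.2, 400)              # candidate discounting rates
prior = stats.gamma.pdf(k_grid, a=2.0, scale=0.05) # weakly informative prior

pred = A / (1.0 + k_grid[:, None] * delays)        # model: V = A / (1 + k*D)
like = np.prod(stats.norm.pdf(indiff, pred, 5.0), axis=1)  # Gaussian residuals
post = prior * like
post /= post.sum()                                 # normalize on the grid

print("posterior mean k =", np.sum(k_grid * post))
```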

17.
People frequently miss contradictions with stored knowledge; for example, readers often fail to notice any problem with a reference to the Atlantic as the largest ocean. Critically, such effects occur even though participants later demonstrate knowing the Pacific is the largest ocean (the Moses Illusion) [Erickson, T. D., & Mattson, M. E. (1981). From words to meaning: A semantic illusion. Journal of Verbal Learning & Verbal Behavior, 20, 540–551]. We investigated whether such oversights disappear when erroneous references contradict information in one's expert domain, material which likely has been encountered many times and is particularly well-known. Biology and history graduate students monitored for errors while answering biology and history questions containing erroneous presuppositions (“In what US state were the forty-niners searching for oil?”). Expertise helped: participants were less susceptible to the illusion and less likely to later reproduce errors in their expert domain. However, expertise did not eliminate the illusion, even when errors were bolded and underlined, meaning that it was unlikely that people simply skipped over errors. The results support claims that people often use heuristics to judge truth, as opposed to directly retrieving information from memory, likely because such heuristics are adaptive and often lead to the correct answer. Even experts sometimes use such shortcuts, suggesting that overlearned and accessible knowledge does not guarantee retrieval of that information.

18.
Although contradictions with stored knowledge are common in daily life, people often fail to notice them. For example, in the Moses illusion, participants fail to notice errors in questions such as “How many animals of each kind did Moses take on the Ark?” despite later showing knowledge that the Biblical reference is to Noah, not Moses. We examined whether error prevalence affected participants' ability to detect distortions in questions, and whether this in turn had memorial consequences. Many of the errors were overlooked, but participants were better able to catch them when they were more common. More generally, the failure to detect errors had negative memorial consequences, increasing the likelihood that the errors were used to answer later general knowledge questions. Methodological implications of this finding are discussed, as it suggests that typical analyses likely underestimate the size of the Moses illusion. Overall, answering distorted questions can yield errors in the knowledge base; most importantly, prior knowledge does not protect against these negative memorial consequences.

19.
Two‐level structural equation models with mixed continuous and polytomous data and nonlinear structural equations at both the between‐groups and within‐groups levels are important but difficult to deal with. A Bayesian approach is developed for analysing this kind of model. A Markov chain Monte Carlo procedure based on the Gibbs sampler and the Metropolis‐Hastings algorithm is proposed for producing joint Bayesian estimates of the thresholds, structural parameters and latent variables at both levels. Standard errors and highest posterior density intervals are also computed. A procedure for computing the Bayes factor, based on the key idea of path sampling, is established for model comparison.
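For readers unfamiliar with the building blocks, the sketch below shows a bare random-walk Metropolis-Hastings step targeting a simple posterior for a mean; the paper combines such steps with Gibbs updates to sample the full joint posterior of thresholds, structural parameters, and latent variables. Data and tuning values are invented.

```python
# Minimal random-walk Metropolis-Hastings sampler for a normal mean.
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
data = rng.normal(1.5, 1.0, size=50)   # hypothetical observations

def log_post(mu):
    # N(0, 10) prior on mu, unit-variance normal likelihood.
    return stats.norm.logpdf(mu, 0, 10) + stats.norm.logpdf(data, mu, 1.0).sum()

mu, chain = 0.0, []
for _ in range(5000):
    prop = mu + rng.normal(0, 0.5)     # random-walk proposal
    if rng.random() < np.exp(min(0.0, log_post(prop) - log_post(mu))):
        mu = prop                      # accept; otherwise keep current value
    chain.append(mu)

burned = np.array(chain[1000:])        # discard burn-in
print("posterior mean:", burned.mean())
print("95% credible interval (percentile):", np.percentile(burned, [2.5, 97.5]))
```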

20.
Many current models of memory are specified with enough detail to make predictions about patterns of errors in memory tasks. However, there are often not enough empirical data available to test these predictions. We report two experiments that examine the relative frequency of fill‐in and infill errors. In immediate serial recall tasks, subjects sometimes incorrectly recall item N too soon, placing it in position N−1. The error of interest is which item is recalled after this initial mistake. A fill‐in error is the tendency to recall item N−1 next, whereas an infill error is the tendency to recall item N+1 next. Both experiments reveal more fill‐in than infill errors, not only overall but at each possible error location throughout the list. The overall ratio is approximately 2:1. We conclude that none of the currently existing models adequately accounts for fill‐in and infill errors.
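Scoring these errors is mechanical once defined; the sketch below classifies fill-in versus infill errors in a single recall protocol exactly as the abstract defines them. The lists and responses are hypothetical.

```python
# Classify fill-in vs. infill errors in one serial-recall protocol.

def classify(presented, recalled):
    """Count fill-in and infill errors following one-position-early recalls."""
    pos = {item: i for i, item in enumerate(presented)}
    fill_in = infill = 0
    for i in range(len(recalled) - 1):
        item = recalled[i]
        # Was item N recalled exactly one position too soon?
        if item in pos and pos[item] == i + 1:
            nxt = recalled[i + 1]
            if nxt in pos:
                if pos[nxt] == pos[item] - 1:
                    fill_in += 1   # displaced item N-1 recalled next
                elif pos[nxt] == pos[item] + 1:
                    infill += 1    # list order simply continued with N+1
    return fill_in, infill

presented = list("ABCDEF")
print(classify(presented, list("ACBDEF")))  # C early, then B -> (1, 0)
print(classify(presented, list("ACDBEF")))  # C early, then D -> (0, 1)
```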
