Similar Literature
20 similar records found.
1.
2.
Latent semantic analysis (LSA) is a statistical technique for representing word meaning that has been widely used for making semantic similarity judgments between words, sentences, and documents. In order to perform an LSA analysis, an LSA space is created in a two-stage procedure, involving the construction of a word frequency matrix and the dimensionality reduction of that matrix through singular value decomposition (SVD). This article presents LANSE, an SVD algorithm specifically designed for LSA, which allows extremely large matrices to be processed using off-the-shelf computer hardware.
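As an illustration of the two-stage procedure described above (not the LANSE algorithm itself), a minimal Python sketch using NumPy's dense SVD on a toy term-document count matrix; the matrix values and the number of retained dimensions k are invented:

import numpy as np

# Toy term-document frequency matrix: rows = terms, columns = documents.
A = np.array([
    [2, 0, 1, 0],
    [1, 1, 0, 0],
    [0, 2, 0, 1],
    [0, 0, 1, 2],
], dtype=float)

# Stage 1 in practice also applies a weighting scheme (e.g., log-entropy); omitted here.
# Stage 2: dimensionality reduction via truncated SVD.
U, s, Vt = np.linalg.svd(A, full_matrices=False)
k = 2                                          # number of retained latent dimensions (assumed)
doc_vectors = (np.diag(s[:k]) @ Vt[:k, :]).T   # documents in the k-dimensional LSA space

# Semantic similarity between documents 0 and 2 as the cosine of their LSA vectors.
v0, v2 = doc_vectors[0], doc_vectors[2]
cos_sim = v0 @ v2 / (np.linalg.norm(v0) * np.linalg.norm(v2))
print(round(float(cos_sim), 3))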

3.
The research reported here focuses on the early acquisition of event structure in German. Based on longitudinal studies from 5 normally developing (ND) and 6 language-impaired (LI) children, a model of "event structural bootstrapping" is presented that spells out how ND children log into the verb lexicon. They project a target-consistent event tree, depicting the head-of-event of transitions. Young LI children, failing to employ this bootstrapping strategy, resort to radically underspecified event representations. The results from a truth-value judgment experiment with 16 ND and 16 LI children showed that ND children perform correctly on transitional verbs, while LI children perform at chance level on the same tasks. These findings are accounted for by the model of event structural bootstrapping to the extent that LI children lack an explicit representation of the head-of-event.

4.
Data in psychology are often collected using Likert‐type scales, and it has been shown that factor analysis of Likert‐type data is better performed on the polychoric correlation matrix than on the product‐moment covariance matrix, especially when the distributions of the observed variables are skewed. In theory, factor analysis of the polychoric correlation matrix is best conducted using generalized least squares with an asymptotically correct weight matrix (AGLS). However, simulation studies showed that both least squares (LS) and diagonally weighted least squares (DWLS) perform better than AGLS, and thus LS or DWLS is routinely used in practice. In either LS or DWLS, the associations among the polychoric correlation coefficients are completely ignored. To mend such a gap between statistical theory and empirical work, this paper proposes new methods, called ridge GLS, for factor analysis of ordinal data. Monte Carlo results show that, for a wide range of sample sizes, ridge GLS methods yield uniformly more accurate parameter estimates than existing methods (LS, DWLS, AGLS). A real‐data example indicates that estimates by ridge GLS are 9–20% more efficient than those by existing methods. Rescaled and adjusted test statistics as well as sandwich‐type standard errors following the ridge GLS methods also perform reasonably well.
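The exact estimator is not reproduced here, but as a hedged sketch of the general idea, a ridge-weighted GLS discrepancy function for the vector r of sample polychoric correlations can be written in LaTeX as

F_a(\theta) = \bigl(r - \rho(\theta)\bigr)^{\top}\bigl(\hat{\Gamma} + a I\bigr)^{-1}\bigl(r - \rho(\theta)\bigr),

where \rho(\theta) contains the model-implied correlations, \hat{\Gamma} is the estimated asymptotic covariance matrix of the polychoric correlations used by AGLS, and a \ge 0 is a ridge constant (the precise form used in the paper may differ). Setting a = 0 recovers AGLS, while a large a pushes the weight matrix toward a scaled identity and hence toward the unweighted LS criterion.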

5.
Psychologists often use special computer programs to perform meta-analysis. Until recently, this had been necessary because standard statistical packages did not provide procedures for such analysis. This paper introduces linear mixed models as a framework for meta-analysis in psychological research, using a popular general-purpose statistical package, SAS. The approach is illustrated with three examples using SAS PROC MIXED.
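The paper's examples use SAS PROC MIXED; purely as a hedged, language-neutral stand-in for the random-effects model that a mixed-model meta-analysis fits, here is a DerSimonian-Laird moment estimator sketched in Python (the study effect sizes and variances are invented):

import numpy as np

# Invented example data: study effect sizes and their sampling variances.
effects = np.array([0.30, 0.12, 0.45, 0.20, 0.33])
variances = np.array([0.020, 0.015, 0.040, 0.010, 0.025])

# Fixed-effect weights and the Q heterogeneity statistic.
w = 1.0 / variances
fixed_mean = np.sum(w * effects) / np.sum(w)
Q = np.sum(w * (effects - fixed_mean) ** 2)

# DerSimonian-Laird estimate of the between-study variance tau^2.
df = len(effects) - 1
c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
tau2 = max(0.0, (Q - df) / c)

# Random-effects pooled estimate and its standard error.
w_re = 1.0 / (variances + tau2)
pooled = np.sum(w_re * effects) / np.sum(w_re)
se = np.sqrt(1.0 / np.sum(w_re))
print(round(pooled, 3), round(se, 3), round(tau2, 4))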

6.
This paper explores some of the implications the statistical process control (SPC) methodology described by Pfadt and Wheeler (1995) may have for analyzing more complex performances and contingencies in human services or health care environments at an organizational level. Service delivery usually occurs in an organizational system that is characterized by functional structures, high levels of professionalism, subunit optimization, and organizational suboptimization. By providing a standard set of criteria and decision rules, SPC may provide a common interface for data-based decision making, may bring decision making under the control of the contingencies that are established by these rules rather than the immediate contingencies of data fluctuation, and may attenuate escalation of failing treatments. SPC is culturally consistent with behavior analysis, sharing an emphasis on data-based decisions, measurement over time, and graphic analysis of data, as well as a systemic view of organizations.
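As a small, hedged illustration of the sort of decision rule SPC supplies (not taken from Pfadt and Wheeler), a Python sketch of an individuals (XmR) control chart that flags observations falling outside limits computed from the average moving range; the data are invented:

import numpy as np

# Invented weekly incident counts for a human-services program.
x = np.array([12, 10, 14, 11, 13, 12, 22, 11, 10, 13], dtype=float)

center = x.mean()
mr_bar = np.mean(np.abs(np.diff(x)))      # average moving range
ucl = center + 2.66 * mr_bar              # standard XmR chart limits
lcl = center - 2.66 * mr_bar

out_of_control = np.where((x > ucl) | (x < lcl))[0]
print(f"limits: [{lcl:.1f}, {ucl:.1f}]  signals at indices: {out_of_control}")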

7.
Tryon WW, Lewis C. Psychological Methods, 2008, 13(3): 272-277
Evidence of group matching frequently takes the form of a nonsignificant test of statistical difference. Theoretical hypotheses of no difference are also tested in this way. These practices are flawed in that null hypothesis statistical testing provides evidence against the null hypothesis, and failing to reject H0 is not evidence supportive of it. Tests of statistical equivalence are needed. This article corrects the inferential confidence interval (ICI) reduction factor introduced by W. W. Tryon (2001) and uses it to extend his discussion of statistical equivalence. This method is shown to be algebraically equivalent to D. J. Schuirmann's (1987) use of 2 one-sided t tests, a highly regarded and accepted method of testing for statistical equivalence. The ICI method provides an intuitive graphic method for inferring statistical difference as well as equivalence. Trivial difference occurs when a test of difference and a test of equivalence are both passed. Statistical indeterminacy results when both tests are failed. Hybrid confidence intervals are introduced that impose ICI limits on standard confidence intervals. These intervals are recommended as replacements for error bars because they facilitate inferences.
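Schuirmann's two one-sided tests (TOST) procedure mentioned above can be sketched in Python as follows (independent groups, pooled variance; the equivalence bounds and data are invented):

import numpy as np
from scipy import stats

def tost_independent(x, y, low, high):
    """Two one-sided t tests for equivalence of two independent means.

    low/high are the equivalence bounds on the raw mean difference.
    Equivalence is concluded when both one-sided p-values fall below alpha.
    """
    nx, ny = len(x), len(y)
    diff = np.mean(x) - np.mean(y)
    sp2 = ((nx - 1) * np.var(x, ddof=1) + (ny - 1) * np.var(y, ddof=1)) / (nx + ny - 2)
    se = np.sqrt(sp2 * (1 / nx + 1 / ny))
    df = nx + ny - 2
    t_lower = (diff - low) / se     # H0: diff <= low   vs  H1: diff > low
    t_upper = (diff - high) / se    # H0: diff >= high  vs  H1: diff < high
    p_lower = stats.t.sf(t_lower, df)
    p_upper = stats.t.cdf(t_upper, df)
    return diff, max(p_lower, p_upper)

rng = np.random.default_rng(0)
a = rng.normal(100, 15, 40)
b = rng.normal(101, 15, 40)
print(tost_independent(a, b, low=-5, high=5))   # equivalence if the returned p-value < .05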

8.
Analyses of training or competition environments traditionally tend to adopt a product-oriented perspective through the recording and statistical analysis of performance outcomes. Consequently, most investigations continue to ignore the processes underpinning functional achievement of outcomes, thereby failing to examine contextual effects of how and why performance evolves. This critical research note highlights the need for sport psychologists, pedagogues, and other applied scientists to consider a range of alternative methodological designs for research to monitor and explain processes inherent to performance preparation. These process-oriented designs require the continuous flow and exchange of performance data between training and competition, mediated by practitioners’ experiential knowledge. We endorse a triangulation of information, defined as a ‘competition-coach-training’ triad, which needs to be better acknowledged. Redirecting the focus of practice and research away from a product-oriented perspective (driven by broad statistical data patterns) towards a process-oriented one (examined through in-depth contextual analyses) may re-calibrate the theory-practice alignment.

9.
10.
Many experiments in psychology yield both reaction time and accuracy data. However, no off-the-shelf methods yet exist for the statistical analysis of such data. One particularly successful model has been the diffusion process, but using it is difficult in practice because of numerical, statistical, and software problems. We present a general method for performing diffusion model analyses on experimental data. By implementing design matrices, a wide range of across-condition restrictions can be imposed on model parameters, in a flexible way. It becomes possible to fit models with parameters regressed onto predictors. Moreover, data analytical tools are discussed that can be used to handle various types of outliers and contaminants. We briefly present an easy-to-use software tool that helps perform diffusion model analyses.
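Not the authors' fitting method, but a hedged Python sketch of the underlying Wiener diffusion process itself, simulating single-trial response times and choices by Euler approximation; all parameter values are invented:

import numpy as np

def simulate_diffusion(rng, v=0.25, a=1.0, z=0.5, ter=0.3, dt=0.001, s=1.0):
    """Simulate one trial of a Wiener diffusion process.

    v: drift rate, a: boundary separation, z: relative start point (0-1),
    ter: non-decision time, s: within-trial noise. Returns (RT, hit_upper).
    """
    x = z * a
    t = 0.0
    while 0.0 < x < a:
        x += v * dt + s * np.sqrt(dt) * rng.standard_normal()
        t += dt
    return ter + t, x >= a   # upper boundary coded as the "correct" response (assumption)

rng = np.random.default_rng(0)
trials = [simulate_diffusion(rng) for _ in range(200)]
rts = np.array([rt for rt, _ in trials])
acc = np.mean([hit for _, hit in trials])
print(f"mean RT = {rts.mean():.3f}s, accuracy = {acc:.2f}")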

11.
Commonality analysis is a procedure for decomposing R2 in multiple regression analyses into the percent of variance in the dependent variable associated with each independent variable uniquely, and the proportion of explained variance associated with the common effects of predictors. Commonality analysis thus sheds additional light on the magnitude of an obtained multivariate relationship by identifying the relative importance of all independent variables, findings which can be of theoretical and practical significance. In this paper we offer a brief explication of commonality analysis; a step-by-step discussion of how communication researchers may perform commonality analyses using output from computer-assisted statistical analysis programs like SPSS; and an extended example illustrating a commonality analysis.
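For the two-predictor case, the decomposition described above reduces to comparing R-squared values from the full and reduced regressions; a minimal Python sketch with invented data (the variable names x1, x2, and y are assumptions):

import numpy as np

def r_squared(y, X):
    """R^2 from an OLS fit of y on X (intercept added)."""
    X1 = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
    resid = y - X1 @ beta
    return 1 - resid.var() / y.var()

rng = np.random.default_rng(1)
x1 = rng.normal(size=200)
x2 = 0.6 * x1 + rng.normal(size=200)          # correlated predictors
y = 0.5 * x1 + 0.3 * x2 + rng.normal(size=200)

r2_full = r_squared(y, np.column_stack([x1, x2]))
r2_x1 = r_squared(y, x1.reshape(-1, 1))
r2_x2 = r_squared(y, x2.reshape(-1, 1))

unique_x1 = r2_full - r2_x2                   # variance explained only by x1
unique_x2 = r2_full - r2_x1                   # variance explained only by x2
common = r2_full - unique_x1 - unique_x2      # shared (common) explained variance
print(round(unique_x1, 3), round(unique_x2, 3), round(common, 3))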

12.
This paper describes SHAPA Version 2.01, an interactive program for performing verbal protocol analysis. Verbal protocol analysis is a time-consuming activity that has hitherto typically been done by hand, whereas SHAPA represents an attempt to build a software environment to aid (but not replace) researchers in this activity. SHAPA allows researchers to develop an encoding vocabulary, to apply it to raw verbal protocol files, and to perform various types of data aggregation and data reduction. When performing verbal protocol analysis, researchers often try out different possible coding schemes before settling on the most effective one. SHAPA has been designed to support quick alterations to an encoding vocabulary and to support the rigorous statistical analysis of content and patterns (sequential data analysis) in protocol data. It is intended as an exploratory, as well as analytical, tool and has been designed to be easy for novices to learn and use, yet fluid and powerful for experts. A prototype version of SHAPA has already been used by a sample of researchers, and their experiences and requests have guided the programming of the present much more powerful version.
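SHAPA itself is not reproduced here, but the core workflow it supports (applying an encoding vocabulary to raw protocol lines, then aggregating code frequencies and first-order transitions for sequential analysis) can be sketched in Python; the vocabulary and utterances are invented:

from collections import Counter

# Invented encoding vocabulary: keyword -> code.
vocabulary = {"check": "MONITOR", "adjust": "CONTROL", "why": "DIAGNOSE"}

protocol = [
    "I check the gauge first",
    "then I adjust the valve",
    "I check it again",
    "not sure why it is dropping",
]

def encode(utterance):
    for keyword, code in vocabulary.items():
        if keyword in utterance.lower():
            return code
    return "UNCODED"

codes = [encode(u) for u in protocol]
frequencies = Counter(codes)                      # data aggregation
transitions = Counter(zip(codes, codes[1:]))      # simple sequential (transition) analysis
print(frequencies)
print(transitions)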

13.
There are many objections to statistical discrimination in general and racial profiling in particular. One objection appeals to the idea that people have a right to be treated as individuals. Statistical discrimination violates this right because, presumably, it involves treating people simply on the basis of statistical facts about groups to which they belong while ignoring non-statistical evidence about them. While there is something to this objection (there are objectionable ways of treating others that seem aptly described as failing to treat them as individuals), it needs to be articulated carefully. First, most people accept that many forms of statistical discrimination are morally unproblematic, let alone morally justified all things considered. Second, even treating people on the basis of putative non-statistical evidence relies on generalizations. Once we construe treating someone as an individual in a way that respects this fact, it becomes apparent: (1) that statistical discrimination is compatible with treating people as individuals, and (2) that one may fail to treat people as individuals even without engaging in statistical discrimination. Finally, there are situations involving the expression of messages of inclusion where we think it is good, morally speaking, that we are not treated as individuals.

14.
Middle- and lower-class children who had been treated by E in a warm or aloof manner were given a discrimination learning task under one of six conditions forming a 3 × 2 design: three reinforcement types (Verbal-intoned, Verbal-nonintoned, or Symbolic) and reinforcement for correct or incorrect responses. In accordance with Zigler's valence theory of social reinforcement, lower-class girls and middle-class children of both sexes tended to perform better when they had previously received warm treatment. No significant differences among the three types of reinforcers occurred in either SES group, thus failing to support the implications of several theories about the relative effectiveness of social and nonsocial reinforcers and tonality factors related to SES.

15.
Gender and self-esteem.   Total citations: 12 (self-citations: 0; citations by others: 12)
Where does self-esteem (SE) come from? Three experiments explored the idea that men's and women's SE arise, in part, from different sources. It was hypothesized that SE is related to successfully measuring up to culturally mandated, gender-appropriate norms--separation and independence for men and connection and interdependence for women. Results from Study 1 suggested that men's SE can be linked to an individuation process in which one's personal distinguishing achievements are emphasized. Results from Study 2 suggested that women's SE can be linked to a process in which connections and attachments to important others are emphasized. Study 3 demonstrated that failing to perform well on gender-appropriate tasks engendered a defensive, compensatory reaction, but only in subjects with high SE. These findings are discussed with regard to their implications for the structure and dynamics of the self.

16.
The Regulatory Focus Theory maintains that people may focus on achieving positive outcomes (have a promotion focus) or avoiding negative ones (have a prevention focus) when they pursue their goals. Under a promotion focus, people would formulate as many strategies as possible to attain their goal, and hence be fluent in idea generation when they perform a creative task. In contrast, people under a prevention focus would seek to avoid the negative consequences of failing to attain a valued goal, and persist even when the likelihood of success in a creativity situation is small. We tested these predictions in a study, where regulatory focus was measured as an individual differences variable (Part 1) and induced by a goal framing manipulation (Part 2). The results supported our predictions, and suggested that creative accomplishment requires flexible alternation of regulatory foci at the different stages of creative undertakings.

17.
Some user-oriented compact data analysis programs are described. One program is useful for transforming and reformatting data, and the others perform analysis of variance and multiple regression. Along with other programs not described here, these form an adequate statistical package without sacrificing ease of use or computational power.

18.
Across three experiments, people escalated commitment more frequently to a failing prosocial initiative (i.e., an initiative that had the primary aim of improving the outcomes of others in need) than they did to a failing egoistic initiative (i.e., an initiative that had the primary aim of improving the outcomes of the decision-maker). A test of mediation (Study 1b) and a test of moderation (Study 2) each provided evidence that a desire for a positive moral self-regard underlies people’s tendency to escalate commitment more frequently to failing prosocial initiatives than to failing egoistic initiatives. We discuss the implications of these findings for the resource-allocation decisions that people and organizations face when undertaking initiatives with prosocial aims.

19.
To study the physiologic mechanism of the brain during different motor imagery (MI) tasks, the authors employed a method of brain-network modeling based on time–frequency cross mutual information obtained from 4-class (left hand, right hand, feet, and tongue) MI tasks recorded as brain–computer interface (BCI) electroencephalography data. The authors explored the brain network revealed by these MI tasks using statistical analysis and the analysis of topologic characteristics, and observed significant differences in the reaction level, reaction time, and activated target during 4-class MI tasks. There was a great difference in the reaction level between the execution and resting states during different tasks: the reaction level of the left-hand MI task was the greatest, followed by that of the right-hand, feet, and tongue MI tasks. The reaction time required to perform the tasks also differed: during the left-hand and right-hand MI tasks, the brain networks of subjects reacted promptly and strongly, but there was a delay during the feet and tongue MI task. Statistical analysis and the analysis of network topology revealed the target regions of the brain network during different MI processes. In conclusion, our findings suggest a new way to explain the neural mechanism behind MI.
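As a hedged, simplified stand-in for the time-frequency cross mutual information measure used in the study, a Python sketch that estimates pairwise mutual information between channels from binned amplitude histograms and assembles the values into a network adjacency matrix; the signals are synthetic:

import numpy as np

def mutual_information(x, y, bins=16):
    """Histogram-based mutual information estimate (in nats)."""
    joint, _, _ = np.histogram2d(x, y, bins=bins)
    pxy = joint / joint.sum()
    px = pxy.sum(axis=1, keepdims=True)
    py = pxy.sum(axis=0, keepdims=True)
    nz = pxy > 0
    return float(np.sum(pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])))

# Synthetic "EEG": 4 channels, 2000 samples, channels 0 and 1 share a common source.
rng = np.random.default_rng(2)
n = 2000
base = rng.normal(size=n)
channels = np.vstack([
    base + 0.3 * rng.normal(size=n),
    base + 0.3 * rng.normal(size=n),
    rng.normal(size=n),
    rng.normal(size=n),
])

k = channels.shape[0]
adjacency = np.zeros((k, k))
for i in range(k):
    for j in range(i + 1, k):
        adjacency[i, j] = adjacency[j, i] = mutual_information(channels[i], channels[j])
print(np.round(adjacency, 3))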

20.
Deborah G. Mayo. Synthese, 1983, 57(3): 297-340
Theories of statistical testing may be seen as attempts to provide systematic means for evaluating scientific conjectures on the basis of incomplete or inaccurate observational data. The Neyman-Pearson Theory of Testing (NPT) has purported to provide an objective means for testing statistical hypotheses corresponding to scientific claims. Despite their widespread use in science, methods of NPT have themselves been accused of failing to be objective; and the purported objectivity of scientific claims based upon NPT has been called into question. The purpose of this paper is first to clarify this question by examining the conceptions of (I) the function served by NPT in science, and (II) the requirements of an objective theory of statistics upon which attacks on NPT's objectivity are based. Our grounds for rejecting these conceptions suggest altered conceptions of (I) and (II) that might avoid such attacks. Second, we propose a reformulation of NPT, denoted by NPT*, based on these altered conceptions, and argue that it provides an objective theory of statistics. The crux of our argument is that by being able to objectively control error frequencies NPT* is able to objectively evaluate what has or has not been learned from the result of a statistical test.
