Full-text access type
Paid full text | 216 articles |
Free | 11 articles |
Free (domestic) | 8 articles |
Publication year | Articles |
2023 | 3 |
2022 | 6 |
2021 | 6 |
2020 | 5 |
2019 | 4 |
2018 | 8 |
2017 | 6 |
2016 | 8 |
2015 | 5 |
2014 | 3 |
2013 | 15 |
2012 | 3 |
2011 | 5 |
2010 | 3 |
2009 | 7 |
2008 | 4 |
2007 | 5 |
2006 | 7 |
2005 | 8 |
2004 | 4 |
2003 | 1 |
2002 | 7 |
2001 | 3 |
2000 | 6 |
1998 | 4 |
1997 | 3 |
1996 | 5 |
1995 | 3 |
1994 | 5 |
1993 | 4 |
1992 | 8 |
1991 | 7 |
1990 | 6 |
1989 | 9 |
1988 | 3 |
1987 | 7 |
1986 | 6 |
1985 | 7 |
1984 | 6 |
1983 | 5 |
1982 | 1 |
1981 | 2 |
1980 | 1 |
1979 | 4 |
1978 | 6 |
1977 | 1 |
A total of 235 results were retrieved.
51.
We address several issues that are raised by Bentler and Tanaka's [1983] discussion of Rubin and Thayer [1982]. Our conclusions are: standard methods do not completely monitor the possible existence of multiple local maxima; summarizing inferential precision by the standard output based on second derivatives of the log likelihood at a maximum can be inappropriate, even if there exists a unique local maximum; and EM and LISREL can be viewed as complementary, albeit not entirely adequate, tools for factor analysis. This work was partially supported by the Program Statistics Research Project at Educational Testing Service.
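For context, the "standard output based on second derivatives" referred to above is the usual observed-information approximation to the sampling covariance of the estimates; a minimal statement of that approximation (general background, not a formula from the paper) is:

```latex
\widehat{\operatorname{Cov}}(\hat{\theta}) \;\approx\;
\Bigl[\, -\,\frac{\partial^{2} \ell(\theta)}{\partial \theta \,\partial \theta^{\top}}
\Big|_{\theta=\hat{\theta}} \Bigr]^{-1},
\qquad
\hat{\theta} = \arg\max_{\theta}\, \ell(\theta),
```

which is justified only locally around a single maximum and therefore says nothing about other local maxima of the log likelihood.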
52.
Simple procedures are described for obtaining maximum likelihood estimates of the location and uncertainty parameters of the Hefner model. This model is a probabilistic, multidimensional scaling model, which assigns a multivariate normal distribution to each stimulus point. It is shown that for such a model, standard nonmetric and metric algorithms are not appropriate. A procedure is also described for constructing incomplete data sets, by taking into consideration the degree of familiarity the subject has for each stimulus. Maximum likelihood estimates are developed both for complete and incomplete data sets. This research was supported by National Science Foundation Grant No. SOC76-20517. The first author would especially like to express his gratitude to the Netherlands Institute for Advanced Study for its very substantial help with this research.
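As background (a sketch of the usual Hefner-model assumptions, not equations quoted from the paper), each stimulus is represented by a normally distributed point with its own uncertainty parameter:

```latex
\mathbf{x}_{i} \sim N\!\left(\boldsymbol{\mu}_{i},\, \sigma_{i}^{2}\mathbf{I}_{r}\right),
\qquad i = 1,\dots,n,
```

where the location parameter μ_i places stimulus i in r dimensions and σ_i² is its uncertainty parameter. Squared inter-stimulus distances are then random (scaled noncentral chi-square) rather than fixed quantities, which is why standard metric and nonmetric scaling algorithms, which treat the distances as error-free, are not appropriate here.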
53.
Using the theory of pseudo maximum likelihood estimation, the asymptotic covariance matrix of maximum likelihood estimates for mean and covariance structure models is given for the case where the variables are not multivariate normal. This asymptotic covariance matrix is consistently estimated without the computation of the empirical fourth-order moment matrix. Using quasi-maximum likelihood theory, a Hausman misspecification test is developed. This test is sensitive to misspecification caused by errors that are correlated with the independent variables. This misspecification cannot be detected by the test statistics currently used in covariance structure analysis. For helpful comments on a previous draft of the paper we are indebted to Kenneth A. Bollen, Ulrich L. Küsters, Michael E. Sobel and the anonymous reviewers of Psychometrika. For partial research support, the first author wishes to thank the Department of Sociology at the University of Arizona, where he was a visiting professor during the fall semester 1987.
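For orientation, a Hausman-type misspecification test contrasts two estimators that agree under correct specification but diverge under the misspecification of interest; its generic form (standard background, not the paper's specific statistic) is:

```latex
H \;=\; \left(\hat{\theta}_{1}-\hat{\theta}_{2}\right)^{\!\top}
\left[\widehat{\operatorname{Var}}\!\left(\hat{\theta}_{1}\right)-\widehat{\operatorname{Var}}\!\left(\hat{\theta}_{2}\right)\right]^{-}
\left(\hat{\theta}_{1}-\hat{\theta}_{2}\right)
\;\overset{a}{\sim}\; \chi^{2}_{q},
```

where θ̂₁ remains consistent under the alternative, θ̂₂ is efficient under the null of correct specification, [·]⁻ denotes a generalized inverse, and q is the rank of the variance difference.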
54.
Maria T. Barendse & Yves Rosseel, The British Journal of Mathematical and Statistical Psychology, 2023, 76(2), 327-352
Pairwise maximum likelihood (PML) estimation is a promising method for multilevel models with discrete responses. Multilevel models take into account that units within a cluster tend to be more alike than units from different clusters. The pairwise likelihood is then obtained as the product of bivariate likelihoods for all within-cluster pairs of units and items. In this study, we investigate the PML estimation method with computationally intensive multilevel random intercept and random slope structural equation models (SEM) in discrete data. In pursuing this, we first reconsider the general 'wide format' (WF) approach for SEM models and then extend the WF approach with random slopes. In a small simulation study, we determine the accuracy and efficiency of the PML estimation method by varying the sample size (250, 500, 1000, 2000), response scale (two-point, four-point), and data-generating model (mediation model with three random slopes; factor model with one and two random slopes). Overall, the results show that the PML estimation method is capable of estimating computationally intensive random intercept and random slope multilevel models in the SEM framework with discrete data and many (six or more) latent variables with satisfactory accuracy and efficiency. However, the condition with 250 clusters combined with a two-point response scale shows more bias.
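As background on the estimator itself (the standard pairwise-likelihood definition, not a formula reproduced from the article), the pairwise log-likelihood replaces the full joint likelihood of all responses in a cluster by a sum of bivariate terms:

```latex
p\ell(\theta)\;=\;\sum_{c}\;\sum_{(j,k)\in \mathcal{P}_{c}} \log L\!\left(y_{cj},\, y_{ck};\, \theta\right),
```

where 𝒫_c collects all within-cluster pairs of units and items in cluster c. Maximizing pℓ(θ) avoids the high-dimensional integrals required by full ML when there are many latent variables, trading some statistical efficiency for computational feasibility.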
55.
Traditional methods for deriving property-based representations of concepts from text have focused on either extracting only a subset of possible relation types, such as hyponymy/hypernymy (e.g., car is-a vehicle) or meronymy/metonymy (e.g., car has wheels), or unspecified relations (e.g., car-petrol). We propose a system for the challenging task of automatic, large-scale acquisition of unconstrained, human-like property norms from large text corpora, and discuss the theoretical implications of such a system. We employ syntactic, semantic, and encyclopedic information to guide our extraction, yielding concept-relation-feature triples (e.g., car be fast, car require petrol, car cause pollution), which approximate property-based conceptual representations. Our novel method extracts candidate triples from parsed corpora (Wikipedia and the British National Corpus) using syntactically and grammatically motivated rules, then reweights triples with a linear combination of their frequency and four statistical metrics. We assess our system output in three ways: lexical comparison with norms derived from human-generated property norm data, direct evaluation by four human judges, and a semantic distance comparison with both WordNet similarity data and human-judged concept similarity ratings. Our system offers a viable and performant method of plausible triple extraction: our lexical comparison shows comparable performance to the current state-of-the-art, while subsequent evaluations exhibit the human-like character of our generated properties.
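To make the reweighting step concrete, here is a minimal sketch of ranking candidate triples by a linear combination of raw frequency and one simple association metric (PMI). The function names, the choice of PMI, and the weights are illustrative assumptions, not the four metrics or the weighting actually used by the authors.

```python
from collections import Counter
from math import log

def pmi(pair_count: int, concept_count: int, feature_count: int, total: int) -> float:
    """Pointwise mutual information of a concept-feature co-occurrence."""
    return log((pair_count * total) / (concept_count * feature_count))

def score_triples(triples, w_freq=0.5, w_pmi=0.5):
    """Rank (concept, relation, feature) triples by a weighted sum of frequency and PMI."""
    freq = Counter(triples)
    concept_count = Counter(c for c, _, _ in triples)
    feature_count = Counter(f for _, _, f in triples)
    total = len(triples)
    scored = []
    for (c, r, f), n in freq.items():
        s = w_freq * n + w_pmi * pmi(n, concept_count[c], feature_count[f], total)
        scored.append(((c, r, f), s))
    return sorted(scored, key=lambda t: t[1], reverse=True)

# Toy usage with made-up parsed triples:
candidates = ([("car", "require", "petrol")] * 3
              + [("car", "be", "fast")] * 2
              + [("car", "cause", "pollution")])
for triple, s in score_triples(candidates):
    print(triple, round(s, 2))
```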
56.
Chloe A. Hill, Psychology & Health, 2013, 28(1), 41-56
A condom use promotion leaflet was designed for use with older teenagers in schools. The text targeted a series of cognitive and behavioural antecedents of condom use identified in the literature. Given previous evidence that motivational incentives can enhance the effectiveness of health promotion leaflets, the leaflet was presented in conjunction with a quiz and prize draw. Students were randomly assigned to either the intervention condition or a (no leaflet or incentive) control condition. Measures were taken immediately pre-intervention and 4 weeks later from 404 students. The 20-min intervention successfully promoted six of the eight measured cognitions, namely (1) attitude towards using condoms with a new partner, (2) attitude towards using condoms with a steady partner, (3) normative beliefs in relation to preparatory actions, (4) self-efficacy in relation to preparatory actions, (5) self-efficacy in relation to condom use, and (6) intention to use condoms, as well as three measured preparatory actions, that is, purchasing condoms, carrying condoms and discussing condom use. The intervention did not increase condom use with steady or new partners, but power to test intervention impact on condom use was curtailed.
57.
It is very important to choose appropriate variables to be analyzed in multivariate analysis when there are many observed variables, such as those in a questionnaire. What is actually done in scale construction with factor analysis is nothing but variable selection.

In this paper, we take several goodness-of-fit statistics as measures of variable selection and develop backward elimination and forward selection procedures in exploratory factor analysis. Once factor analysis is done for a certain number p of observed variables (the p-variable model is labeled the current model), simple formulas for predicted fit measures such as chi-square, GFI, CFI, IFI and RMSEA, developed in the field of structural equation modeling, are provided for all models obtained by adding an external variable (so that the number of variables is p + 1) and for those obtained by deleting an internal variable (so that the number is p - 1), provided that the number of factors is held constant.

A program, SEFA (Stepwise variable selection in Exploratory Factor Analysis), is developed to actually obtain a list of the fit measures for all such models. The list is very useful in determining which variable should be dropped from the current model to improve its fit. It is also useful in finding a suitable variable that may be added to the current model. A model with more appropriate variables generally yields more stable inference.

The criterion traditionally used for variable selection is the magnitude of the communalities. This criterion gives a different choice of variables and does not improve the fit of the model in most cases.

The URL of the program SEFA is http://koko15.hus.osaka-u.ac.jp/~harada/factor/stepwise/.
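For orientation, two of the fit measures the procedure predicts are commonly defined (in one widely used parameterization; these are standard SEM formulas, not expressions taken from this paper) as:

```latex
\mathrm{RMSEA} = \sqrt{\max\!\left( \frac{\chi^{2}_{M} - df_{M}}{df_{M}\,(N-1)},\; 0 \right)},
\qquad
\mathrm{CFI} = 1 - \frac{\max\!\left(\chi^{2}_{M} - df_{M},\, 0\right)}
{\max\!\left(\chi^{2}_{B} - df_{B},\; \chi^{2}_{M} - df_{M},\; 0\right)},
```

where M denotes the fitted model, B the baseline (independence) model, and N the sample size. The stepwise procedure provides simple predictive formulas for such measures for every one-variable addition or deletion, holding the number of factors constant.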
58.
Composite links and exploded likelihoods are powerful yet simple tools for specifying a wide range of latent variable models. Applications considered include survival or duration models, models for rankings, small area estimation with census information, models for ordinal responses, item response models with guessing, randomized response models, unfolding models, latent class models with random effects, multilevel latent class models, models with log-normal latent variables, and zero-inflated Poisson models with random effects. Some of the ideas are illustrated by estimating an unfolding model for attitudes to female work participation. We wish to thank The Research Council of Norway for a grant supporting our collaboration.
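As a rough sketch of the first ingredient (the general composite-link formulation from the GLM literature, not an equation quoted from the abstract), a composite link model passes the linear predictors through an inverse link and then through a known composition matrix before matching the observed means:

```latex
\boldsymbol{\mu} \;=\; \mathbf{C}\, g^{-1}(\boldsymbol{\eta}),
\qquad
\boldsymbol{\eta} \;=\; \mathbf{X}\boldsymbol{\beta} + \mathbf{Z}\mathbf{u},
```

where C is a fixed matrix that composes several transformed predictors into each observed mean. "Exploding" the likelihood then refers, roughly, to rewriting a single observed response (for example a ranking) as a sequence of simpler pseudo-responses so that standard latent variable machinery can be reused.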
59.
The PARELLA model is a probabilistic parallelogram model that can be used for the measurement of latent attitudes or latent preferences. The data analyzed are the dichotomous responses of persons to items, with a one (zero) indicating agreement (disagreement) with the content of the item. The model provides a unidimensional representation of persons and items. The response probabilities are a function of the distance between person and item: the smaller the distance, the larger the probability that a person will agree with the content of the item. This paper discusses how the approach to differential item functioning presented by Thissen, Steinberg, and Wainer can be implemented for the PARELLA model. Requests for the PARELLA software should be sent to iec ProGAMMA, PO Box 841, 9700 AV Groningen, The Netherlands.
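For readers unfamiliar with the model, the response probability is a unimodal function of the person-item distance; one parameterization commonly associated with the PARELLA model (given here only as background, and worth checking against the original source) is:

```latex
P\!\left(X_{vi}=1 \mid \theta_{v}, \delta_{i}, \gamma\right)
\;=\; \frac{1}{1+\left(\theta_{v}-\delta_{i}\right)^{2\gamma}},
```

where θ_v is the person location, δ_i the item location, and γ > 0 governs how quickly the probability of agreement falls off as the person-item distance grows.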
60.
Although the Bock-Aitkin likelihood-based estimation method for factor analysis of dichotomous item response data has important advantages over classical analysis of item tetrachoric correlations, a serious limitation of the method is its reliance on fixed-point Gauss-Hermite (G-H) quadrature in the solution of the likelihood equations and likelihood-ratio tests. When the number of latent dimensions is large, computational considerations require that the number of quadrature points per dimension be few. But with large numbers of items, the dispersion of the likelihood, given the response pattern, becomes so small that the likelihood cannot be accurately evaluated with the sparse fixed points in the latent space. In this paper, we demonstrate that substantial improvement in accuracy can be obtained by adapting the quadrature points to the location and dispersion of the likelihood surfaces corresponding to each distinct pattern in the data. In particular, we show that adaptive G-H quadrature, combined with mean and covariance adjustments at each iteration of an EM algorithm, produces an accurate, fast-converging solution with as few as two points per dimension. Evaluations of this method with simulated data are shown to yield accurate recovery of the generating factor loadings for models of up to eight dimensions. Unlike an earlier application of adaptive Gibbs sampling to this problem by Meng and Schilling, the simulations also confirm the validity of the present method in calculating likelihood-ratio chi-square statistics for determining the number of factors required in the model. Finally, we apply the method to a sample of real data from a test of teacher qualifications.
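To illustrate the mechanism in its simplest form, the sketch below adapts one-dimensional Gauss-Hermite nodes to the location and scale of a single integrand, which is the core idea behind the per-pattern mean and covariance adjustments described above. It is a toy illustration, not the authors' multidimensional EM implementation, and the function and variable names are my own.

```python
import numpy as np

def adaptive_gh(log_f, mu, sigma, n_points=2):
    """Approximate the integral of f(theta) over the real line with adaptive
    Gauss-Hermite quadrature, recentring and rescaling the standard nodes by
    the mode (mu) and a curvature-based scale (sigma) of the integrand."""
    nodes, weights = np.polynomial.hermite.hermgauss(n_points)  # weight exp(-x^2)
    theta = mu + np.sqrt(2.0) * sigma * nodes                   # adapted quadrature points
    # Remove the exp(-x^2) weight that Gauss-Hermite assumes and apply the
    # Jacobian of the change of variables theta = mu + sqrt(2) * sigma * x.
    vals = np.exp(log_f(theta) + nodes ** 2)
    return np.sqrt(2.0) * sigma * np.sum(weights * vals)

# Toy check: an unnormalized normal "likelihood"; the exact integral is sqrt(2*pi)*sd.
sd = 0.3
log_f = lambda t: -0.5 * ((t - 1.0) / sd) ** 2
print(adaptive_gh(log_f, mu=1.0, sigma=sd, n_points=2))  # matches the exact value below
print(np.sqrt(2.0 * np.pi) * sd)
```

With well-chosen mu and sigma, even two nodes reproduce the integral of a near-normal likelihood almost exactly, which mirrors the two-points-per-dimension accuracy reported in the abstract.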