Similar Documents
20 similar documents were found.
1.
One of the most crucial issues in knowledge space theory is the construction of the so-called knowledge structures. In the present paper, a new data-driven procedure for large data sets is described, which overcomes some of the drawbacks of the already existing methods. The procedure, called k-states, is an incremental extension of the k-modes algorithm, which generates a sequence of locally optimal knowledge structures of increasing size, among which a “best” model is selected. The performance of k-states is compared to two other procedures in both a simulation study and an empirical application. In the former, k-states displays better accuracy in reconstructing knowledge structures; in the latter, the structure extracted by k-states obtains a better fit.
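The abstract does not spell out the algorithmic details, but the k-modes step at the heart of a k-states-style procedure can be illustrated with a small sketch: binary response patterns are clustered around k prototype states by Hamming distance, and the resulting modes serve as candidate knowledge states. This is a minimal illustration under assumed conventions; the function names, the majority-vote update, and the addition of the empty and full states are illustrative choices, not the authors' implementation, and the real procedure also selects a "best" structure among structures of increasing size.

```python
import numpy as np

def k_modes_states(responses, k, n_iter=50, rng=None):
    """Cluster binary response patterns into k prototype knowledge states
    (modes) by Hamming distance -- a sketch of the k-modes step behind a
    k-states-style procedure."""
    rng = np.random.default_rng(rng)
    n, m = responses.shape
    modes = responses[rng.choice(n, size=k, replace=False)].copy()
    for _ in range(n_iter):
        # assign each response pattern to its nearest mode (Hamming distance)
        d = (responses[:, None, :] != modes[None, :, :]).sum(axis=2)
        labels = d.argmin(axis=1)
        new_modes = modes.copy()
        for j in range(k):
            members = responses[labels == j]
            if len(members):
                # column-wise majority vote gives the updated mode
                new_modes[j] = (members.mean(axis=0) >= 0.5).astype(int)
        if np.array_equal(new_modes, modes):
            break
        modes = new_modes
    # the distinct modes, plus the empty and full sets, form a candidate
    # knowledge structure of size at most k + 2
    structure = {tuple(mode) for mode in modes} | {(0,) * m, (1,) * m}
    return structure, labels

# toy usage: 200 simulated dichotomous response patterns over 4 items
rng = np.random.default_rng(0)
data = (rng.random((200, 4)) < 0.5).astype(int)
structure, labels = k_modes_states(data, k=5, rng=1)
print(sorted(structure))
```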

2.
3.
4.
Epskamp, Sacha. Psychometrika, 2020, 85(1): 206-231

Researchers in the field of network psychometrics often focus on the estimation of Gaussian graphical models (GGMs)—an undirected network model of partial correlations—between observed variables of cross-sectional data or single-subject time-series data. This assumes that all variables are measured without measurement error, which may be implausible. In addition, cross-sectional data cannot distinguish between within-subject and between-subject effects. This paper provides a general framework that extends GGM modeling with latent variables, including relationships over time. These relationships can be estimated from time-series data or panel data featuring at least three waves of measurement. The model takes the form of a graphical vector-autoregression model between latent variables and is termed the ts-lvgvar when estimated from time-series data and the panel-lvgvar when estimated from panel data. These methods have been implemented in the software package psychonetrics, which is exemplified in two empirical examples, one using time-series data and one using panel data, and evaluated in two large-scale simulation studies. The paper concludes with a discussion on ergodicity and generalizability. Although within-subject effects may in principle be separated from between-subject effects, the interpretation of these results rests on the intensity and the time interval of measurement and on the plausibility of the assumption of stationarity.
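The estimation itself is carried out in the R package psychonetrics, which is not reproduced here. As a rough illustration of the kind of model involved, the hypothetical Python sketch below assumes the latent scores are already available (ignoring the measurement model), fits a lag-1 vector autoregression for the temporal network, and derives a contemporaneous partial-correlation network from the residual precision matrix.

```python
import numpy as np

def graphical_var(latent_scores):
    """Minimal graphical-VAR sketch: lag-1 temporal coefficients plus a
    contemporaneous partial-correlation network from the residual precision matrix."""
    y_now, y_prev = latent_scores[1:], latent_scores[:-1]
    # temporal network: least-squares VAR(1) coefficient matrix
    beta, *_ = np.linalg.lstsq(y_prev, y_now, rcond=None)
    residuals = y_now - y_prev @ beta
    # contemporaneous network: partial correlations from the inverse
    # residual covariance (precision) matrix
    precision = np.linalg.inv(np.cov(residuals, rowvar=False))
    scale = np.sqrt(np.diag(precision))
    partial_corr = -precision / np.outer(scale, scale)
    np.fill_diagonal(partial_corr, 1.0)
    # transpose beta so rows index outcomes at time t
    return beta.T, partial_corr

# toy usage with 300 time points of 3 latent variables (scores assumed known)
rng = np.random.default_rng(0)
scores = rng.standard_normal((300, 3))
temporal, contemporaneous = graphical_var(scores)
print(temporal.round(2), contemporaneous.round(2), sep="\n")
```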


5.
A computer program is described that generates random form stimuli by a method parallel to that described by Attneave & Arnoult (1956) for constructing forms with both angles and arcs in their perimeters. The program also performs a physical analysis of the forms by computing the values of a large number of physical variables describing each form. Existing forms may also be analyzed by the program. Thus, it has the capability to perform physical analyses of approximations to natural forms.
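The original program is not available here; the sketch below is a loose, hypothetical Python analogue that generates a random straight-edged form (no arcs) and computes a few of the kinds of physical variables such a program might report, such as perimeter, area, and compactness. The function names and the specific measures are illustrative assumptions, not the published program.

```python
import numpy as np

def random_form(n_vertices=8, rng=None):
    """Scatter random points and order them by angle about their centroid to
    obtain a closed, straight-edged random form."""
    rng = np.random.default_rng(rng)
    pts = rng.random((n_vertices, 2)) * 100.0
    d = pts - pts.mean(axis=0)
    return pts[np.argsort(np.arctan2(d[:, 1], d[:, 0]))]

def physical_measures(poly):
    """A few physical variables of the kind such a program might report:
    perimeter, area (shoelace formula), and compactness."""
    closed = np.vstack([poly, poly[:1]])
    perimeter = np.linalg.norm(np.diff(closed, axis=0), axis=1).sum()
    x, y = closed[:-1, 0], closed[:-1, 1]
    x2, y2 = closed[1:, 0], closed[1:, 1]
    area = 0.5 * abs(np.sum(x * y2 - x2 * y))
    return {"perimeter": perimeter,
            "area": area,
            "compactness": 4 * np.pi * area / perimeter ** 2}

print(physical_measures(random_form(8, rng=0)))
```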

6.
7.
8.
9.
10.
This paper addresses the problem of upgrading functional information to knowledge. Functional information is defined as syntactically well-formed, meaningful, and collectively opaque data. Its use in the formal epistemology of information theories is crucial for resolving the debate on the veridical nature of information, and it represents the companion notion to standard strongly semantic information, defined as well-formed, meaningful, and true data. The formal framework on which the definitions are based uses a contextual version of the verificationist principle of truth in order to connect functional to semantic information, avoiding Gettierization and decoupling from true informational contents. The upgrade operation from functional information uses the machinery of epistemic modalities in order to add data localization and accessibility as its main properties. We show in this way the conceptual worth of this notion for issues in contemporary epistemology debates, such as the explanation of knowledge acquisition from information retrieval systems and open data repositories.

11.
12.
The present paper proposes a model of knowledge sharing in which coworker congruence, outcome interdependence, perceived organizational support, and procedural justice influence knowledge sharing indirectly through the mediation of instrumental ties and expressive ties, and examines gender differences in causal connections within the model. In a sample of employees in Taiwan, it was shown that the influence of instrumental ties on knowledge sharing is stronger for females than for males; the influence of expressive ties on knowledge sharing is stronger for males than for females; the influence of coworker congruence on expressive ties is stronger for females than for males; the influence of outcome interdependence on instrumental ties is stronger for females than for males; and the influence of perceived organizational support on instrumental ties is stronger for males than for females.
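A model like this would normally be estimated with multi-group structural equation modeling; as a much cruder illustration of the multi-group idea, the hypothetical Python sketch below estimates two of the path coefficients separately for each gender with ordinary least squares so their strengths can be compared. The variable names and the simulated data are placeholders, not the study's measures or method.

```python
import numpy as np

def ols_slope(x, y):
    """Simple-regression slope of y on x (intercept included)."""
    X = np.column_stack([np.ones_like(x), x])
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    return coef[1]

def gendered_paths(congruence, instrumental, sharing, is_female):
    """Estimate the predictor -> mediator and mediator -> knowledge-sharing
    paths separately for each gender so their strengths can be compared."""
    out = {}
    for label, mask in [("female", is_female), ("male", ~is_female)]:
        out[label] = {
            "congruence -> instrumental ties": ols_slope(congruence[mask], instrumental[mask]),
            "instrumental ties -> sharing": ols_slope(instrumental[mask], sharing[mask]),
        }
    return out

# toy usage with simulated data standing in for the survey measures
rng = np.random.default_rng(0)
n = 400
female = rng.random(n) < 0.5
congruence = rng.standard_normal(n)
instrumental = 0.4 * congruence + rng.standard_normal(n)
sharing = 0.5 * instrumental + rng.standard_normal(n)
print(gendered_paths(congruence, instrumental, sharing, female))
```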

13.
An information-theoretic framework is used to analyze the knowledge content in multivariate cross-classified data. Several related measures based directly on the information concept are proposed: the knowledge content (S) of a cross classification, its terseness (ζ), and the separability (Γ_X) of one variable given all others. Exemplary applications are presented which illustrate the solutions obtained where classical analysis is unsatisfactory, such as optimal grouping, the analysis of very skew tables, or the interpretation of well-known paradoxes. Further, the separability suggests a solution for the classic problem of inductive inference which is independent of sample size.
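The exact definitions of S, ζ, and Γ_X are not given in the abstract; the hypothetical Python sketch below only computes the standard entropy-based ingredients (marginal and joint entropies, mutual information, conditional entropy) from which such measures of a cross classification are typically built. The function names and the choice of a two-way table are assumptions for illustration.

```python
import numpy as np

def entropy(p):
    """Shannon entropy (bits) of a probability table of any shape."""
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

def cross_classification_info(counts):
    """Entropy-based ingredients for knowledge-content-style measures of a
    two-way cross classification."""
    p = counts / counts.sum()
    px, py = p.sum(axis=1), p.sum(axis=0)
    h_x, h_y, h_xy = entropy(px), entropy(py), entropy(p)
    return {"H(X)": h_x, "H(Y)": h_y, "H(X,Y)": h_xy,
            "I(X;Y)": h_x + h_y - h_xy,   # shared information
            "H(X|Y)": h_xy - h_y}         # residual uncertainty about X given Y

# toy usage: a very skew 2 x 3 table of counts
table = np.array([[80.0, 10.0, 5.0],
                  [2.0, 1.0, 2.0]])
print(cross_classification_info(table))
```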

14.
A new method is proposed for the statistical analysis of dyadic social interaction data measured over time. The data to be studied are assumed to be realizations of a social network of a fixed set of actors interacting on a single relation. The method is based on loglinear models for the probabilities for various dyad (or actor pair) states and generalizes the statistical methods proposed by Holland and Leinhardt (1981), Fienberg, Meyer, & Wasserman (1985), and Wasserman (1987) for social network data. Two statistical models are described: the first is an associative approach that allows for the study of how the network has changed over time; the second is a predictive approach that permits the researcher to model one time point as a function of previous time points. These approaches are briefly contrasted with earlier methods for the sequential analysis of social networks and are illustrated with an example of longitudinal sociometric data. Research support provided by National Science Foundation Grant #SES84-08626 to the University of Illinois at Urbana-Champaign and by a predoctoral traineeship awarded to the second author by the Quantitative Methods Program of the Department of Psychology, University of Illinois at Urbana-Champaign, funded by ADAMHA, National Research Service Award #MH14257. We thank the editor and three anonymous referees for helpful comments. This paper is based on research presented at the 1986 Annual Meeting of the Psychometric Society, Toronto, Ontario, June, 1986.
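The paper's loglinear models themselves are not reproduced in the abstract; as an illustration of the kind of data they operate on, the hypothetical Python sketch below cross-classifies dyad states (mutual, asymmetric, null) at two time points and computes the likelihood-ratio statistic of a simple independence loglinear model for that table. The actual models of Holland and Leinhardt, Fienberg, Meyer, & Wasserman, and Wasserman are considerably richer than this sketch.

```python
import numpy as np
from itertools import combinations

STATES = ["mutual", "asymmetric", "null"]

def dyad_state(adj, i, j):
    """Classify the (i, j) dyad from a binary adjacency matrix."""
    a, b = adj[i, j], adj[j, i]
    return "mutual" if a and b else ("null" if not a and not b else "asymmetric")

def dyad_transition_table(adj_t1, adj_t2):
    """Cross-classify dyad states at two time points -- the kind of table a
    loglinear model for network change is fitted to."""
    table = np.zeros((3, 3))
    for i, j in combinations(range(adj_t1.shape[0]), 2):
        table[STATES.index(dyad_state(adj_t1, i, j)),
              STATES.index(dyad_state(adj_t2, i, j))] += 1
    return table

def g_squared_independence(table):
    """Likelihood-ratio (G^2) statistic of the independence loglinear model,
    i.e. the hypothesis that dyad states at time 2 do not depend on time 1."""
    expected = np.outer(table.sum(axis=1), table.sum(axis=0)) / table.sum()
    observed = table[table > 0]
    return 2.0 * np.sum(observed * np.log(observed / expected[table > 0]))

# toy usage with two random directed networks on 20 actors
rng = np.random.default_rng(0)
a1 = (rng.random((20, 20)) < 0.2).astype(int)
a2 = (rng.random((20, 20)) < 0.2).astype(int)
print(g_squared_independence(dyad_transition_table(a1, a2)))
```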

15.
A computer program for programming schedules of reinforcement is described. Students can use the program to experience schedules of reinforcement that are typically used with nonhuman subjects. A cumulative record of a student's responding can be shown on the screen and/or printed on the computer's printer. The program can also be used to program operant schedules for animal subjects. The program was tested with human subjects experiencing fixed-ratio, variable-ratio, fixed-interval, and variable-interval schedules. Human performance on a given schedule was similar to that of nonhuman subjects on the same schedule.
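The original program is not described in implementation detail; the hypothetical Python sketch below shows one way to encode the four schedule types that were tested, with each respond() call returning whether that response is reinforced. The class names and the uniform ranges used for the variable schedules are illustrative assumptions, not the published program.

```python
import random
import time

class FixedRatio:
    """FR n: reinforce every n-th response."""
    def __init__(self, n):
        self.n, self.count = n, 0
    def respond(self):
        self.count += 1
        if self.count >= self.n:
            self.count = 0
            return True
        return False

class VariableRatio:
    """VR n: reinforce after a random number of responses averaging n."""
    def __init__(self, n):
        self.n, self.count = n, 0
        self.target = random.randint(1, 2 * n - 1)
    def respond(self):
        self.count += 1
        if self.count >= self.target:
            self.count, self.target = 0, random.randint(1, 2 * self.n - 1)
            return True
        return False

class FixedInterval:
    """FI t: reinforce the first response after t seconds have elapsed."""
    def __init__(self, t):
        self.t, self.start = t, time.monotonic()
    def respond(self):
        if time.monotonic() - self.start >= self.t:
            self.start = time.monotonic()
            return True
        return False

class VariableInterval:
    """VI t: reinforce the first response after a random interval averaging t seconds."""
    def __init__(self, t):
        self.t, self.start = t, time.monotonic()
        self.wait = random.uniform(0, 2 * t)
    def respond(self):
        if time.monotonic() - self.start >= self.wait:
            self.start, self.wait = time.monotonic(), random.uniform(0, 2 * self.t)
            return True
        return False

# toy usage: a subject "responds" and the schedule decides on reinforcement
schedule = VariableRatio(5)
reinforced = [schedule.respond() for _ in range(50)]
print(sum(reinforced), "reinforcers in 50 responses")
```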

16.
17.
Models for quantitative (or numerical) testing, such as educational testing, have a relatively long tradition in psychology, while the qualitative (or nonnumerical) approach to psychometrics is more recent. The approach presented in this paper can be regarded as an attempt to integrate, to some extent, the numerical and nonnumerical fields. In numerical testing a subject is characterized by some real-valued parameter representing her level or ability. In the nonnumerical approach the knowledge state of an individual is represented by the subset of problems that the individual is capable of solving. We propose a model in which the relationship between the ability levels and the knowledge states is worked out on a probabilistic basis. The central idea is that the ability parameters and the knowledge states are not independent. A logistic model is derived which specifies the probabilities of the knowledge states conditional on the ability levels. We show that the Rasch model arises as a special case of the proposed model.
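The abstract does not give the model's exact form. One natural way to write such a conditional distribution, shown in the hypothetical Python sketch below, assigns each knowledge state a probability proportional to the product of Rasch solving probabilities for the problems it contains (and of the complementary probabilities for those it lacks), renormalized over the states admitted by the structure. When the structure contains all subsets this reduces to independent Rasch responding, which is consistent with the Rasch model arising as a special case, though the paper's own derivation may differ.

```python
import numpy as np
from itertools import chain, combinations

def rasch_prob(theta, b):
    """Rasch item response function: probability of solving an item of difficulty b."""
    return 1.0 / (1.0 + np.exp(-(theta - b)))

def state_probabilities(theta, difficulties, states):
    """P(knowledge state | ability): product of Rasch solving probabilities for
    items in the state and of complements for items outside it, renormalized
    over the states the structure admits."""
    probs = []
    for state in states:
        p = 1.0
        for q, b in enumerate(difficulties):
            pq = rasch_prob(theta, b)
            p *= pq if q in state else (1.0 - pq)
        probs.append(p)
    probs = np.array(probs)
    return probs / probs.sum()

# toy usage: with ALL subsets as states, this is just independent Rasch responding
states = [frozenset(c) for c in chain.from_iterable(
    combinations(range(3), r) for r in range(4))]
print(state_probabilities(theta=0.5, difficulties=[-1.0, 0.0, 1.0], states=states))
```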

18.
19.
This study attempts to determine whether a relationship exists between first-to-second-year retention and social network variables for a cohort of first-year students at a small liberal arts college. The social network is reconstructed not from survey data, as is most common, but from archival data in a student information system. Each student is given a retention score and an attrition score based on the behavior of their immediate relationships in the network. Those scores are then entered into a logistic regression that also includes the background and performance variables traditionally found to be significantly related to retention. Students' friends' retention and attrition behaviors are found to have a greater impact on retention than any background or performance variable.
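As a hypothetical illustration of the scoring step, the Python sketch below derives each student's friends' retention and attrition scores from an adjacency matrix (in the study, built from archival records rather than surveys) and enters them into a logistic regression alongside a simulated background variable. The simulated network, the GPA covariate, and all names are assumptions, not the study's data or exact procedure.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def friends_scores(adjacency, retained):
    """For each student, the share of immediate network neighbours who were
    retained and the share who left (attrition)."""
    degree = adjacency.sum(axis=1)
    degree[degree == 0] = 1.0  # isolates: avoid division by zero
    retention_score = adjacency @ retained / degree
    attrition_score = adjacency @ (1.0 - retained) / degree
    return retention_score, attrition_score

# toy usage: in the study the network would come from archival records
rng = np.random.default_rng(0)
n = 150
adjacency = np.triu((rng.random((n, n)) < 0.05).astype(float), 1)
adjacency = adjacency + adjacency.T
retained = (rng.random(n) < 0.8).astype(float)  # outcome: returned for year two
hs_gpa = rng.normal(3.0, 0.5, n)                # a traditional background variable

ret_score, att_score = friends_scores(adjacency, retained)
X = np.column_stack([ret_score, att_score, hs_gpa])
model = LogisticRegression().fit(X, retained)
print(dict(zip(["friends_retention", "friends_attrition", "hs_gpa"], model.coef_[0])))
```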

20.
Netscal: A network scaling algorithm for nonsymmetric proximity data
A simple property of networks is used as the basis for a scaling algorithm that represents nonsymmetric proximities as network distances. The algorithm determines which vertices are directly connected by an arc and estimates the length of each arc. Network distance, defined as the minimum path length between vertices, is assumed to be a generalized power function of the data. The derived network structure, however, is invariant across monotonic transformations of the data. A Monte Carlo simulation and applications to eight sets of proximity data support the practical utility of the algorithm. I am grateful to Roger Shepard and Amos Tversky for their helpful comments and guidance throughout this project. The work was supported by National Science Foundation Grant BNS-75-02806 to Roger Shepard and a National Science Foundation Graduate Fellowship to the author. Parts of this paper were drawn from a doctoral dissertation submitted to Stanford University (Hutchinson, 1981).
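The arc-detection and arc-length estimation steps are not specified in the abstract; the hypothetical Python sketch below only illustrates the representational claim: given a (possibly nonsymmetric) matrix of estimated arc lengths, network distances are minimum path lengths, and their fit to a power transform of the proximity data can be assessed. The function names, the toy matrices, and the simple RMSE criterion are assumptions, not the NETSCAL algorithm itself.

```python
import numpy as np

def shortest_path_lengths(arc_lengths):
    """All-pairs minimum path lengths (Floyd-Warshall) for a weighted,
    possibly nonsymmetric arc-length matrix; np.inf marks absent arcs."""
    d = arc_lengths.copy()
    np.fill_diagonal(d, 0.0)
    for k in range(d.shape[0]):
        d = np.minimum(d, d[:, k, None] + d[None, k, :])
    return d

def network_fit(proximities, arc_lengths, alpha=1.0, beta=1.0):
    """Root-mean-square discrepancy between network distances and a power
    transform of the proximity data (dist ~ alpha * prox**beta)."""
    dist = shortest_path_lengths(arc_lengths)
    target = alpha * proximities ** beta
    off = ~np.eye(len(dist), dtype=bool)  # ignore the diagonal
    return np.sqrt(np.mean((dist[off] - target[off]) ** 2))

# toy usage: three vertices, direct arcs only between neighbouring vertices
arcs = np.array([[np.inf, 1.0,    np.inf],
                 [1.5,    np.inf, 2.0],
                 [np.inf, 2.5,    np.inf]])
prox = np.array([[0.0, 1.0, 3.0],
                 [1.5, 0.0, 2.0],
                 [4.0, 2.5, 0.0]])
print(network_fit(prox, arcs))
```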
