Similar Literature
20 similar articles found.
1.
The equivalence of two multivariate classification schemes is shown when the sizes of the samples drawn from the populations to which assignment is required are identical. One scheme is based on posterior probabilities determined from a Bayesian density function; the second scheme is based on likelihood ratio discriminant scores. Both of these procedures involve prior probabilities; if estimates of these priors are obtained from the identical sample sizes, the equivalence follows.
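
A minimal numpy/scipy sketch of the idea (the data, plug-in density estimates, and decision rules below are illustrative assumptions, not the paper's derivation): with equal sample sizes the estimated priors are equal, so the Bayes posterior rule and the likelihood-ratio rule always assign identically.

```python
import numpy as np
from scipy.stats import multivariate_normal

# Two training samples of identical size n, so the priors estimated
# from sample sizes, n_i / (n_1 + n_2), are equal for both populations.
rng = np.random.default_rng(0)
n = 50
sample1 = rng.multivariate_normal([0.0, 0.0], np.eye(2), size=n)
sample2 = rng.multivariate_normal([2.0, 2.0], np.eye(2), size=n)

# Plug-in normal densities estimated from each sample.
f1 = multivariate_normal(sample1.mean(axis=0), np.cov(sample1, rowvar=False))
f2 = multivariate_normal(sample2.mean(axis=0), np.cov(sample2, rowvar=False))
prior1 = prior2 = n / (2 * n)

def bayes_rule(x):
    """Assign x to the population with the larger posterior probability."""
    return 1 if prior1 * f1.pdf(x) >= prior2 * f2.pdf(x) else 2

def likelihood_ratio_rule(x):
    """Assign x to population 1 iff the likelihood ratio f1/f2 exceeds
    the ratio of priors (here 1, since the estimated priors are equal)."""
    return 1 if f1.pdf(x) / f2.pdf(x) >= prior2 / prior1 else 2

x_new = np.array([1.0, 0.8])
assert bayes_rule(x_new) == likelihood_ratio_rule(x_new)  # rules agree
```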

2.
Latent distance analysis provides a probability model for the non-perfect Guttman scale; the restricted latent distance structure is simpler to compute than the general structure. Since no sampling theory for latent structure analysis is available, the advantages of the general structure cannot be expressed formally. The two structures are compared in terms of their fit to fifteen sets of empirical data. The computation schemes used are summarized.

3.
Connectionist models with the backpropagation learning rule are said to exhibit catastrophic interference (or forgetting) under sequential training. Subsequent work showed that interference can be reduced by using orthogonal inputs. This study investigated, with a more rigorous assessment method, whether all orthogonal inputs lead to a comparable extent of interference, using three coding schemes. The results revealed large differences between the coding schemes. With larger networks, dense inputs led to more severe interference than sparse inputs. With smaller networks, all three schemes led to a comparable extent of interference. This study therefore showed that not all orthogonal inputs cause the same extent of interference, and that the severity of interference depends on the interaction of the input coding scheme and the network size.
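
A rough sketch of the kind of comparison described, assuming a tiny one-hidden-layer backprop network and two hypothetical orthogonal coding schemes (one-hot sparse codes versus dense orthonormal codes); the study's actual networks, tasks, and assessment method are more rigorous.

```python
import numpy as np

rng = np.random.default_rng(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train_then_test(inputs_a, inputs_b, hidden=16, epochs=500, lr=0.5):
    """Train a one-hidden-layer backprop net on task A, then on task B,
    and return the squared error on task A before and after task B
    (a crude forgetting measure)."""
    targets_a = (rng.random((len(inputs_a), 1)) > 0.5).astype(float)
    targets_b = (rng.random((len(inputs_b), 1)) > 0.5).astype(float)
    W1 = rng.normal(0.0, 0.5, (inputs_a.shape[1], hidden))
    W2 = rng.normal(0.0, 0.5, (hidden, 1))

    def fit(x, t):
        nonlocal W1, W2
        for _ in range(epochs):
            h = sigmoid(x @ W1)
            y = sigmoid(h @ W2)
            d2 = (y - t) * y * (1.0 - y)        # output-layer deltas
            d1 = (d2 @ W2.T) * h * (1.0 - h)    # hidden-layer deltas
            W2 -= lr * h.T @ d2 / len(x)
            W1 -= lr * x.T @ d1 / len(x)

    def err(x, t):
        return float(np.mean((sigmoid(sigmoid(x @ W1) @ W2) - t) ** 2))

    fit(inputs_a, targets_a)
    before = err(inputs_a, targets_a)
    fit(inputs_b, targets_b)
    return before, err(inputs_a, targets_a)

# Two orthogonal coding schemes over 8 input units:
sparse_a, sparse_b = np.eye(8)[:4], np.eye(8)[4:]     # one-hot (sparse)
Q = np.linalg.qr(rng.normal(size=(8, 8)))[0]          # dense orthonormal
for name, a, b in [("sparse", sparse_a, sparse_b), ("dense", Q[:4], Q[4:])]:
    before, after = train_then_test(a, b)
    print(f"{name}: task-A error before={before:.3f}, after={after:.3f}")
```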

4.
By extending a technique for testing the difference between two dependent correlations developed by Wolfe, a strategy is proposed in a more general matrix context for evaluating a variety of data analysis schemes that are supposed to clarify the structure underlying a set of proximity measures. In the applications considered, a data analysis scheme is assumed to reconstruct in matrix form the given data set (represented as a proximity matrix) based on some specific model or procedure. Thus, an evaluation of the adequacy of reconstruction can be developed by comparing matrices, one containing the original proximities and the second containing the reconstructed values. Possible applications in multidimensional scaling, clustering, and related contexts are emphasized using four broad categories: (a) Given two different reconstructions based on a single data set, does either represent the data significantly better than the other? (b) Given two reconstructions based on a single data set using two different procedures (or possibly, two distinct data sets and a common method), is either reconstruction significantly closer to a particular theoretical structure that is assumed to underlie the data (where the latter is also represented in matrix form)? (c) Given two theoretical structures and one reconstruction based on a single data set, does either represent the reconstruction better than the other? (d) Given a single reconstruction based on one data set, is the information present in the data accounted for satisfactorily by the reconstruction? In all cases, these tasks can be approached by a nonparametric procedure that assesses the similarity in pattern between two appropriately defined matrices. The latter are obtained from the original data, the reconstructions, and/or the theoretical structures. Finally, two numerical examples are given to illustrate the more general discussion.
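
The common ingredient of all four tasks is a nonparametric assessment of pattern similarity between two matrices. The sketch below shows one generic form of such a test (a Mantel-type permutation statistic, used here only to illustrate the idea, not the paper's exact procedure):

```python
import numpy as np

def matrix_pattern_test(A, B, n_perm=2000, seed=0):
    """Permutation test of pattern similarity between two symmetric
    n-by-n proximity matrices: correlate their off-diagonal entries,
    then compare the observed correlation with the distribution
    obtained by randomly relabelling the n objects."""
    rng = np.random.default_rng(seed)
    n = A.shape[0]
    iu = np.triu_indices(n, k=1)
    observed = np.corrcoef(A[iu], B[iu])[0, 1]
    null = np.empty(n_perm)
    for k in range(n_perm):
        p = rng.permutation(n)
        null[k] = np.corrcoef(A[np.ix_(p, p)][iu], B[iu])[0, 1]
    p_value = (1 + np.sum(null >= observed)) / (n_perm + 1)
    return observed, p_value
```

Applied to an original proximity matrix and a matrix of reconstructed values, a small p-value indicates that the reconstruction captures reliable pattern in the data (case d above); variants of the same idea handle the comparative cases (a) through (c).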

5.
Models such as that of Olshausen and Field (O&F, 1997, Vision Research 37, 3311-3325) and principal components analysis (PCA) have been used to model simple-cell receptive fields, and to try to elucidate the statistical principles underlying visual coding in area V1. They connect the statistical structure of natural images with the statistical structure of the coding used in V1. The O&F model has created particular interest because the basis functions it produces resemble the receptive fields of simple cells. We evaluate these models in terms of their sparseness and dispersal, both of which have been suggested as desirable for efficient visual coding. However, both attributes have been defined ambiguously in the literature, and we have been obliged to formulate specific definitions in order to allow any comparison between models at all. We find that both attributes are strongly affected by any preprocessing (e.g. spectral pseudo-whitening or a logarithmic transformation) which is often applied to images before they are analysed by PCA or the O&F model. We also find that measures of sparseness are affected by the size of the filters: PCA filters with small receptive fields appear sparser than PCA filters with larger spatial extent. Finally, normalisation of the means and variances of filters influences measures of dispersal. It is necessary to control for all of these factors before making any comparisons between different models. Having taken these factors into account, we find that the code produced by the O&F model is somewhat sparser than the code produced by PCA. However, the difference is rather smaller than might have been expected, and a measure of dispersal is required to distinguish clearly between the two models.
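
As one concrete illustration, sparseness of a code is often summarized by the kurtosis of filter responses; the sketch below assumes that definition, which is only one of the possible formulations the paper says must be fixed explicitly before comparison.

```python
import numpy as np
from scipy.stats import kurtosis

def code_sparseness(filters, patches):
    """Mean excess kurtosis of each filter's responses across image
    patches (higher = sparser), computed on standardized responses.
    filters: (n_filters, n_pixels); patches: (n_patches, n_pixels).
    Preprocessing of the patches and the filters' spatial extent both
    change the result, which is why they must be controlled."""
    responses = patches @ filters.T
    responses = (responses - responses.mean(axis=0)) / responses.std(axis=0)
    return float(np.mean(kurtosis(responses, axis=0)))
```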

6.
Syntactic errors in speech
Speech errors can be used to examine the nature of syntactic processing in speech production. Using such evidence, Fay (1980a, 1980b) maintains that deep structure and transformations are psychologically real. However, an interactive activation model that generates surface syntactic structures directly can account for all the data. Most syntactic errors are substitutions: The target phrase structure is replaced by a semantically related structure. Blends of two syntactic structures are also common. Transformations cannot account for much of the data and are not necessary to explain any of them. While it is impossible to prove that transformations do not exist, syntactic theories that do not include transformations have the potential to be psychologically valid.

7.
8.
While much recent work has attempted to code negotiation interaction to identify how individuals use communication tactics in negotiation settings, many coding schemes have been developed to analyze simulated activities and may not be appropriate for the analysis of formal, professional negotiation events. Moreover, most coding research has failed to focus on the relationships between individual tactics and larger communication strategies. This article proposes a coding mechanism sensitive to formal, naturally occurring communication in negotiation settings and capable of identifying strategic use of individual tactics. The coding scheme is then applied to simulated and naturalistic negotiation interaction and the resulting data are assessed using lag sequential analysis. Significant differences are found between the naturalistic and simulated interactions, and strong patterns of communication strategy are identified.
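
A minimal sketch of lag sequential analysis at lag 1, with hypothetical tactic codes; the article's coding scheme and statistics are richer.

```python
import numpy as np

def lag1_sequential(codes, categories):
    """Lag-1 sequential analysis: count transitions between coded
    tactics and return adjusted residuals (z-scores) showing which
    tactic-to-tactic sequences occur more often than chance expects."""
    idx = {c: i for i, c in enumerate(categories)}
    k = len(categories)
    counts = np.zeros((k, k))
    for a, b in zip(codes, codes[1:]):
        counts[idx[a], idx[b]] += 1
    total = counts.sum()
    row = counts.sum(axis=1, keepdims=True)
    col = counts.sum(axis=0, keepdims=True)
    expected = row @ col / total
    # Allison-Liker adjusted residuals
    var = expected * (1 - row / total) * (1 - col / total)
    return (counts - expected) / np.sqrt(var)

# Hypothetical stream of coded tactics (T=threat, O=offer, C=concession):
z = lag1_sequential(list("TOOCTOCCTOOC"), ["T", "O", "C"])
```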

9.
Two lines of research—one in psycholinguistics and one in linguistics—are combined to deal with a long-standing problem in both fields: why the “performance structures” of sentences (structures based on experimental data, such as pausing and parsing values) cannot be fully accounted for by linguistic theories of phrase structure. Two psycholinguistic algorithms that have been used to predict these structures are described and their limitations are examined. A third algorithm, based on the prosodic structures of sentences, is then proposed and shown to be a far better predictor of performance structures. It is argued that the experimental data reflect aspects of the linguistic cognitive capacity, and that, in turn, linguistic theory can offer an illuminating account of the data. The prosodic model is shown to have a wider domain of application than temporal organization per se, accounting for parsing judgments as well as pausing performance, and reflecting aspects of syntactic and semantic structure as well as purely prosodic structure. Finally, the algorithm is discussed in light of language processing.

10.
Open-bigram and spatial-coding schemes provide different accounts of how letter position is encoded by the brain during visual word recognition. Open-bigram coding involves an explicit representation of order based on letter pairs, while spatial coding involves a comparison function operating over representations of individual letters. We identify a set of priming conditions (subset primes and reversed interior primes) for which the two types of coding schemes give opposing predictions, hence providing the opportunity for strong scientific inference. Experimental results are consistent with the open-bigram account, and inconsistent with the spatial-coding scheme.
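
A small sketch of an open-bigram code and a simple match score, with hypothetical prime-target pairs of the subset and reversed-interior types; the gap limit and match function here are illustrative assumptions, not the paper's exact scheme.

```python
from itertools import combinations

def open_bigrams(word, max_gap=2):
    """Ordered letter pairs with at most max_gap intervening letters:
    the positional code assumed by open-bigram schemes."""
    return {(word[i], word[j])
            for i, j in combinations(range(len(word)), 2)
            if j - i <= max_gap + 1}

def bigram_match(prime, target):
    """Proportion of the target's open bigrams shared by the prime."""
    p, t = open_bigrams(prime), open_bigrams(target)
    return len(p & t) / len(t)

# Hypothetical subset prime vs. reversed-interior prime for 'garden':
print(bigram_match("grdn", "garden"))    # subset prime: higher overlap
print(bigram_match("gedran", "garden"))  # reversed interior: lower overlap
```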

11.
Statistical Issues in the Study of Temporal Data: Daily Experiences
This article reviews statistical issues that arise in temporal data, particularly with respect to daily experience data. Issues related to nonindependence of observations, the nature of data structures, and claims of causality are considered. Through the analysis of data from a single subject, we illustrate concomitant time-series analysis, a general method of examining relationships between two or more series having 50 or more observations. We also discuss detection of and remedies for the problems of trend, cycles, and serial dependency that frequently plague temporal data, and present methods of combining the results of concomitant time series across subjects. Issues that arise in pooling cross-sectional and time-series data and statistical models for addressing these issues are considered for the case in which there are appreciably fewer than 50 observations and a moderate number of subjects. We discuss the possibility of using structural equation modeling to analyze data structures in which there are a large number (e.g., 200) of subjects, but relatively few time points, emphasizing the different causal status of synchronous and lagged effects and the types of models that can be specified for longitudinal data structures. Our conclusion highlights some of the issues raised by temporal data for statistical models, notably the important roles of substantive theory, the question being addressed, the properties of the data, and the assumptions underlying each technique in determining the optimal approach to statistical analysis.
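
As a minimal illustration of the trend and serial-dependency checks discussed, the sketch below detrends a hypothetical daily series and reports the lag-1 autocorrelation of the residuals; the series and its length are invented for the example.

```python
import numpy as np

def detrend_and_check(series):
    """Remove a linear trend, then report the lag-1 autocorrelation of
    the residuals as a basic check for remaining serial dependency."""
    t = np.arange(len(series))
    slope, intercept = np.polyfit(t, series, 1)
    resid = series - (slope * t + intercept)
    r1 = np.corrcoef(resid[:-1], resid[1:])[0, 1]
    return resid, r1

# Hypothetical 60-day diary series with trend and serial dependency:
rng = np.random.default_rng(2)
daily_mood = np.cumsum(rng.normal(size=60)) + 0.1 * np.arange(60)
resid, r1 = detrend_and_check(daily_mood)
print(f"lag-1 autocorrelation after detrending: {r1:.2f}")
```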

12.
The parent-child interaction strongly influences the emotional, behavioural, and cognitive development of young children. The nature of parent-child interactions differs in families with children with autism spectrum disorder (ASD), but research findings remain inconsistent and there is no consensus as to how these interactions should be coded. In a within-family study, the parent-child interaction between sixteen mothers and their child with ASD (M age = 68 months) and a younger sibling without ASD (M age = 48 months) was coded using both a global and a frequency coding scheme. Global and frequency codes of the same sample were compared to explore the value of each coding method and how the two could complement each other. In addition, each coding method’s ability to detect group differences was evaluated. We found that mothers used an interaction style characterized by more support and structure, and clearer instructions, in interaction with their children without ASD. In addition, global rating results suggested that within the ASD group, mothers may adapt their behaviour to the specific abilities of their child. Regarding the evaluation of coding method, results showed overlap between conceptually similar constructs included in both coding schemes. Although frequency coding clearly has its value, more qualitative aspects of the interaction were better captured by global rating scales, and global rating was more time efficient. For this purpose, global ratings might be preferable over frequency coding.

13.
Many methods exist for coding and analyzing observational data in observational studies of parent-child communication. This article introduces four mainstream classes of coding schemes: behavior-type coding, behavior-sequence coding, verbal-content coding, and communication-structure coding, and evaluates the characteristics and application contexts of each. Finally, it suggests three approaches to integrating coding schemes in future parent-child communication research: analyzing verbal content on the basis of behavior classification, discovering communication patterns from behavior sequences, and combining description of general trends with analysis of typical cases.

14.
The logic of a physical theory reflects the structure of the propositions referring to the behaviour of a physical system in the domain of the relevant theory. It is argued in relation to classical mechanics that the propositional structure of the theory allows truth-value assignment in conformity with the traditional conception of a correspondence theory of truth. Every proposition in classical mechanics is assigned a definite truth value, either ‘true’ or ‘false’, describing what is actually the case at a certain moment of time. Truth-value assignment in quantum mechanics, however, differs; it is known, by means of a variety of ‘no go’ theorems, that it is not possible to assign definite truth values to all propositions pertaining to a quantum system without generating a Kochen–Specker contradiction. In this respect, the Bub–Clifton ‘uniqueness theorem’ is utilized for arguing that truth-value definiteness is consistently restored with respect to a determinate sublattice of propositions defined by the state of the quantum system concerned and a particular observable to be measured. An account of truth of contextual correspondence is thereby provided that is appropriate to the quantum domain of discourse. The conceptual implications of the resulting account are traced down and analyzed at length. In this light, the traditional conception of correspondence truth may be viewed as a species or as a limit case of the more generic proposed scheme of contextual correspondence when the non-explicit specification of a context of discourse poses no further consequences.

15.
This paper contrasts two structural accounts of psychological similarity: structural alignment (SA) and Representational Distortion (RD). SA proposes that similarity is determined by how readily the structures of two objects can be brought into alignment; RD measures similarity by the complexity of the transformation that “distorts” one representation into the other. We assess RD by defining a simple coding scheme of psychological transformations for the experimental materials. In two experiments, this “concrete” version of RD provides compelling fits of the data and compares favourably with SA. Finally, stepping back from particular models, we argue that perceptual theory suggests that transformations and alignment processes should generally be viewed as complementary, in contrast to the current distinction in the literature.
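
The paper defines its own concrete coding scheme of transformations for its materials; as a generic stand-in, the sketch below counts minimal elementary distortions between two symbolic representations by dynamic programming, so that RD-style similarity falls as transformation complexity rises.

```python
def transformation_distance(a, b):
    """Minimal number of elementary 'distortions' (insert, delete, or
    substitute one element) needed to turn representation a into b,
    computed by dynamic programming. A stand-in for the paper's
    concrete RD coding scheme, which defines its own transformations."""
    m, n = len(a), len(b)
    d = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m + 1):
        d[i][0] = i
    for j in range(n + 1):
        d[0][j] = j
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            d[i][j] = min(d[i - 1][j] + 1,
                          d[i][j - 1] + 1,
                          d[i - 1][j - 1] + (a[i - 1] != b[j - 1]))
    return d[m][n]

# RD predicts similarity falls as transformation complexity rises:
print(transformation_distance("ABAB", "ABAA"))  # 1 distortion: similar
print(transformation_distance("ABAB", "BBAA"))  # 2 distortions: less so
```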

16.

Aperiodic materials, such as two-dimensional quasicrystalline structures, show characteristic elements in transmission electron microscopy images that are closely related to structure elements in the corresponding material. Although the arrangement of these structure elements fundamentally influences the properties of the two-dimensional quasicrystalline structures, it cannot be determined in a satisfactory way by conventional diffraction-based methods. We have developed an automated procedure for analysing images obtained by high-resolution transmission electron microscopy. The method, which is based on image processing, determines the two-dimensional arrangement of the characteristic features and enables subsequent statistical data analysis. It is illustrated with new tiling analyses of highly perfect decagonal Al-Co-Ni quasicrystals.

17.
This paper illustrates two formal models for psychiatric classification. The first model, called a hierarchical or tree structure, requires patient categories to be disjoint or strictly nested. The second model, called the generally overlapping or network model, allows patient categories to cut across each other in a variety of different ways. Thus, patient groups can be disjoint, strictly nested (as in a hierarchy), or partially overlapping. To derive classification schemes consistent with the structural models, two different clustering techniques were applied to interpatient similarity data collected on 50 psychiatric patients. A hierarchical clustering technique was applied to the similarity data to obtain a hierarchical classification. To obtain a generally overlapping classification, Peay's cliquing procedure was applied to the same data. Two criteria were used to compare the clustering solutions. First, a solution's goodness-of-fit to the original data was examined by calculating the proportion of variance accounted for by cluster categories. Second, the predictive accuracy of a solution was analyzed by looking at the categories' ability to predict treatment assignment. The generally overlapping solution produced the best fit to the original similarity data; however, the hierarchical solution's clusters tended to be more readily interpretable in terms of psychiatric syndromes. Both clustering solutions were relatively poor predictors of treatment assignment. It was concluded that the hierarchical and generally overlapping approaches, although not conclusively demonstrated, represented promising models for psychiatric classification.
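
A sketch of the hierarchical half of this comparison, using average-linkage clustering on hypothetical interpatient similarities and a simplified version of the proportion-of-variance criterion; Peay's cliquing procedure for the overlapping model is not reproduced here.

```python
import numpy as np
from scipy.cluster.hierarchy import fcluster, linkage
from scipy.spatial.distance import squareform

def hierarchical_classification(similarity, n_groups):
    """Cut an average-linkage hierarchy of interpatient similarities
    into n_groups disjoint categories, and report the proportion of
    variance in pairwise similarity accounted for by whether two
    patients share a category (a simplified fit criterion)."""
    dist = similarity.max() - similarity          # similarity -> distance
    np.fill_diagonal(dist, 0.0)
    Z = linkage(squareform(dist, checks=False), method="average")
    labels = fcluster(Z, n_groups, criterion="maxclust")
    iu = np.triu_indices(len(similarity), k=1)
    s = similarity[iu]
    same = labels[iu[0]] == labels[iu[1]]
    w = same.mean()
    ss_between = (w * (s[same].mean() - s.mean()) ** 2
                  + (1 - w) * (s[~same].mean() - s.mean()) ** 2)
    return labels, float(ss_between / s.var())

# Hypothetical symmetric similarity data for 10 patients:
rng = np.random.default_rng(3)
sim = rng.random((10, 10))
sim = (sim + sim.T) / 2
np.fill_diagonal(sim, 1.0)
labels, vaf = hierarchical_classification(sim, n_groups=3)
```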

18.
The field of linear structural equation modeling with continuous variables is reviewed. Trends in psychometric theory and data analysis across the five decades of publication of Psychometrika are discussed, especially the clarification of concepts of population and sample, explication of the parametric structure of models, delineation of concepts of exploratory and confirmatory data analysis, expansion of statistical theory in psychometrics, estimation via optimization of an explicit objective function, and implementation of general function minimization methods. Developments in the ideas of factor analysis, latent variables, as well as structural and causal modeling are noted. Some major conceptual achievements involving general covariance structure representations, multiple population models, and moment structures are reviewed. The major statistical achievements of normal theory generalized least squares estimation, elliptical and distribution-free estimation, and higher-moment estimation are discussed. Computer programs that implement some of the theoretical developments are described. This review was supported in part by USPHS grants DA00017 and DA01070.

19.
Single-case designs are a class of repeated measures experiments used to evaluate the effects of interventions for small or specialized populations, such as individuals with low-incidence disabilities. There has been growing interest in systematic reviews and syntheses of evidence from single-case designs, but there remains a need to further develop appropriate statistical models and effect sizes for data from the designs. We propose a novel model for single-case data that exhibit nonlinear time trends created by an intervention that produces gradual effects, which build up and dissipate over time. The model expresses a structural relationship between a pattern of treatment assignment and an outcome variable, making it appropriate for both treatment reversal and multiple baseline designs. It is formulated as a generalized linear model so that it can be applied to outcomes measured as frequency counts or proportions, both of which are commonly used in single-case research, while providing readily interpretable effect size estimates such as log response ratios or log odds ratios. We demonstrate the gradual effects model by applying it to data from a single-case study and examine the performance of proposed estimation methods in a Monte Carlo simulation of frequency count data.
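
A minimal sketch of the model's structure (not the paper's estimation methods), simulating frequency counts in an ABAB reversal design where a gradual-exposure variable builds up under treatment and dissipates after withdrawal; the exposure recursion, rate parameter, and coefficient values are all assumed for illustration.

```python
import numpy as np

def gradual_exposure(treatment, rho):
    """Operational 'dose' of treatment at each session: builds toward 1
    during treatment phases and decays toward 0 after withdrawal, at a
    rate governed by rho in (0, 1). This device lets one model express
    effects that build up and dissipate over time."""
    omega = np.empty(len(treatment))
    level = 0.0
    for i, on in enumerate(treatment):
        target = 1.0 if on else 0.0
        level = target + rho * (level - target)
        omega[i] = level
    return omega

# Sketch of the model for a frequency-count outcome in an ABAB design:
# log E[y_t] = b0 + b1 * omega_t, so b1 is interpretable as a log
# response ratio at full exposure. Values below are assumed.
rng = np.random.default_rng(4)
treatment = np.repeat([0, 1, 0, 1], 10)    # ABAB phase pattern
omega = gradual_exposure(treatment, rho=0.6)
b0, b1 = np.log(8.0), np.log(0.5)          # baseline rate 8, log RR
y = rng.poisson(np.exp(b0 + b1 * omega))   # simulated session counts
```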

20.
A network model of logical and semantic structures from which speakers or writers generate linguistic messages at the discourse level is presented. While linguistic structures were considered in developing the model, the semantic and logical networks are defined without reference to linguistic structures and thus may be used to represent knowledge structures acquired from both linguistic and nonlinguistic sources. A second problem addressed is that of determining what logical and semantic information is acquired when a text is understood. To assess acquired knowledge, a procedure is presented for coding a subject's verbal reconstruction of knowledge acquired from a presented text (or other input) against the logical and semantic structure from which the text (or other input) was derived. The procedures are illustrated using data obtained from children who were asked to “retell” simple narrative stories.
