Similar Documents
20 similar documents found (search time: 31 ms)
1.
Consider a set of data consisting of measurements of n objects with respect to p variables displayed in an n × p matrix. A monotone transformation of the values in each column, represented as a linear combination of integrated basis splines, is assumed to be determined by a linear combination of a new set of values characterizing each row object. Two different models are used: one an Eckart-Young decomposition model, the other a multivariate normal model. Examples for artificial and real data are presented. The results indicate that both methods are helpful in choosing dimensionality and that the Eckart-Young model is also helpful in displaying the relationships among the objects and the variables. The results also suggest that the transformations themselves are illuminating.
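The Eckart–Young theorem identifies the best least-squares rank-k approximation of a matrix with its truncated singular value decomposition. A minimal NumPy sketch of that decomposition step (an illustration of the classical result, not the authors' implementation):

```python
import numpy as np

def eckart_young(X, k):
    """Best rank-k approximation of X in the least-squares (Frobenius) sense,
    obtained by truncating the singular value decomposition."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return U[:, :k] @ np.diag(s[:k]) @ Vt[:k, :]

# A rank-2 approximation of a 5 x 3 data matrix:
rng = np.random.default_rng(0)
X = rng.standard_normal((5, 3))
X2 = eckart_young(X, 2)
```

Keeping k equal to the full rank reproduces X exactly; the error at smaller k is carried entirely by the discarded singular values, which is what makes the decomposition useful for judging dimensionality.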

2.
The paper is concerned with the testing of psycholinguistic hypotheses by the use of deductive reasoning tasks. After reviewing some of the problems of interpretation which have arisen with particular reference to conditional rules, an experiment is presented which measures comprehension and verification latencies in addition to response frequencies in a truth table evaluation task. The experiment tests a psycholinguistic hypothesis concerning the different usage of the logically equivalent forms of sentence: If p then q and p only if q, with respect to the temporal order of the events p and q. It is proposed that the former sentence is more natural when the event p precedes the event q in time, and the latter more natural when the opposite temporal relation holds. Although significant support is found for the hypothesis in the analysis of the latency data, it is only distinguished from an alternative explanation by detailed analysis of response frequencies, thus indicating the general usefulness of the paradigm adopted.

3.
Technologies that measure human nonverbal behavior have existed for some time, and their use in the analysis of social behavior has become more popular following the development of sensor technologies that record full-body movement. However, a standardized methodology to efficiently represent and analyze full-body motion is absent. In this article, we present automated measurement and analysis of body motion (AMAB), a methodology for examining individual and interpersonal nonverbal behavior from the output of full-body motion tracking systems. We address the recording, screening, and normalization of the data, providing methods for standardizing the data across recording conditions and across subject body sizes. We then propose a series of dependent measures to operationalize common research questions in psychological research. We present practical examples from several application areas to demonstrate the efficacy of our proposed method for full-body measurements and comparisons across time, space, body parts, and subjects.

4.
Newborns produce spontaneous movements during sleep that are functionally important for their future development. These movements have previously been studied using animal models and, more recently, using movement data from sleep resting-state fMRI (rs-fMRI) scans. The age-related trajectory of statistical features of spontaneous head movements is under-examined. This study quantitatively mapped a developmental trajectory of spontaneous head movements during an rs-fMRI scan acquired during natural sleep in 91 datasets from healthy children from birth to ∼3 years old, using the Open Science Infancy Research upcycling protocol. The youngest participants studied, 2–3-week-old neonates, showed increased noise-to-signal levels as well as lower symmetry features of their movements; noise-to-signal levels were attenuated and symmetry was increased in the older infants and toddlers (all Spearman's rank-order correlations, p < 0.05). Thus, statistical features of spontaneous head movements become more symmetrical and less noisy from birth to ∼3 years. Because spontaneous movements during sleep in early life may trigger new neuronal activity in the cortex, the key outstanding question for in vivo, non-invasive neuroimaging studies in young children is not "How can we correct head movement better?" but rather "How can we represent all important sources of neuronal activity that shape functional connections in the still-developing human central nervous system?"

5.
In learning environments, understanding the longitudinal path of learning is one of the main goals. Cognitive diagnostic models (CDMs) for measurement combined with a transition model for mastery may be beneficial for providing fine-grained information about students' knowledge profiles over time. An efficient algorithm to estimate model parameters would augment the practicality of this combination. In this study, the Expectation–Maximization (EM) algorithm is presented for the estimation of student learning trajectories with the GDINA (generalized deterministic inputs, noisy, "and" gate) model and some of its submodels for the measurement component, and a first-order Markov model for learning transitions is implemented. A simulation study is conducted to investigate the efficiency of the algorithm in estimation accuracy of student and model parameters under several factors—sample size, number of attributes, number of time points in a test, and complexity of the measurement model. Attribute- and vector-level agreement rates as well as the root mean square error rates of the model parameters are investigated. In addition, the computer run times to convergence are recorded. The results show that for a majority of the conditions, the accuracy rates of the parameters are quite promising in conjunction with relatively short computation times. Only for the conditions with relatively low sample sizes and high numbers of attributes does the computation time increase with a reduction in the parameter recovery rate. An application using spatial reasoning data is given. Based on the Bayesian information criterion (BIC), the model fit analysis shows that the DINA (deterministic inputs, noisy, "and" gate) model is preferable to the GDINA with these data.
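The DINA measurement component preferred in the application can be sketched in a few lines: an item is answered correctly with probability 1 − slip when every attribute the Q-matrix requires is mastered, and with the guessing probability otherwise. The function and parameter names below are illustrative, not the paper's code:

```python
import numpy as np

def dina_prob(alpha, q, slip, guess):
    """Item-response probability under the DINA model: a respondent with
    attribute vector `alpha` answers an item with Q-matrix row `q` correctly
    with probability 1 - slip if all required attributes are mastered,
    and with probability `guess` otherwise."""
    mastered = bool(np.all(alpha >= q))  # eta: conjunctive mastery indicator
    return 1.0 - slip if mastered else guess

# A respondent mastering attributes 0 and 1, on an item requiring attribute 1:
p = dina_prob(np.array([1, 1, 0]), np.array([0, 1, 0]), slip=0.1, guess=0.2)
```

In an EM implementation this probability supplies the conditional likelihood of each response pattern given each latent attribute profile at each time point; the first-order Markov component then chains those profiles across time.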

6.
Two reinforcement schedules were used to compare the predictive validity of a linear change model with a functional learning model. In one schedule, termed "convergent," the linear change model predicts convergence to the optimum response, while in the other, termed "divergent," this model predicts that a subject's response will not converge. The functional learning model predicts convergence in both cases. Another factor that was varied was presence or absence of random error or "noise" in the relationship between response and outcome. In the "noiseless" condition, in which no noise is added, a subject could discover the optimum response by chance, so that some subjects could appear to have converged fortuitously. In the "noisy" conditions such chance apparent convergence could not occur. The results did not unequivocally favor either model. While the linear change model's prediction of nonconvergence in the divergent conditions (particularly the "noisy" divergent condition) was not sustained, there was a clear difference in speed of convergence, counter to the prediction inferred from the functional learning model. Evidence that at least some subjects were utilizing a functional learning strategy was adduced from the fact that subjects were able to "map out" the relation between response and outcome quite accurately in a follow-up task. Almost all subjects in the "noisy" conditions had evidently "learned" a strong linear relation, with slope closely matching the veridical one. The data were consistent with a hybrid model assuming a "hierarchy of cognitive strategies" in which more complex strategies (e.g., functional learning) are utilized only when the simpler ones (e.g., a linear change strategy) fail to solve the problem.

7.
Current eye movement data analysis methods rely on defining areas of interest (AOIs). Because AOIs are created and modified manually, variances in their size, shape, and location are unavoidable. These variances affect not only the consistency of the AOI definitions, but also the validity of the eye movement analyses based on the AOIs. To reduce the variances in AOI creation and modification and achieve a procedure to process eye movement data with high precision and efficiency, we propose a template-based eye movement data analysis method. Using a linear transformation algorithm, this method registers the eye movement data from each individual stimulus to a template. Thus, users only need to create one set of AOIs for the template in order to analyze eye movement data, rather than creating a unique set of AOIs for all individual stimuli. This change greatly reduces the error caused by the variance from manually created AOIs and boosts the efficiency of the data analysis. Furthermore, this method can help researchers prepare eye movement data for some advanced analysis approaches, such as iMap. We have developed software (iTemplate) with a graphic user interface to make this analysis method available to researchers.
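The registration step — a linear transformation mapping each stimulus's gaze coordinates onto the template — can be sketched as a least-squares affine fit to corresponding landmark points. The landmarks and names here are assumed for illustration and are not taken from iTemplate:

```python
import numpy as np

def fit_affine(src, dst):
    """Least-squares affine transform mapping landmark points src (n x 2)
    onto the corresponding template points dst (n x 2)."""
    A = np.hstack([src, np.ones((len(src), 1))])   # augment with a bias column
    M, *_ = np.linalg.lstsq(A, dst, rcond=None)    # 3 x 2 transform matrix
    return M

def apply_affine(M, pts):
    """Map raw gaze coordinates into template space."""
    return np.hstack([pts, np.ones((len(pts), 1))]) @ M

# Toy example: the template is the stimulus scaled by 2 and shifted by 5.
src = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
dst = src * 2 + 5
M = fit_affine(src, dst)
fix = apply_affine(M, np.array([[0.5, 0.5]]))  # a fixation at stimulus centre
```

Once every stimulus's fixations live in template coordinates, a single set of template AOIs suffices for all of them.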

8.
The paper discusses the mathematical foundations of a technique of multidimensional scaling, generalizing Guttman scaling, in which the structure of the embedding space relies only on ordinal concepts. An empirical relation is represented as an intersection of a minimal number (called bidimension) of Guttman relations. Fairly complete results are given for the cases of bidimensions 1 and 2. In the general case, the main results are based on the equivalence between the bidimension and the dimension of a certain partial order. A characterization of the bidimension as the chromatic number of some hypergraph is also provided.

9.
The accuracy with which people execute wrist and elbow movements was measured under three main conditions: (i) single-jointed (wrist or elbow) movements to targets, (ii) dual-jointed (wrist and elbow) movements to targets, and (iii) components of dual-jointed movements to targets, when the task for the subject was to perform the elbow or wrist constituent of the action in isolation, without displacing the second joint. Elbow precision was significantly worse under component than dual conditions, which is compatible with the notion that wrist and elbow activity are conjugately, rather than independently, programmed when a dual-jointed action is performed. The pattern of wrist accuracy was divergent, but possible reasons for this were discussed. In all cases, error was measured in terms of deviation from perfect posture; using this index, the hypothesis that incorporating more moving joints into an action serves to increase movement complexity and jeopardise precision was tested, but the results were ambiguous. Discussion also centered on the problems of using performance data to infer changes in motor programming, and the need for rigorous conceptualisation and research in this area.

10.
It is often believed that the aftereffect of visual movement (MAE) is more-or-less dependent on image movement. Modern explanation of MAE in terms of motion-sensitive mechanisms in the visual pathway assumes this. However, it has long been known that MAE can be influenced by other factors of stimulation, and particularly some that can be labeled as relative. So, for example, MAE may not be observed unless more than one direction of movement is present in the eliciting stimulation, and MAE in an area may elicit an opposite MAE in an adjoining unadapted area. It is probable that the overemphasis on image movement has arisen because of the common use of multidirectional adapting movement and because of an assumption that patterned areas adjoining the MAE display do not have much effect on MAE. It is speculated that relative movement data for MAE may reflect a mechanism involved in detection of object motion: since image movement and eye movement do not in themselves adequately explain this process, it must be supposed that relative movement, in conjunction with the configuration of the retinal image, is important.

11.
When studying online movement adjustments, one of the interesting parameters is their latency. We set out to compare three different methods of determining the latency: the threshold, confidence interval, and extrapolation methods. We simulated sets of movements with different movement times and amplitudes of movement adjustments, all with the same known latency. We applied the three different methods in order to determine when the position, velocity, and acceleration of the adjusted movements started to deviate from the values for unperturbed movements. We did so both for averaged data and for the data of individual trials. We evaluated the methods on the basis of their accuracy and precision, and according to whether the latency was influenced by the intensity of the movement adjustment. The extrapolation method applied to average acceleration data gave the most reliable estimates of latency, according to these criteria.
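The extrapolation method can be illustrated as follows: fit a line to the rising phase of the adjusted-minus-unperturbed acceleration difference and extrapolate it back to its zero crossing. The thresholds and the simulated signal below are invented for the sketch, not taken from the paper:

```python
import numpy as np

def extrapolation_latency(t, diff, lo=0.3, hi=0.7):
    """Estimate adjustment latency: fit a line to the rising phase of the
    difference signal (samples between `lo` and `hi` of its peak, before the
    peak) and extrapolate back to zero. `diff` is the adjusted-minus-
    unperturbed acceleration; the band limits are illustrative choices."""
    peak = diff.max()
    rising = (diff > lo * peak) & (diff < hi * peak) & (t < t[diff.argmax()])
    slope, intercept = np.polyfit(t[rising], diff[rising], 1)
    return -intercept / slope  # time at which the fitted line crosses zero

# Simulated difference signal that starts to deviate at t = 0.15 s:
t = np.linspace(0.0, 0.5, 501)
diff = np.where(t > 0.15, (t - 0.15) * 10.0, 0.0)
latency = extrapolation_latency(t, diff)
```

Because the fit uses only the steep, high-signal part of the curve, the estimate is less sensitive to noise near baseline than a simple threshold crossing.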

12.
13.
Complex movement (CM) refers to the representation of a goal-oriented action and is classified as either transitive (use of tools) or intransitive (communication gestures). Both types of CM have three specific components: temporal, spatial, and content, which are subdivided into specific error types (SET). Since there is debate regarding the contribution of each brain hemisphere for the types of CM, our objective was to describe the brain lateralization of components and SET of transitive and intransitive CM. We studied 14 patients with a left hemisphere stroke (LH), 12 patients with a right hemisphere stroke (RH), and 16 control subjects. The Florida Apraxia Screening Test-Revised (FAST-R, Rothi et al., 1988) was used for the assessment of CM. Both clinical groups showed a worse performance than the control group on the total FAST-R and transitive movement scores (p < 0.001). Failures in Spatial and Temporal components were found in both clinical groups, but only LH patients showed significantly more Content errors (p < 0.01) than the control group. Also, only the LH group showed a higher number of errors for the intransitive movements score (p = 0.017), due to lower scores in the content component, compared to the control group (p = 0.04). Transitive and intransitive CMs differ in their neurocognitive representation; transitive CM shows a bilateral distribution of its components when compared to intransitive CM, which shows a preferential left hemisphere representation. This could result from higher neurocognitive demands for movements that require use of tools, compared with more automatic communication gestures.

14.
An important element of learning from examples is the extraction of patterns and regularities from data. This paper investigates the structure of patterns in data defined over discrete features, i.e. features with two or more qualitatively distinct values. Any such pattern can be algebraically decomposed into a spectrum of component patterns, each of which is a simpler or more atomic "regularity." Each component regularity involves a certain number of features, referred to as its degree. Regularities of lower degree represent simpler or more coarse patterns in the original pattern, while regularities of higher degree represent finer or more idiosyncratic patterns. The full spectral breakdown of a pattern into component regularities of minimal degree, referred to as its power series, expresses the original pattern in terms of the regular rules or patterns it obeys, amounting to a kind of "theory" of the pattern. The number of regularities at various degrees necessary to represent the pattern is tabulated in its power spectrum, which expresses how much of a pattern's structure can be explained by regularities of various levels of complexity. A weighted mean of the pattern's spectral power gives a useful numeric summary of its overall complexity, called its algebraic complexity. The basic theory of algebraic decomposition is extended in several ways, including algebraic accounts of the typicality of individual objects within concepts, and estimation of the power series from noisy data. Finally some relations between these algebraic quantities and empirical data are discussed.

15.
Psychophysical studies with infants or with patients often are unable to use pilot data, training, or large numbers of trials. To evaluate threshold estimates under these conditions, computer simulations of experiments with small numbers of trials were performed by using psychometric functions based on a model of two types of noise: stimulus-related noise (affecting slope) and extraneous noise (affecting upper asymptote). Threshold estimates were biased and imprecise when extraneous noise was high, as were the estimates of extraneous noise. Strategies were developed for rejecting data sets as too noisy for unbiased and precise threshold estimation; these strategies were most successful when extraneous noise was low for most of the data sets. An analysis of 1,026 data sets from visual function tests of infants and toddlers showed that extraneous noise is often considerable, that experimental paradigms can be developed that minimize extraneous noise, and that data analysis that does not consider the effects of extraneous noise may underestimate test-retest reliability and overestimate interocular differences.
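The two-noise model can be sketched as a psychometric function in which stimulus-related noise sets the slope and extraneous noise lowers the upper asymptote below 1. The logistic form and the parameter names below are assumptions for illustration, not the paper's parameterization:

```python
import numpy as np

def psychometric(x, threshold, slope, lapse):
    """Two-noise psychometric function for a 2AFC task: stimulus-related
    noise sets `slope`; extraneous noise (`lapse`) makes the observer
    respond at chance on a fraction of trials, lowering the upper
    asymptote from 1 to 1 - lapse/2."""
    base = 0.5 + 0.5 / (1 + np.exp(-slope * (x - threshold)))  # 0.5 -> 1
    return (1 - lapse) * base + lapse * 0.5  # lapsed trials are at chance

x = np.linspace(-3, 3, 7)
p_clean = psychometric(x, threshold=0.0, slope=2.0, lapse=0.0)
p_noisy = psychometric(x, threshold=0.0, slope=2.0, lapse=0.2)
```

With lapse = 0.2 the curve never exceeds 0.9 no matter how strong the stimulus, which is exactly why high extraneous noise biases threshold estimates that assume a perfect upper asymptote.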

16.
Nonparametric regression techniques, which estimate functions directly from noisy data rather than relying on specific parametric models, now play a central role in statistical analysis. We can improve the efficiency and other aspects of a nonparametric curve estimate by using prior knowledge about general features of the curve in the smoothing process. Spline smoothing is extended in this paper to express this prior knowledge in the form of a linear differential operator that annihilates a specified parametric model for the data. Roughness in the fitted function is defined in terms of the integrated square of this operator applied to the fitted function. A fast O(n) algorithm is outlined for this smart smoothing process. Illustrations are provided of where this technique proves useful.
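A discrete analogue of the operator-penalty idea: choose a difference operator that annihilates the parametric model (second differences annihilate straight lines) and penalize the integrated square of its output. This is a sketch of the principle on gridded data, not the paper's O(n) spline algorithm:

```python
import numpy as np

def operator_smooth(y, lam, order=2):
    """Discrete analogue of spline smoothing with a linear differential
    operator: penalize the squared `order`-th difference of the fit, which
    annihilates polynomials of degree below `order` (order=2 leaves straight
    lines unpenalized). Solves (I + lam * D'D) f = y."""
    n = len(y)
    D = np.diff(np.eye(n), n=order, axis=0)  # (n - order) x n difference matrix
    return np.linalg.solve(np.eye(n) + lam * D.T @ D, y)

# Data lying in the operator's null space pass through untouched:
t = np.linspace(0.0, 1.0, 50)
line = 2.0 * t + 1.0
smoothed = operator_smooth(line, lam=1e6)
```

Because the penalty vanishes on the parametric model, even an enormous smoothing parameter leaves a straight line unchanged; deviations from the model are what get shrunk, which is the sense in which the prior knowledge is built into the smoother.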

17.
A sample of 315 valid Minnesota Multiphasic Personality Inventory (MMPI) protocols was selected from inpatient files and scored for both the MMPI-168 and Faschingbauer Abbreviated MMPI short forms. Each short form was then factor-analyzed by a principal axis strategy with varimax rotation. The six factors extracted from each short form were then compared as to their similarity by use of the s index. This procedure showed five of the six factors in each short form as having a significant relationship in the pattern of salient variables they shared with the complementary factors of the other form. These data suggest that both short forms, though based on different construction methodologies, tap the same underlying personality attributes. Future research is suggested to replicate these results and extend them to the full MMPI.

18.
The Coombs and Huang (1970) distributive theory of perceived risk is reinterpreted as a more robust statistical hypothesis to describe central tendencies of noisy replicates drawn from a homogeneous population. Barron's (1976) sample of 13 business faculty rank-order responses are pooled to obtain a replicated complete 3 × 3 × 3 design which is analyzed by a new stochastic conjoint measurement (SCJM) approach to axiomatic data analysis. SCJM implements statistical analogues of the deterministic Krantz and Tversky (1971) diagnostics for error-free data. SCJM diagnosis based on a series of one-sided nonparametric two-cell comparisons at the α = 0.04 level supports the hypothesis of interaction between the expected-value and number-of-plays attributes of gambles yet contradicts Barron's odd-even effects hypothesis. SCJM diagnosis with two-cell α < 0.04 supports an additive statistical model.

19.
The goal of this experiment was to test a potentially useful nonlinear method for smoothing noisy position data, a problem often encountered in data analysis. This algorithm (7RY) uses a nonlinear smoothing function and behaves like a low-pass filter, automatically removing aberrant points; it is used prior to differentiation of time series so that usable acceleration information can be obtained. The experimental procedure comprises position data collection along with direct accelerometric data recording. From the position-time data, (a) the 7RY and (b) Butterworth algorithms have been used to compute twice-differentiated acceleration curves. The directly recorded acceleration measurements were then compared with the acceleration computed from the original position data. Although the results indicated an overall good fit between the recorded and the calculated acceleration curves, only the nonlinear method led to reliable acceleration curves when aberrant points were present in the position data.
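The 7RY routine itself is not reproduced here; the sketch below shows the generic behaviour of a nonlinear (running-median) smoother of the kind it relies on, which removes isolated aberrant points outright instead of merely attenuating them as a linear Butterworth filter would:

```python
import numpy as np

def median_despike(x, window=7):
    """Generic running-median smoother: each sample is replaced by the
    median of its neighbourhood, so isolated aberrant points vanish
    entirely rather than leaking into the differentiated signal.
    Window length and edge handling are illustrative choices."""
    half = window // 2
    padded = np.pad(x, half, mode="edge")
    return np.array([np.median(padded[i:i + window]) for i in range(len(x))])

# An isolated spike in otherwise smooth position data disappears:
t = np.linspace(0.0, 1.0, 101)
pos = np.sin(2 * np.pi * t)
pos[50] += 5.0  # aberrant sample
clean = median_despike(pos)
```

A linear low-pass filter would smear this spike over its neighbours and corrupt the twice-differentiated acceleration; the median smoother simply drops it, which is the behaviour the experiment credits for the nonlinear method's reliability.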

20.
The Marshak bid procedure shows that more money is required to induce a S to exchange gamble a for gamble e and then e for b if e differs from a only in the winning amount and differs from b only in the probability of winning, rather than if e differs from a only in the probability of winning. This is contrary to most theories of risky decision making, which imply that the amount of money necessary to effect a 2-step exchange between a and b should be independent of the intermediary gamble. One might attempt to explain the effect by saying that the S attends to the dimension which differs between gambles. But the explanation is untenable if one assumes that states of attention are defined as weightings of the dimensions. An alternative explanation is put forward which basically assumes that winning amounts mask differences in probability of winning more than vice versa. The formalization of the theory is given in terms of Fechnerian integration over imperfect differentials.


Copyright © 北京勤云科技发展有限公司 (Beijing Qinyun Technology Development Co., Ltd.) | 京ICP备09084417号