Similar Documents
20 similar documents found.
1.
Psychology principally utilizes nomothetic, interindividual approaches to model phenomena of interest. However, these approaches do not always capture the processes at work in each individual in the sample. If the research is focused on individual processes, confining analysis to the idiographic level may be more appropriate. One way to overcome the nomothetic inability to capture idiographic processes is to identify those participants who meet the criteria of ergodicity and restrict analysis to the resulting sample. Under these conditions it is quantitatively justifiable to create a group model without concern that it may fail to represent each member's idiographic process. In this study we explore the utility of such a method by (a) applying an ergodic pooling test to a sample of dyads (N = 128) who provided daily (T = 50) self-reports of affect, (b) applying an ergodic pooling test to samples (N = 4) of simulated ergodic time series data (T = 50, 250, and 1,000), (c) modeling dyads and simulated subgroups identified as ergodic, and (d) comparing the results from a model specified at the group level with those from models specified at the individual level.
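As a rough illustration of the underlying idea only (a sketch, not the ergodic pooling test itself), one can compare each individual's lag-1 autocorrelation with a pooled value; a group model is defensible only when the individual dynamics agree. All data below are simulated stand-ins.

```python
import numpy as np

rng = np.random.default_rng(2)

def lag1_autocorr(x):
    """Lag-1 autocorrelation of a single time series."""
    x = x - x.mean()
    return float((x[:-1] * x[1:]).sum() / (x * x).sum())

# Hypothetical daily affect series (T = 50) for four individuals.
series = [rng.standard_normal(50) for _ in range(4)]
individual = [lag1_autocorr(s) for s in series]
pooled = float(np.mean(individual))   # group-level (pooled) estimate

# If the individual estimates cluster tightly around the pooled value,
# a single group model is less likely to misrepresent any member.
print(individual, pooled, max(individual) - min(individual))
```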

2.

The dependence of the normal-state resistivity, the resistive superconducting transition (Tc, ΔTc), and the upper critical-field slope dHc2/dT|T=Tc on density has been investigated for several YBa2Cu3O9−x (x ≈ 2.1) samples. The resistivity decreases rapidly with increasing density, whereas Tc and ΔTc are rather insensitive to a change in density. dHc2/dT|T=Tc depends sensitively on the preparation conditions. The implications of these results both for the evaluation of the parameters relevant to understanding the nature of superconductivity and for the technological applications of granular superconductors are briefly discussed.

3.
Dialogue has its origins in joint activities, which it serves to coordinate. Joint activities, in turn, usually emerge in hierarchically nested projects and subprojects. We propose that participants use dialogue to coordinate two kinds of transitions in these joint projects: vertical transitions, or entering and exiting joint projects; and horizontal transitions, or continuing within joint projects. The participants help signal these transitions with project markers, words such as uh-huh, m-hm, yeah, okay, or all right. These words have been studied mainly as signals of listener feedback (back-channel signals) or turn-taking devices (acknowledgment tokens). We present evidence from several types of well-defined tasks that they are also part of a system of contrasts specialized for navigating joint projects. Uh-huh, m-hm, and yeah are used for horizontal transitions, and okay and all right for vertical transitions.

4.
The paper is an attempt to make sense of Hegel's notion of aufheben. The double meaning of aufheben and its alleged 'rise above the mere "either-or" of understanding' have been taken, by some, to constitute a criticism of the logic of either-or. It is argued, on the contrary, that Hegel's notion of aufheben, explicated in its primary and philosophical context, turns out to be a substantiation of that logic. The intelligibility of the formula of either-or depends, for example, on the categories of Being and Not-Being. But if these categories are regarded as particular finite determinations themselves subject to the formula of either-or, then the formula, far from being intelligible, 'falls apart'. Hegel is arguing, in other words, that if we are to substantiate the logic of either-or, we must, at the same time, 'rise above' that logic. The role of aufheben is then considered in the special sciences. Here it is argued that we must distinguish between empirical transitions, governed by the finite determinations of things, and logical or dialectical transitions, governed by considerations of the intelligibility of the notions involved. Applying the notion of aufheben to the former transitions suggests wrongly that empirical transitions have an objective or philosophic necessity. Finally, the place of 'immanent transformation' in the context of aufheben is examined. It is concluded that if there is to be a transformation, then a distinction must be drawn between thought and its content, but then the transformation cannot be regarded as immanent.

5.
The particle-size distribution in silica powder prepared by the sol–gel method has been determined by dynamic light scattering analysis. The average diameter of the particles was found to be 250 nm. Using a low-temperature nitrogen adsorption–desorption technique, it was found that the synthesised powder can be classified as a mesoporous material. Polystyrene/silica composite films were fabricated by casting from o-xylene solutions. It was found, using thermogravimetry, that incorporation of silica increases both the onset temperature of polymer degradation and the temperature at which the maximum rate of weight loss occurs. Using differential scanning calorimetry, the phase transitions from the glassy state to the elastic one were studied for the polymeric materials. New data on the effect of silica on the glass-transition temperature, Tg, of composites with a low weight fraction of SiO2 were obtained; specifically, we found a non-monotonic concentration dependence of Tg. The present results support employing silica as an effective filler for producing polymer composites with enhanced thermal properties.

6.
This article explains in detail the state space specification and estimation of first and higher-order autoregressive moving-average models in continuous time (CARMA) in an extended structural equation modeling (SEM) context for N = 1 as well as N > 1. To illustrate the approach, simulations will be presented in which a single panel model (T = 41 time points) is estimated for a sample of N = 1,000 individuals as well as for samples of N = 100 and N = 50 individuals, followed by estimating 100 separate models for each of the one-hundred N = 1 cases in the N = 100 sample. Furthermore, we will demonstrate how to test the difference between the full panel model and each N = 1 model by means of a subject-group-reproducibility test. Finally, the proposed analyses will be applied in an empirical example, in which the relationships between mood at work and mood at home are studied in a sample of N = 55 women. All analyses are carried out by ctsem, an R-package for continuous time modeling, interfacing to OpenMx.
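For readers unfamiliar with the continuous-time setup, here is a minimal sketch (not the ctsem API) of the state-space identity at the heart of first-order continuous-time models: the discrete-time autoregressive matrix implied by a drift matrix A over an interval Δt is the matrix exponential exp(AΔt). The drift values below are hypothetical.

```python
import numpy as np
from scipy.linalg import expm

# Hypothetical 2-process drift matrix (e.g., mood at work / mood at home):
# negative diagonal entries give mean reversion; off-diagonal entries are
# cross-effects between the two processes.
A = np.array([[-0.5,  0.2],
              [ 0.1, -0.4]])

# Discrete-time autoregressive matrix implied over an interval dt:
# x(t + dt) = expm(A * dt) @ x(t) + noise
for dt in (1.0, 2.0):
    print(f"dt = {dt}:\n{expm(A * dt)}")

# Since expm(A * 2dt) equals expm(A * dt) applied twice, parameters remain
# consistent across unequally spaced occasions -- a key advantage of the
# continuous-time specification over discrete-time panel models.
```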

7.

Survey data often contain many variables. Structural equation modeling (SEM) is commonly used in analyzing such data. However, conventional SEM methods are not crafted to handle data with a large number of variables (p). A large p can cause Tml, the most widely used likelihood ratio statistic, to depart drastically from the assumed chi-square distribution even with normally distributed data and a relatively large sample size N. A key element affecting this behavior of Tml is its mean bias. The focus of this article is to determine the cause of the bias. To this end, empirical means of Tml via Monte Carlo simulation are used to obtain the empirical bias. The most effective predictors of the mean bias are subsequently identified and their predictive utility examined. The results are further used to predict type I errors of Tml. The article also illustrates how to use the obtained results to determine the required sample size for Tml to behave reasonably well. A real data example is presented to show the effect of the mean bias on model inference as well as how to correct the bias in practice.
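A minimal sketch of the empirical-bias idea, assuming stand-in simulated values rather than actual SEM output: the mean bias is estimated as the Monte Carlo mean of Tml minus the nominal degrees of freedom.

```python
import numpy as np

rng = np.random.default_rng(1)

# Stand-in Monte Carlo replications of T_ml under a correct model: with a
# large number of variables p, the empirical mean typically exceeds the
# nominal chi-square degrees of freedom (positive mean bias).
df = 50
t_ml_sim = rng.chisquare(df, size=5000) * 1.15   # hypothetical inflation

mean_bias = t_ml_sim.mean() - df
print(mean_bias)   # the empirical bias the article sets out to predict
```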

8.
The pseudodiagnosticity task has been used as an example of participants' tendency to assess Bayesian constraints incorrectly when evaluating data, and of their failure to consider alternative hypotheses in a probabilistic inference task. In the task, participants are given one value, the anchor value, corresponding to P(D1|H), and may choose one other value: either P(D1|¬H), P(D2|H), or P(D2|¬H). Most participants select P(D2|H) or P(D2|¬H), choices that have been considered inappropriate (and called pseudodiagnostic) because only P(D1|¬H) allows use of Bayes' theorem. We present a new analysis based on probability intervals and show that selection of either P(D2|H) or P(D2|¬H) is in fact pseudodiagnostic, whereas choice of P(D1|¬H) is diagnostic. Our analysis shows that choosing the pseudodiagnostic values actually increases uncertainty regarding the posterior probability of H, supporting the original interpretation of the experimental findings on the pseudodiagnosticity task. The argument illuminates the general proposition that evolutionarily adaptive heuristics for Bayesian inference can be misled in some task situations.
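The core of the argument can be made concrete with a small Bayes calculation (a sketch, not the authors' code; all probabilities are hypothetical): with the anchor P(D1|H) fixed, learning P(D1|¬H) pins down the posterior exactly, whereas leaving it free only bounds the posterior to an interval.

```python
def posterior(p_h, p_d_given_h, p_d_given_not_h):
    """Bayes' theorem: P(H|D) = P(D|H)P(H) / [P(D|H)P(H) + P(D|not-H)P(not-H)]."""
    num = p_d_given_h * p_h
    return num / (num + p_d_given_not_h * (1.0 - p_h))

p_h = 0.5       # hypothetical uniform prior over H and its alternative
anchor = 0.8    # the given anchor value P(D1|H)

# Diagnostic choice: learning P(D1|not-H) fixes the posterior exactly.
print(posterior(p_h, anchor, 0.3))          # -> 0.727...

# Pseudodiagnostic choices leave P(D1|not-H) unknown, so the posterior is
# only bounded to an interval -- uncertainty about H remains.
print(posterior(p_h, anchor, 1.0),          # lower bound: 0.444...
      posterior(p_h, anchor, 0.0))          # upper bound: 1.0
```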

9.
Analogy and similarity are central phenomena in human cognition, involved in processes ranging from visual perception to conceptual change. To capture this centrality, a model of comparison must be able to integrate with other processes and handle the size and complexity of the representations required by the tasks being modeled. This paper describes extensions to the Structure-Mapping Engine (SME) since its inception in 1986 that have increased its scope of operation. We first review the basic SME algorithm, describe psychological evidence for SME as a process model, and summarize its role in simulating similarity-based retrieval and generalization. Then we describe five techniques now incorporated into SME that have enabled it to tackle large-scale modeling tasks: (a) greedy merging rapidly constructs one or more best interpretations of a match in polynomial time, O(n² log n); (b) incremental operation enables mappings to be extended as new information is retrieved or derived about the base or target, to model situations where information in a task is updated over time; (c) ubiquitous predicates model the varying degrees to which items may suggest alignment; (d) structural evaluation of analogical inferences models aspects of plausibility judgments; (e) match filters enable large-scale task models to communicate constraints to SME to influence the mapping process. We illustrate via examples from published studies how these extensions enable SME to capture a broader range of psychological phenomena than before.
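As a rough illustration of the greedy-merging idea only (far simpler than SME's actual kernel-merging algorithm), a toy version sorts local match hypotheses by score and keeps each one that respects the one-to-one mapping constraint. All names and scores below are hypothetical.

```python
# Toy greedy merge over match hypotheses (not the real SME): take local
# matches in descending score order; keep each one whose base and target
# items are not already mapped (one-to-one constraint).
hypotheses = [
    ("sun", "nucleus", 0.9),
    ("planet", "electron", 0.8),
    ("sun", "electron", 0.3),   # conflicts with both picks above
]

mapped_base, mapped_target, interpretation = set(), set(), []
for b, t, score in sorted(hypotheses, key=lambda h: -h[2]):
    if b not in mapped_base and t not in mapped_target:
        interpretation.append((b, t, score))
        mapped_base.add(b)
        mapped_target.add(t)

print(interpretation)  # greedy "best interpretation" of the match
```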

10.

Vortex dynamics in La1.86Sr0.14CuO4 have been studied by measuring ρc∥i(T, H), where ρc∥i is the c-axis resistivity for H ∥ i (i = c or a-b). We argue that, at temperatures higher than the irreversibility temperature Tirr, the usual vortex picture breaks down owing to the thermal motion of vortices, resulting in a T- and Tirr-dependent anisotropy parameter γ. After taking the dependence of γ on T and Tirr into account, we show that at each given temperature the ρc∥a-b(T, H) data can be rescaled onto the corresponding ρc∥c(T, H) curves. This scaling property clearly indicates that a Lorentz-force-free mechanism is responsible for ρc∥a-b(T, H). Furthermore, we show that the measured ρc∥a-b(T, H) data can be explained in terms of the recently developed extended Josephson coupling model, which is verified by rescaling the ρc∥a-b(T) data for various fields onto a single curve.

11.
We examined the occurrence of faking on a rating situational judgment test (SJT) by comparing SJT scores and response styles of the same individuals across two naturally occurring situations. An SJT for medical school selection was administered twice to the same group of applicants (N = 317) under low-stakes (T1) and high-stakes (T2) circumstances. The SJT was scored using three different methods that were differentially affected by response tendencies. Applicants used significantly more extreme responding on T2 than T1. Faking (higher SJT score on T2) was only observed for scoring methods that controlled for response tendencies. Scoring methods that do not control for response tendencies introduce systematic error into the SJT score, which may lead to inaccurate conclusions about the existence of faking.

12.
Regular dynamic logic is extended by the program construct α∩β, meaning α and β executed in parallel. In a semantics due to Peleg, each command α is interpreted as a set of pairs (s, T), with T being the set of states reachable from s by a single execution of α, possibly involving several processes acting in parallel. The modalities ⟨α⟩ and [α] are given the interpretations: ⟨α⟩A is true at s iff there exists T with s Rα T and A true throughout T; and [α]A is true at s iff for all T, if s Rα T then A is true throughout T. These interpretations make ⟨α⟩ and [α] no longer interdefinable via negation, as they are in the regular case. We prove that the logic defined by this modelling is finitely axiomatisable and has the finite model property, hence is decidable. This requires the development of a new theory of canonical models and filtrations for reachability relations.
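A minimal sketch of Peleg-style reachability semantics (an illustrative encoding, not from the paper): interpreting a command as a set of pairs (s, T) makes the two modalities come apart, since the box is vacuously true at a state that reaches no T while the diamond is false there. States, formula extension, and the relation below are hypothetical.

```python
# Illustrative Peleg-style semantics: a command relates a state s to sets
# T of states reachable by one (possibly parallel) execution from s.
R = {
    1: [frozenset({2, 3}), frozenset({4})],
    2: [],                      # state 2 reaches no T at all
}

A = {2, 3}                      # states where formula A holds (hypothetical)

def diamond(R, s, A):
    """<a>A at s: some nonempty T with s R T and A true throughout T."""
    return any(T and T <= A for T in R.get(s, []))

def box(R, s, A):
    """[a]A at s: every T with s R T has A true throughout T."""
    return all(T <= A for T in R.get(s, []))

print(diamond(R, 1, A), box(R, 1, A))   # True False
print(diamond(R, 2, A), box(R, 2, A))   # False True: the box holds
# vacuously at 2, so [a]A is not equivalent to not-<a>(not-A) here.
```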

13.
Phoneme labeling and discrimination experiments were conducted with a continuum of voiced stops produced by a terminal-analog speech synthesizer. The stops ranged from |b| to |d|; only the second-formant (F2) transitions changed from one sound to another. (A formant is energy concentrated in a narrow frequency range.) In the labeling experiment, conducted to locate the phoneme boundary, subjects identified the individual stimuli as |b| or |d|. In discrimination, difference and identity pairs were presented, with the alternative responses "same" and "different". This allows separate consideration of discrimination (different/Difference) and recognition (same/Identity) hits, and also analysis of the data in accordance with the theory of signal detectability. The sounds were discriminated both with and without F1 and F3, which contained no discriminatory information but are responsible for perceived similarity to speech. With F1 and F3, sensitivity (d′) was highest at the |b-d| boundary, but without F1 and F3 this was not true. Spectral analysis of the sounds, both with and without F1 and F3, revealed a phonemic energy discontinuity in the 1/3 octave around the F2 steady-state frequency (1250 Hz). It therefore seems probable that subjects listened to the frequencies containing phonemic information when F1 and F3 were included, but not when they were omitted. In spite of the high sensitivity at the |b-d| boundary, recognition hits (same/Identity) were lowest there: a pair at the boundary had to sound less like a difference to be called "different" than a pair away from the boundary. Indications, then, are quite strong that auditory frequency selection aids the perception of speech, and it is clear that a strategy of criterion lowering aids it as well.
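The signal-detection quantities mentioned here can be computed with the standard formulas (a sketch with hypothetical rates, not the study's data): sensitivity d′ separates discriminability from the response criterion, whose lowering at the boundary the abstract describes.

```python
from statistics import NormalDist

z = NormalDist().inv_cdf  # inverse standard-normal CDF

def d_prime(hit, fa):
    """Sensitivity: d' = z(hit rate) - z(false-alarm rate)."""
    return z(hit) - z(fa)

def criterion(hit, fa):
    """Response criterion c; more negative = more willing to say 'different'."""
    return -(z(hit) + z(fa)) / 2

# Hypothetical rates: at the |b-d| boundary, sensitivity is high but the
# criterion is lowered, so identical pairs are more often called "different".
print(d_prime(0.85, 0.45), criterion(0.85, 0.45))  # boundary pair
print(d_prime(0.55, 0.20), criterion(0.55, 0.20))  # within-category pair
```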

14.

This paper focuses on shifts in perceived job content in two occupational panels (office technology workers and machine operators) during the first year of their professional career. A shortened version of Fine's Functional Job Analysis is used to measure perceived job content, and at each of the two time stages (i.e., stage T1 and stage T2) the data related to the three domains of People, Things, and Data are optimally scaled and then recoded to a categorical indicator of job content complexity. Shifts in perceived job content are studied by applying log-linear analysis to the multidimensional contingency table obtained by cross-tabulating the T1 and T2 data. The results indicate that there is no general progress or decline in job content activities over time, except for the Things domain. The findings also suggest that there is substantial symmetry between job progression and degression, and that the bulk of content switches are to adjacent levels of complexity.

15.
Eric Barnes, Erkenntnis 45(1):69–89, 1996
Predictivism holds that, where evidence E confirms theory T, E confirms T more strongly when E is predicted on the basis of T and subsequently confirmed than when E is known in advance of T's formulation and used, in some sense, in the formulation of T. Predictivism has lately enjoyed some strong supporting arguments from Maher (1988, 1990, 1993) and Kahn, Landsberg, and Stockman (1992). Despite the many virtues of the analyses these authors provide, it is my view that they (along with all other authors on this subject) have failed to understand a fundamental truth about predictivism: the existence of a scientist who predicted T prior to the establishment that E is true has epistemic import for T (once E is established) only in connection with information regarding the social milieu in which the T-predictor is located and information regarding how the T-predictor was located. The aim of this paper is to show that predictivism is ultimately a social phenomenon that requires a social level of analysis, a thesis I deem social predictivism. For comments and criticisms I am indebted to Doug Ehring, Mark Heller, Jean Kazez, Patrick Maher, and Alastair Norcross. Special thanks are due to Wayne Woodward for help with the proof in Section 7.

16.
Survey data often contain many variables. Structural equation modeling (SEM) is commonly used in analyzing such data. With the nonnormally distributed data typical in practice, a rescaled statistic Trml proposed by Satorra and Bentler has been recommended in the SEM literature. However, Trml has been shown to be problematic when the sample size N is small and/or the number of variables p is large. There does not exist a reliable test statistic for SEM with small N or large p, especially with nonnormally distributed data. Following the principle of Bartlett correction, this article develops empirical corrections to Trml so that the mean of the empirically corrected statistic approximately equals the degrees of freedom of the nominal chi-square distribution. Results show that the empirically corrected statistics control type I errors reasonably well even when N is smaller than 2p, where Trml may reject the correct model 100% of the time even for normally distributed data. The application of the empirically corrected statistics is illustrated via a real data example.
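A minimal sketch of a Bartlett-style empirical correction, assuming stand-in simulated statistics rather than real SEM output: choose a factor that makes the corrected statistic's mean match the nominal degrees of freedom.

```python
import numpy as np

rng = np.random.default_rng(0)

def empirical_correction_factor(t_sim, df):
    """Bartlett-style factor: makes the mean of the corrected statistic
    approximately equal the nominal chi-square degrees of freedom."""
    return df / np.mean(t_sim)

# Stand-in simulated values of an inflated test statistic (hypothetical).
df = 20
t_sim = rng.chisquare(df, size=2000) * 1.3

c = empirical_correction_factor(t_sim, df)
t_observed = 31.0            # hypothetical value from the fitted model
print(c, c * t_observed)     # corrected statistic, referred to chi2(df)
```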

17.
In applications of item response theory, assessment of model fit is a critical issue. Recently, limited-information goodness-of-fit testing has received increased attention in the psychometrics literature. In contrast to full-information test statistics such as Pearson's X² or the likelihood ratio G², these limited-information tests utilize lower-order marginal tables rather than the full contingency table. A notable example is Maydeu-Olivares and colleagues' M2 family of statistics based on univariate and bivariate margins. When the contingency table is sparse, tests based on M2 retain better Type I error rate control than the full-information tests and can be more powerful. While in principle the M2 statistic can be extended to test hierarchical multidimensional item factor models (e.g., bifactor and testlet models), the computation is non-trivial. To obtain M2, a researcher often has to obtain (many thousands of) marginal probabilities, derivatives, and weights, each of which must be approximated with high-dimensional numerical integration. We propose a dimension reduction method that takes advantage of the hierarchical factor structure so that the integrals can be approximated far more efficiently. We also propose a new test statistic that can be substantially better calibrated and more powerful than the original M2 statistic when the test is long and the items are polytomous. We use simulations to demonstrate the performance of our new methods and illustrate their effectiveness with applications to real data.
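A small sketch of what "univariate and bivariate margins" means for binary items (illustrative only; not the proposed M2 computation; the data are simulated stand-ins):

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical binary responses (500 persons x 4 items). Full-information
# statistics use all 2**4 = 16 response-pattern proportions; M2-type
# statistics use only the lower-order margins below.
Y = rng.integers(0, 2, size=(500, 4))

univariate = Y.mean(axis=0)        # P(Y_j = 1) for each item j
bivariate = (Y.T @ Y) / len(Y)     # off-diagonal: P(Y_j = 1, Y_k = 1)
print(univariate)
print(bivariate)
```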

18.
Objective: Compensatory health beliefs (CHBs), defined as beliefs that healthy behaviours can compensate for unhealthy behaviours, may be one factor hindering people from adopting a healthier lifestyle. This study examined the contribution of CHBs to the prediction of adolescents' physical activity within the theoretical framework of the Health Action Process Approach (HAPA).

Design: The study followed a prospective survey design with assessments at baseline (T1) and two weeks later (T2).

Method: Questionnaire data on physical activity, HAPA variables and CHBs were obtained twice from 430 adolescents at four different Swiss schools. Multilevel modelling was applied.

Results: CHBs added significantly to the prediction of intentions and change in intentions, in that higher CHBs were associated with lower intentions to be physically active at T2 and a reduction in intentions from T1 to T2. No effect of CHBs emerged for the prediction of self-reported levels of physical activity at T2 and change in physical activity from T1 to T2.

Conclusion: Findings emphasise the relevance of examining CHBs in the context of an established health behaviour change model and suggest that CHBs are of particular importance in the process of intention formation.

19.
In the analytic hierarchy process (AHP), a ratio scale (π1, π2, ⋯, πt) for the priorities of the alternatives {T1, T2, ⋯, Tt} is used for a decision problem, where πi/πj quantifies the ratio of the priority of Ti to that of Tj. In a group decision-making setup, subjective estimates of πi/πj are obtained as entries of a pairwise comparison matrix for each member of the group. On the basis of these pairwise comparison matrices, one topic of interest in some situations is the total rank ordering of the priorities of the alternatives. In this article, a statistical method is proposed for testing a specific total rank ordering of the priorities of the alternatives. The method developed is then illustrated using numerical examples.
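For context, a minimal sketch of the classical (Saaty) eigenvector method for extracting priorities from a pairwise comparison matrix; this is background to the article's statistical test, not the test itself, and the matrix below is hypothetical.

```python
import numpy as np

# Hypothetical reciprocal pairwise comparison matrix for 3 alternatives:
# entry (i, j) estimates pi_i / pi_j, so M[j, i] = 1 / M[i, j].
M = np.array([[1.0, 3.0, 5.0],
              [1/3, 1.0, 2.0],
              [1/5, 1/2, 1.0]])

# Saaty's classical estimate of the priority vector (pi_1, ..., pi_t):
# the principal right eigenvector, normalised to sum to 1.
vals, vecs = np.linalg.eig(M)
w = np.real(vecs[:, np.argmax(np.real(vals))])
w = w / w.sum()
print(w)  # estimated priorities; sorting them gives a total rank ordering
```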

20.
This simulation study investigates the performance of three test statistics, T1, T2, and T3, used to evaluate structural equation model fit under nonnormal data conditions. T1 is the well-known mean-adjusted statistic of Satorra and Bentler. T2 is the mean-and-variance adjusted statistic of Satterthwaite type, in which the degrees of freedom are manipulated. T3 is a recently proposed version of T2 that does not manipulate the degrees of freedom. Discrepancies between these statistics and their nominal chi-square distributions, in terms of Type I and Type II errors, are investigated. All statistics are shown to be sensitive to increasing kurtosis in the data, with Type I error rates often far off the nominal level. Under excess kurtosis, true models are generally over-rejected by T1 and under-rejected by T2 and T3, which perform similarly in all conditions. Under misspecification there is a loss of power with increasing kurtosis, especially for T2 and T3. The coefficient of variation of the nonzero eigenvalues of a certain matrix is shown to be a reliable indicator of the adequacy of these statistics.
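Under the standard eigenvalue characterization of such statistics, the adjustments can be sketched as follows (hypothetical eigenvalues and statistic value; not the study's code): T1 rescales the statistic to match the chi-square mean, T2 rescales and adjusts the degrees of freedom Satterthwaite-style, and the eigenvalues' coefficient of variation is the adequacy indicator the abstract mentions.

```python
import numpy as np

# Hypothetical nonzero eigenvalues of the matrix governing the asymptotic
# distribution of the unadjusted statistic T (a weighted sum of chi2_1's).
lam = np.array([1.8, 1.4, 1.1, 0.9, 0.6, 0.4])
d = len(lam)
T = 14.2                       # hypothetical unadjusted statistic

# T1 (Satorra-Bentler): rescale so the mean matches chi-square(d).
T1 = T * d / lam.sum()

# T2 (Satterthwaite type): rescale and refer to chi-square with the
# manipulated, generally non-integer degrees of freedom d_star.
d_star = lam.sum() ** 2 / (lam ** 2).sum()
T2 = T * d_star / lam.sum()

# Coefficient of variation of the eigenvalues: the abstract's indicator
# of whether these adjusted statistics can be trusted.
cv = lam.std() / lam.mean()
print(T1, T2, d_star, cv)
```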
