Similar Articles
20 similar articles found (search time: 62 ms)
1.
This research investigated the effect on power spectra when data-smoothing functions were used on EEG data prior to submitting them to an FFT. We used two smoothing options: no smoothing function and the Parzen smoothing function. We developed a program to evaluate each of these options with real and standard data. When a set of data is smoothed prior to being submitted to an FFT, there are statistically significant differences in the power spectra obtained from the FFT. This finding holds true for standard waveforms as well as for real EEG data.
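The windowing comparison can be sketched numerically. The following is a minimal illustration (assuming NumPy/SciPy; the signal, sampling rate, and seed are invented for the demo, not the study's EEG data):

```python
import numpy as np
from scipy.signal.windows import parzen

fs = 256                                   # sampling rate (Hz), illustrative
t = np.arange(0, 2, 1 / fs)
rng = np.random.default_rng(0)
# synthetic EEG-like signal: 10 Hz alpha component plus broadband noise
x = np.sin(2 * np.pi * 10 * t) + 0.5 * rng.standard_normal(t.size)

def power_spectrum(sig, window=None):
    """Power spectrum via FFT, optionally tapering the data first."""
    w = window(sig.size) if window is not None else np.ones(sig.size)
    return np.abs(np.fft.rfft(sig * w)) ** 2

p_raw = power_spectrum(x)                  # no smoothing function
p_parzen = power_spectrum(x, parzen)       # Parzen taper before the FFT
freqs = np.fft.rfftfreq(t.size, 1 / fs)
# the two spectra differ because the taper suppresses spectral leakage
```

Both spectra still peak at the dominant frequency, but the power values differ systematically, which is the kind of difference the study tests for.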

2.
A′ was identified by Pollack and Norman (1964, p. 126) as “the average of the areas subtended by the upper, and by the lower, bounds” of the isosensitivity curve. That estimate of the area under the isosensitivity curve is used when only one point on the curve is available, as with yes/no data. All estimates of area when more than one point is available, such as with confidence rating data, have settled for using the value of the lower bound. A general procedure, and an example of the calculations, is presented for calculating Ar, the average of the minimum and maximum areas subtended by multipoint data. Data from a recognition-memory experiment are analyzed to show the extent to which the use of Ar improves the accuracy and reduces the uncertainty surrounding the estimate of area under the isosensitivity curve.
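For the single-point (yes/no) case, Pollack and Norman's A′ has a standard closed form (Grier, 1971); a minimal sketch (the article's multipoint Ar averaging procedure is not reproduced here):

```python
def a_prime(hit_rate, fa_rate):
    """Grier's (1971) closed form for Pollack and Norman's A' (yes/no data)."""
    h, f = hit_rate, fa_rate
    if h == f:
        return 0.5                    # chance performance
    if h < f:                         # symmetric rearrangement below the diagonal
        return 1.0 - a_prime(f, h)
    return 0.5 + (h - f) * (1 + h - f) / (4 * h * (1 - f))
```

For example, a hit rate of .8 with a false-alarm rate of .2 gives A′ = .875, i.e., the average of the areas under the upper and lower bounding curves through that single point.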

3.
There are two opposing models with regard to the function of memory in visual search: a memory-driven model and a memory-free model. Recently, Horowitz and Wolfe (2001) investigated a multiple-target search task. Participants were required to decide whether or not there were at least n targets present. They demonstrated that the reaction time × n function has a positively accelerated curve. They argued that the memory-free model predicts this curve, whereas the memory-driven model predicts a linear function. In this study, I varied the total set sizes of a multiple-target search task and fitted the models separately for each n condition. The model fit indicated that the memory-driven model is more appropriate than the memory-free model in each n condition. These results suggest that an amnesic process does not cause the positively accelerated curve of the reaction time × n function but that it is the result of the time needed to examine each additional nth item.

4.
The detectability of forms embedded within random visual noise has been found to be predictable from the autocorrelation transform of the stimulus pattern (Uttal, 1975). A basic assumption in the autocorrelation theory of form detection is that detectability is determined by the organization of the stimulus pattern, irrespective of the observer’s prior knowledge or expectations about the characteristics of the form. This assumption was tested by determining the effect of the size of the set of alternative target forms on performance of a forced-choice detection task. The targets were composed of dots in a straight line, appearing in one of a specified set of 2, 4, or 8 alternative positions in a pattern of randomly distributed masking dots. Detection accuracy was found to decrease as set size increased, but this decrease was close to what was predicted on the assumption that the random background was independently confusable with the target at each of the alternative positions. Thus, prior knowledge of the set of alternative targets appeared to have no effect on the visual process, but only on the decision process by virtue of the features that were relevant criteria for deciding which of the two patterns on each trial was most likely to contain the target. This result is consistent with the autocorrelation theory. This experiment may illustrate how the decision process has influenced the performance in many other experiments that have been assumed to demonstrate an effect of prior knowledge on perception.

5.
The goal of this experiment was to test a potentially useful nonlinear method for smoothing noisy position data, which are often encountered in data analysis. This algorithm (7RY) uses a nonlinear smoothing function and behaves like a low-pass filter, automatically removing aberrant points; it is used prior to differentiation of time series so that usable acceleration information can be obtained. The experimental procedure comprises position data collection along with direct accelerometric data recording. From the position-time data, (a) 7RY and (b) Butterworth algorithms have been used to compute twice-differentiated acceleration curves. The directly recorded acceleration measurements were then compared with the acceleration computed from the original position data. Although the results indicated an overall good fit between the recorded and the calculated acceleration curves, only the nonlinear method led to reliable acceleration curves when aberrant points were present in the position data.
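The pipeline described (smooth, then differentiate twice) can be sketched as follows. Since the 7RY algorithm itself is not specified here, a median filter stands in as the nonlinear smoother; the signal and the aberrant samples are invented for the demo:

```python
import numpy as np
from scipy.signal import medfilt, butter, filtfilt

fs = 100.0
t = np.arange(0, 2, 1 / fs)
pos = 0.5 * np.sin(2 * np.pi * 1.0 * t)     # true position (m), invented signal
noisy = pos.copy()
noisy[50] += 0.3                             # two aberrant samples
noisy[130] -= 0.3

# (a) nonlinear smoother (median filter stands in for the 7RY algorithm)
smooth_nl = medfilt(noisy, kernel_size=7)

# (b) linear low-pass: 2nd-order Butterworth, 5 Hz cutoff, zero-phase
b, a = butter(2, 5.0 / (fs / 2))
smooth_bw = filtfilt(b, a, noisy)

# twice-differentiated ("computed") acceleration from each smoothed series
acc_nl = np.gradient(np.gradient(smooth_nl, t), t)
acc_bw = np.gradient(np.gradient(smooth_bw, t), t)
true_acc = -0.5 * (2 * np.pi) ** 2 * np.sin(2 * np.pi * t)
```

The key contrast mirrors the abstract: the median-based smoother rejects the aberrant points outright, whereas a linear low-pass filter only spreads them out, which then contaminates the twice-differentiated acceleration locally.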

6.
Medical research has extensively dealt with the estimation of the accuracy (sensitivity and specificity) of a diagnostic test for screening individuals. In this paper we apply the biometric latent class model with random effects by Qu, Tan, and Kutner [(1996). Random effects models in latent class analysis for evaluating accuracy of diagnostic tests. Biometrics, 52, 797-810] to estimate the response error (careless error and lucky guess) probabilities for dichotomous test items in the psychometric theory of knowledge spaces. The approach is illustrated with simulated data. In particular, we extend this approach to give a generalization of the basic local independence model in knowledge space theory. This allows for local dependence among the indicators given the knowledge state of an examinee and/or for the incorporation of covariates.

7.
Many models offer different explanations of learning processes, some of them predicting equal learning rates between conditions. The simplest method by which to assess this equality is to evaluate the curvature parameter for each condition, followed by a statistical test. However, this approach is highly dependent on the fitting procedure, which may come with built-in biases difficult to identify. Averaging the data per block of training would help reduce the noise present in the trial data, but averaging introduces a severe distortion on the curve, which can no longer be fitted by the original function. In this article, we first demonstrate the distortion resulting from block averaging. The block average learning function, once known, can be used to extract parameters when the performance is averaged over blocks or sessions. The use of averages eliminates an important part of the noise present in the data and allows good recovery of the learning curve parameters. Equality of curvatures can be tested with a test of linear hypothesis. This method can be performed on trial data or block average data, but it is more powerful with block average data.
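The central idea, fitting block averages with the exactly averaged function rather than the original trial-level function, can be sketched for an exponential learning curve; the parameter values, noise level, and block size below are assumed for the demo:

```python
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(1)
a, b, c = 0.9, -0.5, 0.05                  # asymptote, range, curvature (assumed)
trials = np.arange(1, 401)
perf = a + b * np.exp(-c * trials) + 0.15 * rng.standard_normal(trials.size)

K = 20                                     # trials per block
blocks = perf.reshape(-1, K).mean(axis=1)  # block-averaged performance
block_idx = np.arange(blocks.size)

def block_avg_curve(j, a, b, c):
    """Exact mean of a + b*exp(-c*n) over trials n = j*K+1 .. (j+1)*K."""
    r = np.exp(-c)
    return a + b * np.exp(-c * (j * K + 1)) * (1 - r ** K) / (K * (1 - r))

popt, _ = curve_fit(block_avg_curve, block_idx, blocks, p0=(1.0, -1.0, 0.1))
```

Because the block mean of an exponential is itself available in closed form (a geometric sum), the averaged data can be fitted without the distortion that arises when the original trial-level function is forced onto block means.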

8.
Background. Prior knowledge activation facilitates learning. Note taking during prior knowledge activation (i.e., note taking directed at retrieving information from memory) might facilitate the activation process by enabling learners to build an external representation of their prior knowledge. However, taking notes might be less effective in supporting prior knowledge activation if available prior knowledge is limited. Aims. This study investigates the effects of the retrieval‐directed function of note taking depending on learners' level of prior knowledge. It is hypothesized that the effectiveness of note taking is influenced by the amount of prior knowledge learners already possess. Sample. Sixty‐one high school students participated in this study. A prior knowledge test was used to ascertain differences in level of prior knowledge and assign participants to a low or a high prior knowledge group. Method. A 2×2 factorial design was used to investigate the effects of note taking during prior knowledge activation (yes, no) depending on learners' level of prior knowledge (low, high) on mental effort, performance, and mental efficiency. Results. Note taking during prior knowledge activation lowered mental effort and increased mental efficiency for high prior knowledge learners. For low prior knowledge learners, note taking had the opposite effect on mental effort and mental efficiency. Conclusions. The effects of the retrieval‐directed function of note taking are influenced by learners' level of prior knowledge. Learners with high prior knowledge benefit from taking notes while activating prior knowledge, whereas note taking has no beneficial effects for learners with limited prior knowledge.

9.
Previous research has shown that the number of words cumulatively recalled (N) at time (t) is a negatively accelerated function that reaches an asymptote as t → ∞. Research has also shown that the increase in N with t occurs in bursts or clusters. Several models purport to account for this cumulative recall curve in terms of cluster characteristics. The present research shows that previous models have not in fact successfully linked continuous recall to cluster characteristics. This research demonstrates that cluster models need to employ three empirical characteristics of clusters: Tb, the time between clusters; Tw, the average time between words within a cluster; and Wc, the number of words within a cluster. It is shown that these three quantities determine the cumulative recall curve, and these three quantities may in turn be characterized by four parameters. Of these four parameters, only three actually characterize the cumulative recall curve. Two parameters determine the initial slope and final asymptote of the curve, while a third parameter, which we introduce for the first time, characterizes the curve's shape. This latter parameter may be interpreted as the ratio of the time spent in retrieving and discarding a cluster that has been previously recalled to the amount of time spent in retrieving and outputting a newly encountered cluster. It is pointed out that previous success in fitting the cumulative recall data with a two-parameter function may be explained by the fact that this parameter lies in a restricted range about unity. Further experimental work is suggested to elucidate the behavior of this new parameter. Two models are then proposed to account for these characteristics of clusters and the shape of the recall curve.
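The two-parameter cumulative recall function referred to above can be fitted as follows; the recall times are invented for the demo (note the burst/cluster structure in the spacing):

```python
import numpy as np
from scipy.optimize import curve_fit

# illustrative recall times (s) for one participant, showing bursts (assumed data)
recall_times = np.array([1.2, 1.9, 2.4, 6.8, 7.5, 8.1, 8.6, 15.0, 16.1,
                         24.9, 25.7, 33.2, 47.0, 48.3, 70.2, 95.5])
N = np.arange(1, recall_times.size + 1)    # cumulative words recalled

def cum_recall(t, n_inf, lam):
    """Two-parameter negatively accelerated curve N(t) = N_inf * (1 - e^{-lam*t})."""
    return n_inf * (1 - np.exp(-lam * t))

(n_inf, lam), _ = curve_fit(cum_recall, recall_times, N, p0=(20.0, 0.05))
```

The fitted asymptote n_inf and rate lam correspond to the two classic parameters; the article's point is that a third shape parameter, tied to the cluster quantities Tb, Tw, and Wc, is also needed in general.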

10.
Group-level variance estimates of zero often arise when fitting multilevel or hierarchical linear models, especially when the number of groups is small. For situations where zero variances are implausible a priori, we propose a maximum penalized likelihood approach to avoid such boundary estimates. This approach is equivalent to estimating variance parameters by their posterior mode, given a weakly informative prior distribution. By choosing the penalty from the log-gamma family with shape parameter greater than 1, we ensure that the estimated variance will be positive. We suggest a default log-gamma(2,λ) penalty with λ→0, which ensures that the maximum penalized likelihood estimate is approximately one standard error from zero when the maximum likelihood estimate is zero, thus remaining consistent with the data while being nondegenerate. We also show that the maximum penalized likelihood estimator with this default penalty is a good approximation to the posterior median obtained under a noninformative prior. Our default method provides better estimates of model parameters and standard errors than the maximum likelihood or the restricted maximum likelihood estimators. The log-gamma family can also be used to convey substantive prior information. In either case—pure penalization or prior information—our recommended procedure gives nondegenerate estimates and in the limit coincides with maximum likelihood as the number of groups increases.
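The boundary-avoiding behavior can be illustrated on a toy one-way model with known within-group standard error. The group means below are invented so that their spread is smaller than the within-group error (the classic zero-variance situation), and the λ→0 limit is approximated by a small λ:

```python
import numpy as np
from scipy.optimize import minimize_scalar

# five group means with known within-group standard error (invented values);
# their spread is smaller than se, so the ML estimate of tau sits at zero
ybar = np.array([0.3, -0.2, 0.1, -0.4, 0.2])
se = 1.0

def neg_log_post(tau, penalized, lam=0.01):
    v = tau ** 2 + se ** 2                 # marginal variance of a group mean
    d = ybar - ybar.mean()                 # profile out the grand mean
    nll = 0.5 * np.sum(np.log(v) + d ** 2 / v)
    if penalized:                          # log-gamma(2, lam) penalty on tau
        nll -= np.log(tau + 1e-12) - lam * tau
    return nll

tau_ml = minimize_scalar(lambda t: neg_log_post(t, False),
                         bounds=(0, 10), method="bounded").x
tau_pen = minimize_scalar(lambda t: neg_log_post(t, True),
                          bounds=(0, 10), method="bounded").x
```

The unpenalized estimate collapses to the boundary (tau ≈ 0), while the log-gamma(2, λ) penalty, whose log density contributes log τ − λτ, pushes the mode to a strictly positive value.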

11.
A two-term function with potential and exponential terms is proposed to model the body temperature throughout the day. These terms can be related to thermogenic and thermolytic processes that are opposite but simultaneous. The first term, a potential (power) function, reflects that the heat gain process is predominant; the second term is a negative exponential function and reflects the predominance of heat dissipation. The function also provides the time of the inflection between the two opposing thermal processes. The proposed two-term function and a cosine function were fitted to 335 temperature series of 24-h duration recorded in humans and animals. The percentage of variance accounted for by the two-term function was higher than that for the cosine approximation for the total sample and for those series whose shape was asymmetrical (n=75). Nevertheless, no significant differences between the percentage of variance accounted for by the two functions were found for those series whose shape was symmetrical (n=260). The results of different subgroups that differed in experimental conditions are discussed.
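The cosine baseline against which the two-term function was compared is the standard single-cosinor fit; a minimal sketch (the 24-h temperature series below is synthetic, and the two-term function itself is not reproduced because its exact parameterization is not given here):

```python
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(5)
hours = np.arange(0, 24, 0.5)
# synthetic 24-h core temperature series (deg C); values are invented
temp = (36.8 + 0.4 * np.cos(2 * np.pi * (hours - 17) / 24)
        + rng.normal(0, 0.05, hours.size))

def cosinor(t, mesor, amp, acro):
    """Single cosine with 24-h period: midline (mesor), amplitude, acrophase."""
    return mesor + amp * np.cos(2 * np.pi * (t - acro) / 24)

popt, _ = curve_fit(cosinor, hours, temp, p0=(37.0, 0.3, 16.0))
mesor, amp, acro = popt
resid = temp - cosinor(hours, *popt)
r2 = 1 - resid.var() / temp.var()      # proportion of variance accounted for
```

The percentage of variance accounted for (here r2) is the same goodness-of-fit criterion the abstract uses to compare the cosine and two-term models; a symmetric series like this synthetic one is exactly the case where the cosine does well.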

12.
Intuitions are often considered suboptimal because they can bias people's thinking. The bat-and-ball problem is a celebrated example of this potentially detrimental aspect of intuitions since it elicits a very appealing and prepotent intuitive but incorrect response. We propose to show that certain kinds of intuitions (i.e., prior beliefs) can help people to reason better on this task. In two experiments, participants answered either a classic congruent version of the bat-and-ball problem in which the intuitively cued response fitted with prior knowledge (i.e., was believable) or a modified incongruent version in which the intuitively cued response conflicted with prior knowledge (i.e., was unbelievable). Results indicate that participants who solved the modified unbelievable version performed better than participants who solved the classic believable version. Our data highlight that prior beliefs, even in the bat-and-ball problem, can accidentally make people perform better, probably because they encourage them to adopt a more effortful processing strategy.

13.
14.
There has been a recent increase in interest in Bayesian analysis. However, little effort has been made thus far to directly incorporate background knowledge via the prior distribution into the analyses. This process might be especially useful in the context of latent growth mixture modeling when one or more of the latent groups are expected to be relatively small due to what we refer to as limited data. We argue that the use of Bayesian statistics has great advantages in limited data situations, but only if background knowledge can be incorporated into the analysis via prior distributions. We highlight these advantages through a data set including patients with burn injuries and analyze trajectories of posttraumatic stress symptoms using the Bayesian framework following the steps of the WAMBS checklist. In the included example, we illustrate how to obtain background information using previous literature based on a systematic literature search and by using expert knowledge. Finally, we show how to translate this knowledge into prior distributions and we illustrate the importance of conducting a prior sensitivity analysis. Although our example is from the trauma field, the techniques we illustrate can be applied to any field.

15.
Previous research has demonstrated that, when given feedback, participants are more likely to correct confidently held errors, as compared with errors held with lower levels of confidence, a finding termed the hypercorrection effect. Accounts of hypercorrection suggest that confidence modifies attention to feedback; alternatively, hypercorrection may reflect prior domain knowledge, with confidence ratings simply correlated with this prior knowledge. In the present experiments, we attempted to adjudicate among these explanations of the hypercorrection effect. In Experiments 1a and 1b, participants answered general knowledge questions, rated their confidence, and received feedback either immediately after rating their confidence or after a delay of several minutes. Although memory for confidence judgments should have been poorer at a delay, the hypercorrection effect was equivalent for both feedback timings. Experiment 2 showed that hypercorrection remained unchanged even when the delay to feedback was increased. In addition, measures of recall for prior confidence judgments showed that memory for confidence was indeed poorer after a delay. Experiment 3 directly compared estimates of domain knowledge with confidence ratings, showing that such prior knowledge was related to error correction, whereas the unique role of confidence was small. Overall, our results suggest that prior knowledge likely plays a primary role in error correction, while confidence may play a small role or merely serve as a proxy for prior knowledge.

16.
The primary objective of this study was to quantitatively investigate the human perception of surface curvature by using virtual surfaces and motor tasks along with data analysis methods to estimate surface curvature from drawing movements. Three psychophysical experiments were conducted. In Experiment 1, we looked at subjects’ sensitivity to the curvature of a curve lying on a surface and changes in the curvature as defined by Euler’s formula, which relates maximum and minimum principal curvatures and their directions. Regardless of direction and surface shape (elliptic and hyperbolic), subjects could report the curvature of a curve lying on a surface through a drawing task. In addition, multiple curves drawn by subjects were used to reconstruct the surface. These reconstructed surfaces could be better accounted for by analysis that treated the drawing data as a set of curvatures rather than as a set of depths. A pointing task was utilized in Experiment 2, and subjects could report principal curvature directions of a surface rather precisely and consistently when the difference between principal curvatures was sufficiently large, but performance was poor for the direction of zero curvature (asymptotic direction) on a hyperbolic surface. In Experiment 3, it was discovered that sensitivity to the sign of curvature was different for perceptual judgments and motor responses, and there was also a difference for that of a curve itself and the same curve embedded in a surface. These findings suggest that humans are sensitive to relative changes in curvature and are able to comprehend quantitative surface curvature for some motor tasks.
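Euler's formula referred to above gives the normal curvature at angle θ from the first principal direction as κ(θ) = κ₁cos²θ + κ₂sin²θ; a minimal sketch, including the asymptotic (zero-curvature) direction of a hyperbolic surface mentioned in Experiment 2:

```python
import numpy as np

def normal_curvature(k1, k2, theta):
    """Euler's formula: normal curvature at angle theta from the first
    principal direction, given principal curvatures k1 and k2."""
    return k1 * np.cos(theta) ** 2 + k2 * np.sin(theta) ** 2

# hyperbolic surface (principal curvatures of opposite sign): the
# asymptotic direction is where the normal curvature vanishes
k1, k2 = 2.0, -1.0
theta_asym = np.arctan(np.sqrt(-k1 / k2))   # tan^2(theta) = -k1/k2
```

On an elliptic surface (κ₁ and κ₂ of the same sign) no such zero-curvature direction exists, which is consistent with the asymptotic direction being a specifically hyperbolic difficulty.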

17.
Consistent physical activity is key for health and well-being, but it is vulnerable to stressors. The process of recovering from such stressors and bouncing back to the previous state of physical activity can be referred to as resilience. Quantifying resilience is fundamental to assess and manage the impact of stressors on consistent physical activity. In this tutorial, we present a method to quantify the resilience process from physical activity data. We leverage the prior operationalization of resilience, as used in various psychological domains, as area under the curve and expand it to suit the characteristics of physical activity time series. As a use case to illustrate the methodology, we quantified resilience in step count time series (length = 366 observations) for eight participants following the first COVID-19 lockdown as a stressor. Steps were assessed daily using wrist-worn devices. The methodology is implemented in R and all coding details are included. For each person’s time series, we fitted multiple growth models and identified the best one using the Root Mean Squared Error (RMSE). Then, we used the predicted values from the selected model to identify the point in time when the participant recovered from the stressor and quantified the resulting area under the curve as a measure of resilience for step count. Further resilience features were extracted to capture the different aspects of the process. By developing a methodological guide with a step-by-step implementation, we aimed to foster increased awareness of the concept of resilience for physical activity and to facilitate the implementation of related research.
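The workflow (fit a growth model, locate the recovery point, integrate the area) can be sketched in Python even though the tutorial itself is in R. The step-count series, the single exponential growth model, and the 5% recovery criterion below are assumptions for the demo, not the tutorial's exact choices:

```python
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(4)
days = np.arange(120)
baseline = 9000.0
# assumed scenario: daily steps drop after a stressor at day 0, then recover
steps = baseline - 4000 * np.exp(-days / 25.0) + rng.normal(0, 300, days.size)

def growth(t, b, drop, tau):
    """Exponential recovery toward baseline b (one of several candidate models)."""
    return b - drop * np.exp(-t / tau)

popt, _ = curve_fit(growth, days, steps, p0=(8000.0, 3000.0, 20.0))
pred = growth(days, *popt)
b_hat = popt[0]

# recovery day: first day the fitted curve is back within 5% of baseline
recovered = int(np.argmax(pred >= 0.95 * b_hat))
# resilience feature: area between baseline and fitted curve up to recovery
deficit = np.clip(b_hat - pred[: recovered + 1], 0, None)
auc = np.sum((deficit[1:] + deficit[:-1]) / 2.0)   # trapezoid rule, daily grid
```

In the tutorial, several growth models are compared by RMSE before this step; here a single model is fitted for brevity. A smaller area (shallower or shorter deficit) corresponds to a more resilient response.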

18.
Single letters were presented for varying numbers of repeated brief exposures. The S reported on the target after each presentation, identifying only those symbols perceived with certainty. A d’ analysis of the results revealed that target-uncertainty reduction produced significant facilitation in the average level of perceptual sensitivity only in a condition in which the target symbol was exposed prior to each trial. Prior knowledge of the target symbol without prior exposure produced a measurable, but nonsignificant, facilitation in sensitivity. The data suggested that the growth curve normally associated with the repeated-presentations paradigm may be the result of a progressive liberalization of the S’s criterion for reporting in some studies. The data further showed that prior exposure and perhaps prior knowledge can significantly alter the shape of the repetition curve when the target is a single letter. These findings were interpreted as an indication that both amount and kind of target-uncertainty reduction can produce significant changes in the repetition effect for single letters.

19.
20.
What Do the Data Tell Us? Justification of scientific theories is a three-place relation between data, theories, and background knowledge. Though this should be a commonplace, many methodologies in science neglect it. The article will elucidate the significance and function of our background knowledge in epistemic justification and their consequences for different scientific methodologies. It is argued that there is no simple and at the same time acceptable statistical algorithm that justifies a given theory merely on the basis of certain data. And even if we think to know the probability of a theory, that does not decide whether we should accept it or not.

