Similar Articles
A total of 20 similar articles were found.
1.
This paper presents a nontechnical, conceptually oriented introduction to wavelet analysis and its application to neuroelectric waveforms such as the EEG and event-related potentials (ERPs). Wavelet analysis refers to a growing class of signal processing techniques and transforms that use wavelets and wavelet packets to decompose and manipulate time-varying, nonstationary signals. Neuroelectric waveforms fall into this category of signals because they typically have frequency content that varies as a function of time and recording site. Wavelet techniques can optimize the analysis of such signals by providing excellent joint time-frequency resolution. The ability of wavelet analysis to accurately resolve neuroelectric waveforms into specific time and frequency components leads to several analysis applications. Some of these applications are time-varying filtering for denoising single-trial ERPs, EEG spike and spindle detection, ERP component separation and measurement, hearing-threshold estimation via auditory brainstem evoked response measurements, isolation of specific EEG and ERP rhythms, scale-specific topographic analysis, and dense-sensor array data compression. The present tutorial describes the basic concepts of wavelet analysis that underlie these and other applications. In addition, the application of a recently developed method of custom-designing Meyer wavelets to match the waveshapes of particular neuroelectric waveforms is illustrated. Matched wavelets are physiologically sensible pattern analyzers for EEG and ERP waveforms, and their superior performance is illustrated with real-data examples.
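As a flavor of the time-varying filtering application mentioned above, the sketch below denoises a simulated single-trial ERP by thresholding wavelet detail coefficients. This is a minimal illustration, not the authors' procedure: it assumes the PyWavelets library (pywt) and substitutes a Daubechies-4 basis and a universal soft threshold for the custom-matched Meyer wavelets the paper describes.

```python
# Minimal wavelet-denoising sketch (assumes PyWavelets; a db4 basis and a
# universal soft threshold stand in for the paper's custom Meyer wavelets).
import numpy as np
import pywt

rng = np.random.default_rng(0)
t = np.linspace(0, 1, 512)
erp = 5 * np.exp(-((t - 0.3) / 0.05) ** 2)        # simulated ERP-like peak
noisy = erp + rng.normal(0, 1.0, t.size)          # single "trial" with EEG-like noise

coeffs = pywt.wavedec(noisy, "db4", level=5)      # multiresolution decomposition
sigma = np.median(np.abs(coeffs[-1])) / 0.6745    # noise estimate from finest details
thresh = sigma * np.sqrt(2 * np.log(noisy.size))  # universal threshold
denoised_coeffs = [coeffs[0]] + [
    pywt.threshold(c, thresh, mode="soft") for c in coeffs[1:]
]
denoised = pywt.waverec(denoised_coeffs, "db4")   # reconstruct the cleaned trial

print(f"RMS error before: {np.sqrt(np.mean((noisy - erp) ** 2)):.2f}, "
      f"after: {np.sqrt(np.mean((denoised[: erp.size] - erp) ** 2)):.2f}")
```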

2.
A wide variety of complex waveforms can be generated by approximating the desired analog waveform from an array of digital values. Some basic properties of these digital approximations are discussed in terms of pulse amplitude modulation and sampling theory. The waveforms are generated by transferring the digital values to a digital-to-analog converter followed by a low-pass filter. This usually requires the dedicated use of a computer. We have built a device, incorporating solid-state memory, that can store, time, and transfer previously computed digital values, so that a computer is no longer necessary to generate the waveforms. Specifications of the digital-to-analog converter and appropriate settings of the filter are discussed, along with a simplified procedure for calculating waveforms that have line spectra. An adaptation of this procedure enables the device to be used as a high-speed programmable pure-tone source.
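As a rough illustration of the "simplified procedure for calculating waveforms that have line spectra", the snippet below tabulates one period of a sum of harmonics as integer codes ready for a digital-to-analog converter. The 256-sample table and 8-bit resolution are illustrative assumptions, not the device's actual specifications.

```python
# Sketch of the line-spectrum idea: tabulate one period of a sum of
# harmonics as integer DAC codes. The 8-bit resolution and 256-sample
# table are illustrative assumptions only.
import numpy as np

N = 256                                   # samples per period in device memory
n = np.arange(N)
harmonics = {1: 1.0, 3: 1 / 3, 5: 1 / 5}  # harmonic number -> amplitude (square-ish wave)

wave = sum(a * np.sin(2 * np.pi * k * n / N) for k, a in harmonics.items())
wave /= np.abs(wave).max()                # normalize to full scale

codes = np.round((wave + 1) / 2 * 255).astype(np.uint8)  # map [-1, 1] to 8-bit codes
print(codes[:8])
```

Played back at a sample rate f_s, harmonic k then appears at k·f_s/256 Hz, and the low-pass filter after the converter removes the spectral images that sampling introduces above the Nyquist frequency.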

3.
The moments of a nonnegative bounded waveform (e.g., bounded probability density functions or responses that can be expressed as bounded probability density functions) provide the basis for characterizing the waveform. Traditionally, only the lower-order moments (k ≤ 4) have been utilized in deriving topographical indices of these waveforms. Recent advances in waveform moment analysis, however, have made it possible to derive comprehensive and interpretable indices of complex nonnegative bounded waveforms by utilizing both lower-order and higher-order moments. Waveform moment analysis is reviewed briefly, and a flexible and efficient computer program is presented for conducting waveform moment analyses.
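A back-of-the-envelope version of the moment computation: treat the sampled nonnegative waveform as a discrete density and take standardized central moments of any order k. The numpy sketch below is generic and illustrative; the function name and example waveform are invented, and it is not the computer program the paper presents.

```python
# Treat a nonnegative bounded waveform as an unnormalized density and
# compute its standardized central moments (k = 3 gives skewness,
# k = 4 kurtosis; higher k are the higher-order indices).
import numpy as np

def waveform_moment(x, y, k):
    p = y / y.sum()                      # normalize samples to a discrete density
    mu = (x * p).sum()                   # mean ("center of mass" of the waveform)
    var = ((x - mu) ** 2 * p).sum()      # variance (second central moment)
    return ((x - mu) ** k * p).sum() / var ** (k / 2)

x = np.linspace(0, 1, 1000)
y = np.exp(-((x - 0.4) / 0.1) ** 2)      # example nonnegative bounded waveform
for k in (3, 4, 5, 6):
    print(k, round(waveform_moment(x, y, k), 3))
```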

4.
5.
We examined transient finger pressure while subjects viewed schematic faces depicting "anger," "joy," "sadness," and "no emotion," as well as two nonfacial stimuli. In Exp. 1, nine undergraduate women were asked to discriminate between the target and nontarget stimuli by pressing on the finger rest of Clynes' Sentograph; subjects were not told that the experiment measured emotion. In Exp. 2, the same subjects were asked to express the feelings evoked by the schematic faces by pressing on the finger rest. Even on the discrimination task, the finger-pressure waveforms differentiated among emotions. This differentiation suggests that the expression of emotions can be measured with finger pressure even when subjects are unaware that their emotions are being measured. The identifiable characteristics of the waveforms are the long duration for "sadness" and the strong pressure intensity for "anger."

6.
7.
Prior research has found that participants manifest complex profiles of error when they are asked to judge collinearity of stimulus elements. These studies used harmonic analysis to model the data, and found large departures from accurate collinear judgments, with the amplitude and specific angular position of departures from valid judgment varying from one participant to the next. The models appeared to be a composite of many large‐, medium‐ and small‐scale departures, but this could not be established with any certainty because of the global nature of harmonic model components. Wavelet modelling is better suited to answer the question of whether the error profile is produced by independent sources that vary in size and location. Here, we examined judgments of collinearity of dot pairs across 360° of angular position. A priori and post hoc wavelet modelling strategies were used to identify independent sources of error that could not be attributed to chance, and some new statistical protocols were applied. We found evidence of error sources at several levels of scale, and these results were confirmed by the application of cross‐validation methods that make no assumption about the nature of the error probability distribution.
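The scale-by-scale decomposition described above can be roughly illustrated as follows. This is a generic sketch, not the authors' modelling strategy: it assumes the PyWavelets library, a Daubechies-4 basis, periodic padding for the circular 360° profile, and a simulated error profile with one planted large-scale and one planted small-scale source.

```python
# Decompose a 360-point angular error profile into scale levels
# (illustrative only: pywt, db4 basis, periodic padding for circular data).
import numpy as np
import pywt

rng = np.random.default_rng(1)
theta = np.arange(360)
# Simulated error profile: one broad (large-scale) and one narrow
# (small-scale) departure from accurate collinearity judgments, plus noise.
profile = (2.0 * np.cos(np.deg2rad(theta - 45))
           + 1.5 * np.exp(-((theta - 200) / 5.0) ** 2)
           + rng.normal(0, 0.3, 360))

coeffs = pywt.wavedec(profile, "db4", mode="periodic", level=4)
for name, c in zip(["cA4", "cD4", "cD3", "cD2", "cD1"], coeffs):
    print(f"{name}: max |coefficient| = {np.abs(c).max():.2f}")
```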

8.
Wavelet analysis is presented as a new tool for analyzing event-related potentials (ERPs). The wavelet transform expands ERPs into a time-scale representation, which allows the analyst to zoom in on the small-scale, fine-structure details of an ERP or zoom out to examine the large-scale, global waveshape. The time-scale representation is closely related to the more familiar time-frequency representation used in spectrograms of time-varying signals. However, time-scale representations have special properties that make them attractive for many ERP applications. In particular, time-scale representations permit theoretically unlimited time resolution for the detection of short-lived peaks and permit a flexible choice of wavelet basis functions for analyzing different types of ERPs. Generally, time-scale representations offer a formal basis for designing new, specialized filters for various ERP applications. Among recently explored applications of wavelet analysis to ERPs are (a) the precise identification of the time of occurrence of overlapping peaks in the auditory brainstem evoked response; (b) the extraction of single-trial ERPs from background EEG noise; (c) the decomposition of averaged ERP waveforms into orthogonal detail functions that isolate the waveform's experimental behavior in distinct, orthogonal frequency bands; and (d) the use of wavelet transform coefficients to concisely extract important information from ERPs that predicts human signal detection performance. In this tutorial we present an intuitive introduction to wavelets and the wavelet transform, concentrating on the multiresolution approach to wavelet analysis of ERP data. We then illustrate this approach with real data. Finally, we offer some speculations on future applications of wavelet analysis to ERP data.
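Application (c), decomposing a waveform into orthogonal detail functions, can be sketched by reconstructing from one coefficient band at a time. The snippet below is a minimal illustration assuming the PyWavelets library and a Daubechies-4 basis; the stand-in "ERP" is simulated, and the paper itself predates this library.

```python
# Decompose a waveform into per-level "detail functions" whose sum
# reconstructs the original (orthogonal bands, as in item (c) above).
import numpy as np
import pywt

rng = np.random.default_rng(2)
erp = np.cumsum(rng.normal(0, 1, 256))            # stand-in averaged ERP

coeffs = pywt.wavedec(erp, "db4", level=4)
details = []
for i in range(len(coeffs)):
    only_i = [c if j == i else np.zeros_like(c) for j, c in enumerate(coeffs)]
    details.append(pywt.waverec(only_i, "db4")[: erp.size])

recon = np.sum(details, axis=0)                   # bands sum back to the original
print("max reconstruction error:", np.abs(recon - erp).max())
```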

9.
10.
Conceptual analysis of health and disease is portrayed as consisting in the confrontation of a set of criteria—a “definition”—with a set of cases, called instances of either “health” or “disease.” Apart from logical counter-arguments, there is no other way to refute an opponent’s definition than by providing counter-cases. As resorting to intensional stipulation (stipulation of meaning) is not forbidden, several contenders can therefore be deemed to have succeeded. This implies that conceptual analysis alone is not likely to decide between naturalism and normativism. An alternative to this approach would be to examine whether the concept of disease can be naturalized.

11.
Patient privacy is jeopardised when medical records and data are spread or shared beyond the protected cloud of institutions: breaches can push patients to the point of abstaining from full disclosure of their condition, which harms scientific research, patients, and all other stakeholders. To tackle this issue, a blockchain-based data-sharing system is proposed that employs the immutability and autonomy properties of the blockchain to resolve access-control challenges and handle sensitive data. The proposed system is supported by a Discrete Wavelet Transform, which enhances overall security, and by a Genetic Algorithm, which optimises the request-queuing scheme. Introducing this cryptographic key generator strengthens access control and allows users to be verified quickly and securely. The design also provides accountability, since all users involved are known and the blockchain records a log of their actions; data can be requested from the shared queue only after a user's cryptographic keys and identity are confirmed. In our evaluation the system achieved an execution time per node of 0.19 s, a confirmation time per node of 0.17 s, and a robustness index of 20 for the number of blocks, indicating that the system is robust, efficient, immune, and scalable.
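To make the immutability property concrete, the sketch below implements a generic hash-chained log of user actions in Python. This is standard blockchain boilerplate, not the authors' system: the block fields are invented for illustration, and the access-control, Discrete Wavelet Transform, and Genetic Algorithm components are omitted entirely.

```python
# Generic hash-chained audit log: each block commits to its predecessor,
# so tampering with any recorded action invalidates every later hash.
# Illustrative only; fields and linking are standard, not the paper's design.
import hashlib
import json
import time

def block_hash(block):
    payload = json.dumps(block, sort_keys=True).encode()
    return hashlib.sha256(payload).hexdigest()

def append_block(chain, action, user):
    block = {
        "index": len(chain),
        "timestamp": time.time(),
        "user": user,                      # verified identity (assumed upstream)
        "action": action,                  # e.g. "request record"
        "prev_hash": block_hash(chain[-1]) if chain else "0" * 64,
    }
    chain.append(block)
    return block

def verify(chain):
    return all(
        chain[i]["prev_hash"] == block_hash(chain[i - 1])
        for i in range(1, len(chain))
    )

chain = []
append_block(chain, "request record", "alice")
append_block(chain, "grant access", "hospital-A")
print(verify(chain))          # True
chain[0]["user"] = "mallory"  # tamper with the log...
print(verify(chain))          # ...and verification fails: False
```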

12.
13.
This study evaluated the psychoeducational reports of 205 children who were referred to the UCLA Neuropsychiatric Hospital for evaluation of academic achievement problems. Factor analysis of the report data identified a medical factor, a school history factor, and a family factor. Following the factor analysis, all three factors as well as measures of cognitive functioning and academic achievement were entered into a discriminant analysis to determine which variables, if any, helped predict the educational placement finally recommended for each child at the end of the individual psychoeducational work-up. Statistical analysis indicated different results for each of four educational placement options, with the greatest predictability associated with regular classroom placement. Across all placements, the variable that allowed the greatest accuracy in predicting educational programming recommendations was academic achievement, measured as the mean achievement score on standardized achievement tests.
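As a loose, generic sketch of the two-stage analysis described above (factor analysis of report variables, then discriminant analysis to predict placement), the snippet below uses scikit-learn on synthetic data. Every variable count, dimension, and component choice here is an illustrative assumption; none of it reproduces the study's data or exact procedures.

```python
# Generic pipeline: reduce report variables to factors, then enter the
# factors plus achievement into a discriminant analysis of placement.
# Synthetic data; all shapes and settings are assumptions.
import numpy as np
from sklearn.decomposition import FactorAnalysis
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(3)
X_report = rng.normal(size=(205, 12))          # 12 report variables, 205 children
achievement = rng.normal(size=(205, 1))        # mean standardized achievement score
placement = rng.integers(0, 4, size=205)       # 4 placement options

fa = FactorAnalysis(n_components=3, random_state=0)  # medical/school/family factors
factors = fa.fit_transform(X_report)

X = np.hstack([factors, achievement])
lda = LinearDiscriminantAnalysis().fit(X, placement)
print("training accuracy:", lda.score(X, placement))
```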

14.
Multitrait-Multimethod (MTMM) matrices are often analyzed by means of confirmatory factor analysis (CFA). However, fitting MTMM models often leads to improper solutions or non-convergence. In an attempt to overcome these problems, various alternative CFA models have been proposed, but none of these completely solves the problem of improper solutions. In the present paper, an approach is proposed in which improper solutions are ruled out altogether and convergence is guaranteed. The approach is based on constrained variants of components analysis (CA). Besides the fact that these methods do not give improper solutions, they have the advantage of providing component scores which can later be used to relate the components to external variables. The new methods are illustrated by means of simulated data as well as empirical data sets. This research has been made possible by a fellowship from the Royal Netherlands Academy of Arts and Sciences to the first author. The authors are obliged to three anonymous reviewers and an associate editor for constructive suggestions on the first version of this paper.

15.
The present study used a treatment analysis following ambiguous functional analysis results to evaluate potential treatments for reducing the self-injurious behavior (SIB) of a 32-year-old male with profound mental retardation. In addition, effective treatments were determined for increasing compliance with increasingly complex self-care tasks. The results indicated that a positive reinforcement procedure with extinction was useful for reducing SIB and increasing compliance during three increasingly complex tasks. The usefulness of treatment analysis procedures following ambiguous functional analysis results is discussed. Copyright © 2009 John Wiley & Sons, Ltd.

16.
17.
Evidence about a suspect's behavioural similarity across a series of crimes has been presented in legal proceedings in at least three different countries. Its admission as expert evidence, whilst still rare, is becoming more common; it is therefore important to understand how such evidence is received by jurors and legal professionals. This article reports a qualitative analysis of mock jurors' deliberations about expert linkage analysis evidence. Three groups of mock jurors (N = 20) were presented with the prosecution's linkage analysis evidence from the USA State v. Fortin I murder trial and with expert evidence for the defence constructed for the purposes of the study. Each group was asked to deliberate and reach a verdict. Deliberations were video-recorded and subjected to thematic content analysis. The themes that emerged were varied. The analysis suggested that the mock jurors were cautious of the expert evidence of behavioural similarity and, in some cases, sceptical of the expert. They articulated a preference that expert opinion be supported with statistics. Additional themes included jurors' misconceptions concerning what is typical offender behaviour during rape, which suggests a need for expert linkage analysis evidence regarding behavioural similarities and the relative frequencies of crime scene behaviours. Copyright © 2010 John Wiley & Sons, Ltd.

18.
19.
Configural frequency analysis (CFA) is a widely used method of explorative data analysis. It tries to detect patterns in the data that occur significantly more or significantly less often than expected by chance. Patterns which occur more often than expected by chance are called CFA types, while those which occur less often than expected by chance are called CFA antitypes. The patterns detected are used to generate knowledge about the mechanisms underlying the data. We investigate the ability of CFA to detect adequate types and antitypes in a number of simulation studies. The basic idea of these studies is to predefine sets of types and antitypes and a mechanism which uses them to create a simulated data set. This simulated data set is then analysed with CFA and the detected types and antitypes are compared to the predefined ones. The predefined types and antitypes together with the method to generate the data are called a data generation model. The results of the simulation studies show that CFA can be used in quite different research contexts to detect structural dependencies in observed data. In addition, we can learn from these simulation studies how much data is necessary to enable CFA to reconstruct the predefined types and antitypes with sufficient accuracy. For one of the data generation models investigated, implicitly underlying knowledge space theory, it was shown that zero‐order CFA can be used to reconstruct the predefined types (which can be interpreted in this context as knowledge states) with sufficient accuracy. Theoretical considerations show that first‐order CFA cannot be used for this data generation model. Thus, it is wrong to consider first‐order CFA, as is done in many publications, as the standard or even only method of CFA.
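For concreteness, here is a rough sketch of first-order CFA on three dichotomous variables: expected cell frequencies come from products of marginal proportions, and a binomial test flags cells that occur significantly more often (types) or less often (antitypes) than expected. It assumes scipy's binomtest and a simple Bonferroni correction; the simulated data and planted pattern are invented for illustration.

```python
# First-order CFA sketch on a 2x2x2 table: expected counts from products
# of marginals; significant cells are flagged as types or antitypes.
# Illustrative only; real CFA applications vary in alpha adjustment.
from itertools import product

import numpy as np
from scipy.stats import binomtest

rng = np.random.default_rng(4)
data = rng.integers(0, 2, size=(500, 3))        # 500 cases, 3 dichotomous variables
data[:40] = [1, 1, 1]                           # plant a "type" pattern

n = len(data)
marginals = data.mean(axis=0)                   # P(variable j = 1)
for pattern in product([0, 1], repeat=3):
    observed = int(np.all(data == pattern, axis=1).sum())
    p_exp = np.prod([m if v else 1 - m for v, m in zip(pattern, marginals)])
    test = binomtest(observed, n, p_exp)
    if test.pvalue < 0.05 / 8:                  # Bonferroni over the 8 cells
        kind = "type" if observed > n * p_exp else "antitype"
        print(pattern, observed, round(n * p_exp, 1), kind)
```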

20.
Analysis of scholarly citations involving behavioral journals reveals that, consistent with its mission, applied behavior analysis research frequently references the basic behavioral literature but, as some have suspected, exerts narrow scholarly influence.
