Similar Documents (20 results found)
1.
Automatic disease classification has been one of the most intensively studied topics in recent years because it can quickly provide a diagnosis to the patient. In this process, segmentation of the regions of interest plays a fundamental role in the subsequent classification. Skin lesion segmentation is no different: many recent studies have achieved interesting results, making it an important tool in aiding the medical diagnosis of skin diseases. In this work, a morphological geodesic active contour segmentation (MGAC) method with automatic initialization is proposed. It uses mathematical morphology, which provides a good approximation to the underlying partial differential equation at lower computational cost and without stability problems, and it is fully automatic. The proposed method was tested on a stable and well-known dermoscopic image database provided by Pedro Hispano Hospital (PH2) and was compared both with methods that use machine learning or deep learning, such as fully convolutional networks (FCN), full resolution convolutional networks (FrCN), and deep class-specific learning with probability-based step-wise integration (DCL-PSI), and with traditional methods such as JSEG, statistical region merging (SRM), Level Set, and ASLM. The MGAC showed good results on all similarity metrics compared in this work, including the Jaccard index (86.16%), Dice coefficient (92.09%), and Matthews correlation coefficient (87.52%), and also achieved good sensitivity (91.72%), specificity (97.99%), accuracy (94.59%), and F-measure (93.82%). The proposed method thus outperformed the traditional methods on all these metrics, and outperformed the machine learning and deep learning methods on the Jaccard index, Dice coefficient, and specificity.
This confirms that the MGAC can efficiently segment skin lesions, and it has great potential to aid medical diagnosis.
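The overlap metrics this abstract reports are standard and straightforward to reproduce. The sketch below (not from the paper; the tiny masks are hypothetical) computes them from flattened binary masks:

```python
from math import sqrt

def segmentation_metrics(pred, truth):
    """Confusion-matrix metrics for two flat binary masks (lists of 0/1)."""
    tp = sum(p == 1 and t == 1 for p, t in zip(pred, truth))
    fp = sum(p == 1 and t == 0 for p, t in zip(pred, truth))
    fn = sum(p == 0 and t == 1 for p, t in zip(pred, truth))
    tn = sum(p == 0 and t == 0 for p, t in zip(pred, truth))
    return {
        "jaccard": tp / (tp + fp + fn),
        "dice": 2 * tp / (2 * tp + fp + fn),
        "mcc": (tp * tn - fp * fn)
               / sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn)),
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "accuracy": (tp + tn) / len(pred),
    }

# Toy example: predicted vs. ground-truth mask, flattened to 1-D.
m = segmentation_metrics([1, 1, 0, 0], [1, 0, 1, 0])
```

Note that Jaccard and Dice are monotonically related (D = 2J/(1+J)), which is why a method that leads on one typically leads on the other as well.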

2.
Iris segmentation is a complex research area, as human eyes contain intricate details that are difficult to process. With the advancement of technology, security applications employ biometrics to verify an individual's identity. Although the fingerprint is the most commonly used biometric, the iris is the most promising one. However, extracting the iris is not a simple task, and iris recognition accuracy depends on the effectiveness of the iris segmentation. With this in mind, this work proposes a reliable iris segmentation algorithm based on SuperPixel Segmentation (SPS). First, the eyelids and pupil of the eye image are detected; segmentation of the iris follows. The proposed approach is applied to four benchmark datasets: the CASIA Iris V1, V2, and V3 databases and the UBIRIS V2 database. The performance of the proposed iris segmentation algorithm is compared with existing techniques, and the algorithm proves stable across all datasets with respect to segmentation accuracy, sensitivity, and specificity.

3.
A novel classification framework for clinical decision making is proposed that uses Extremely Randomized Tree (ERT) based feature selection and a Diverse Intensified Strawberry Optimized Neural network (DISON). DISON is a feed-forward artificial neural network whose weights and biases are optimized with a two-phase training strategy: the Strawberry Plant Optimization (SPO) algorithm and gradient-descent back-propagation are applied sequentially to identify the optimum weights and biases. The novel two-phase training method and the stochastic duplicate-elimination strategy of SPO help address the local-optima problem of conventional neural networks. The relevant attributes are selected based on feature importance values computed with an ERT classifier. The Vertebral Column, Pima Indian Diabetes (PID), Cleveland Heart Disease (CHD) and Statlog Heart Disease (SHD) datasets from the University of California Irvine machine learning repository are used for experimentation. The framework achieved an accuracy of 87.17% for Vertebral Column, 90.92% for PID, 93.67% for CHD and 94.5% for SHD. The classifier's performance has been compared with existing work and is found to be competitive in terms of accuracy, sensitivity and specificity. A Wilcoxon test confirms the statistical superiority of the proposed method.
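The two-phase idea, a stochastic global search whose best candidate then seeds gradient-descent refinement, can be illustrated on a toy one-parameter quadratic. Everything below (the objective, bounds, learning rate) is an illustrative stand-in, not the paper's SPO or DISON:

```python
import random

def two_phase_minimize(f, grad, lo=-10.0, hi=10.0,
                       n_candidates=50, steps=100, lr=0.1, seed=0):
    """Phase 1: crude stochastic global search (standing in for the
    population-based SPO phase). Phase 2: gradient-descent refinement."""
    rng = random.Random(seed)
    # Phase 1: sample candidate parameters and keep the best one.
    w = min((rng.uniform(lo, hi) for _ in range(n_candidates)), key=f)
    # Phase 2: polish the best candidate with plain gradient descent.
    for _ in range(steps):
        w -= lr * grad(w)
    return w

f = lambda w: (w - 3.0) ** 2          # toy loss with minimum at w = 3
w_star = two_phase_minimize(f, lambda w: 2.0 * (w - 3.0))
```

The global phase reduces the chance of starting in a bad basin; the gradient phase then converges quickly inside it, which is the rationale the abstract gives for combining SPO with back-propagation.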

4.
This paper applies deep learning to identify autism spectrum disorder (ASD) patients in a large brain-imaging dataset based on the patients' brain activation patterns. The brain images are collected from the ABIDE (Autism Brain Imaging Data Exchange) database. The proposed convolutional neural network (CNN) architecture investigates functional connectivity patterns between different brain areas to identify patterns specific to ASD. The enhanced CNN uses blocks of temporal convolutional layers that employ causal convolutions and dilations, making it suitable for sequential data with temporally large receptive fields. Experimental results show that the proposed ECNN achieves an accuracy of up to 80%. The learned patterns show an anticorrelation of brain function between anterior and posterior areas of the brain; that is, disrupted brain connectivity is one primary marker of ASD.
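As an illustration of the causal dilated convolutions the abstract refers to (a minimal sketch, not the authors' layer), the output at time t depends only on inputs at t, t-d, t-2d, ..., so no future information leaks in, and the receptive field grows with the dilation factor d:

```python
def causal_dilated_conv1d(x, weights, dilation=1):
    """y[t] = sum_k weights[k] * x[t - k*dilation], zero-padded for t < 0."""
    y = []
    for t in range(len(x)):
        acc = 0.0
        for k, w in enumerate(weights):
            idx = t - k * dilation
            if idx >= 0:          # causal: never read past the current time
                acc += w * x[idx]
        y.append(acc)
    return y

y = causal_dilated_conv1d([1.0, 2.0, 3.0, 4.0], [1.0, 0.5], dilation=2)
# -> [1.0, 2.0, 3.5, 5.0]
```

Stacking such layers with dilations 1, 2, 4, ... gives a receptive field that grows exponentially with depth, which is what makes this construction attractive for long fMRI time series.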

5.

Episodic memory is the first and most severely affected cognitive domain in Alzheimer's disease (AD), and it is also the key early marker in prodromal stages including amnestic mild cognitive impairment (MCI). The relative ability of memory tests to discriminate between MCI and normal aging has not been well characterized. We compared the classification value of widely used verbal memory tests in distinguishing healthy older adults (n = 51) from those with MCI (n = 38). Univariate logistic regression indicated that the total learning score from the California Verbal Learning Test-II (CVLT-II) ranked highest in distinguishing MCI from normal aging (sensitivity = 90.2; specificity = 84.2). Inclusion of the delayed recall condition of a story memory task (i.e., WMS-III Logical Memory, Story A) enhanced the overall accuracy of classification (sensitivity = 92.2; specificity = 94.7). Combining Logical Memory recognition and CVLT-II long delay best predicted progression from MCI to AD over a 4-year period (accurate classification = 87.5%). Learning across multiple trials may provide the most sensitive index for initial diagnosis of MCI, but inclusion of additional variables may enhance overall accuracy and may represent the optimal strategy for identifying individuals most likely to progress to dementia.

6.
Facial expressions play a crucial role in emotion recognition compared to other modalities. In this work, an integrated network capable of recognizing emotion intensity levels from facial images in real time using deep learning is proposed. The cognitive study of facial expressions based on intensity levels is useful in applications such as healthcare, collaborative robotics and Industry 4.0. This work augments emotion recognition with two other important parameters, valence and emotion intensity, which helps a machine respond to an emotion more appropriately. The valence model classifies emotions as positive or negative, and the discrete model classifies them as happy, anger, disgust, surprise or neutral using a Convolutional Neural Network (CNN). Feature extraction and classification are carried out on the CMU Multi-PIE database. The proposed architecture achieves 99.1% and 99.11% accuracy for the valence and discrete models respectively on offline image data with 5-fold cross-validation; the average accuracies achieved in real time are 95% and 95.6% respectively. This work also contributes a new database built from facial landmarks, with three intensity levels of facial expression, which helps classify expressions into low, mild and high intensities. Performance is also tested with different classifiers. The proposed integrated system is configured for real-time Human Robot Interaction (HRI) applications on a test bed consisting of a Raspberry Pi and an RPA platform to assess its performance.

7.
Bennett DJ. Perception, 2007, 36(3): 375-390
In a number of studies, reaction time has been found to increase with increases in size ratio on a same-different form-comparison task. Bennett and Warren (2002, Perception & Psychophysics, 64, 462-477) teased apart environmental and retinal size ratios by showing the forms along a (simulated) textured hallway, viewed monocularly, in a simultaneous form-comparison task; roughly equal effects of environmental and retinal size ratios were found. The current study enhanced scene-size information by showing the forms in simulated stereo and by adding texture to the forms themselves; special care was also taken to perceptually isolate the stimuli. In two experiments with different kinds of forms, strong effects of environmental size ratio were found; no effect of retinal size ratio (same trials) was observed in either experiment. The results support the hypothesis that the (same-trial) form-size codings reflect 'all told' estimates of environmental size, and they place constraints on modeling the functional architecture of the visual system.

8.
In the present study, we discuss reliability, consistency, and method specificity based on the CT-C(M-1) model, which provides clear definitions of trait and method factors and can facilitate parameter estimation. Properties of the reliability coefficient, the consistency coefficient, and the method-specificity coefficient of the summated score for a trait factor are addressed. The consistency coefficient and the method-specificity coefficient are both functions of the number of items, the average item consistency, and the average item method specificity. The usefulness of the findings is demonstrated in an alternative approach proposed for scale reduction. The approach, taking into account both traits and methods, helps identify the items leading to the maximum of convergent validity or method effects. The approach, illustrated with a simulated data set, is recommended for scale development based on multitrait-multimethod designs.

9.
Visible surfaces of three-dimensional objects are reconstructed from two-dimensional retinal images in the early stages of human visual processing. In the computational model of surface reconstruction based on the standard regularization theory, an energy function is minimized. Two types of model have been proposed, called "membrane" and "thin-plate" after their function formulas, in which the first or the second derivative of depth information is used. In this study, the threshold of surface reconstruction from binocular disparity was investigated using a sparse random dot stereogram, and the predictive accuracy of these models was evaluated. It was found that the thin-plate model reconstructed surfaces more accurately than the membrane model and showed good agreement with experimental results. The likelihood that these models imitate human processing of visual information is discussed in terms of the size of receptive fields in the visual pathways of the human cortex.
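The two energy functions referred to here have standard forms in the regularization literature (a sketch in conventional notation, with f the reconstructed surface, d_i the sparse disparity samples at (x_i, y_i), and lambda a regularization weight):

```latex
E_{\text{membrane}}(f) = \iint \big( f_x^2 + f_y^2 \big)\, dx\, dy
    + \lambda \sum_i \big( f(x_i, y_i) - d_i \big)^2

E_{\text{thin-plate}}(f) = \iint \big( f_{xx}^2 + 2 f_{xy}^2 + f_{yy}^2 \big)\, dx\, dy
    + \lambda \sum_i \big( f(x_i, y_i) - d_i \big)^2
```

The membrane penalizes first derivatives (slope) and therefore tolerates creases; the thin plate penalizes second derivatives (curvature) and interpolates more smoothly across sparse dots, which is consistent with the reported result that it better matches human surface reconstruction.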

10.
Meta-analysis is an important method for drawing relatively accurate and representative conclusions about a topic of interest from existing studies, and it is widely used in psychology, education, management, medicine, and other social-science research. Reliability is a key indicator of test quality, and composite reliability estimates test reliability fairly accurately, yet no existing literature provides a meta-analysis method for composite reliability. After comparing the strengths and weaknesses of three models for meta-analyzing a parameter, this study derives point-estimation and interval-estimation methods for meta-analysis of composite reliability under the varying-coefficient model. Using interval coverage as the criterion, simulation studies show that the proposed interval-estimation method is appropriate. An example illustrates how to conduct a meta-analysis of the composite reliability of a unidimensional test.
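For a standardized unidimensional test, composite reliability is usually computed from the factor loadings with the McDonald-style formula; the sketch below uses illustrative loadings, not data from the study:

```python
def composite_reliability(loadings):
    """McDonald-style composite reliability for standardized items:
    omega = (sum of loadings)^2
            / ((sum of loadings)^2 + sum of error variances)."""
    s = sum(loadings)
    errors = sum(1.0 - l * l for l in loadings)  # standardized error variances
    return s * s / (s * s + errors)

# Two hypothetical items with standardized loadings 0.7 and 0.8.
omega = composite_reliability([0.7, 0.8])
```

A meta-analysis would pool such omega estimates across studies, which is where the varying-coefficient model and the interval-estimation method of the abstract come in.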

11.
We propose a hierarchical neural architecture able to recognise observed human actions. Each layer in the architecture represents increasingly complex features of human activity. The first layer is a SOM which performs dimensionality reduction and clustering of the feature space; it represents the dynamics of the stream of posture frames in action sequences as activity trajectories over time. The second layer is another SOM which clusters the activity trajectories of the first-layer SOM and learns to represent action prototypes. The third and last layer is a neural network that learns to label the action prototypes of the second-layer SOM and is independent, to a certain extent, of the camera's angle and relative distance to the actor. Experiments on action clips from the INRIA 4D repository gave encouraging results. In terms of representational accuracy, measured as the recognition rate over the training set, the architecture exhibits 100% accuracy, indicating that actions with overlapping patterns of activity can be correctly discriminated. On the other hand, the architecture exhibits a 53% recognition rate when presented with the same actions interpreted and performed by a different actor. Experiments on actions captured from different viewpoints revealed the system's robustness to camera rotation: recognition accuracy was comparable to the single-viewpoint case. To further assess the system's performance we also devised a behavioural experiment in which humans were asked to recognise the same set of actions, captured from different points of view. The results of this behavioural study let us argue that our architecture is a good candidate cognitive model of human action recognition, as its results are comparable to those observed in humans.
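The SOM layers described here share one core operation: find the best-matching unit (BMU) for an input and pull its weight vector toward that input. A minimal sketch (neighbourhood updates omitted for brevity; the learning rate is illustrative):

```python
def som_step(weights, x, lr=0.5):
    """One simplified SOM update: locate the best-matching unit and move
    its weight vector a fraction lr of the way toward the input x."""
    def dist2(w):
        return sum((wi - xi) ** 2 for wi, xi in zip(w, x))
    bmu = min(range(len(weights)), key=lambda i: dist2(weights[i]))
    weights[bmu] = [wi + lr * (xi - wi)
                    for wi, xi in zip(weights[bmu], x)]
    return bmu

# Two units; the input is closest to unit 1, so unit 1 moves toward it.
w = [[0.0, 0.0], [1.0, 1.0]]
bmu = som_step(w, [0.9, 1.1])
```

Feeding a sequence of posture frames through such a map and recording the successive BMU indices yields exactly the kind of "activity trajectory" the first layer of the architecture clusters.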

12.
An Intrusion Detection System (IDS) monitors network traffic for suspicious activity and issues alerts when such activity is detected. Existing IDS methods, however, are built on outdated attack data and are unable to identify modern attacks or malicious trends. For this reason, this study develops a new multi-swarm adaptive grasshopper optimization algorithm (MSAGOA) that uses a fuzzy-logic adaptation mechanism across a group of swarms to protect against sophisticated attacks. The MSAGOA technique has the global optimization capability and rapid convergence needed to attain optimal feature subsets for identifying attack types in IDS datasets. In MSAGOA, a learning engine (Extreme Learning Machine, Naive Bayes, Random Forest or Decision Tree) is applied as a fitness function to select the most discriminating features and maximize classification performance; the best-performing classifier is then selected as the fitness function, and performance is measured in terms of accuracy, detection rate, and false alarm rate. Simulations are performed on three IDS datasets: NSL-KDD, AWID-ATK-R, and NGIDS-DS. The experimental results demonstrate that MSAGOA performs well, obtaining a detection rate of 99.86% and accuracy of 99.89% on NSL-KDD, a detection rate of 98.73% and accuracy of 99.67% on AWID-ATK-R, and a detection rate of 89.50% and accuracy of 90.23% on NGIDS-DS. In addition, the performance is compared with several existing techniques to show the efficacy of the proposed approach.

13.
Evidence suggests that a correlation in sensitivity exists between close retinal areas. Correlations were studied as a function of distance from fixation and of distance between areas. In Experiment I, both forms were equidistant from fixation, at five different separations. In Experiment II, both forms fell on an imaginary line through fixation, so the forms usually did not fall on equally sensitive areas as they had in Experiment I. Both experiments showed that accuracy was lower for two-form than for one-form displays. Closer forms had the lowest accuracy, suggesting perhaps mutual contour masking. However, in the sense of an intratrial correlation, no significant relationships were found. These two types of independence are discussed in terms of contour masking and varying sensitivity.

14.
于文勃, 梁丹丹. 《心理科学进展》 (Advances in Psychological Science), 2018, 26(10): 1765-1774
Words are the basic structural units of language, and word segmentation is a key step in language processing. Cues for segmenting the speech stream come from three sources: phonology, semantics, and syntax. Phonological cues include probabilistic information, phonotactic rules, and prosody; prosodic cues in turn include lexical stress, duration, and pitch. Individuals gradually master these cues from the earliest stages of language exposure, and their use is somewhat specific to the language background. Syntactic and semantic cues are higher-level mechanisms that operate mainly in the later stages of word segmentation. Future research should examine word-segmentation cues in spoken-language processing from the perspectives of lifespan language development and language specificity.

15.
Fingerprinting is the most widely used and recognised biometric technology for human authentication; it has a proven record as highly secure and convenient compared to passwords. Hence, fingerprint sensing has become a common, product-differentiating feature in smartphones, tablets and PCs. This paper develops a fingerprint recognition system for authenticating persons using a new technique, termed 'associative memory with a modified multi-connect architecture'. This, in turn, may pave the way to more efficient fingerprint systems with higher accuracy and less processing time. Further, with additional tranches of associative memory, such systems may in the future perform highly complex operations while saving memory. Three databases are used: the FVC (2004) database, an internal database, and the international NIST database 4. The FVC (2004) database contains 640 fingerprint patterns, the internal database contains 2500 different fingerprint patterns, and the international NIST database 4 consists of 2000 pairs of fingerprint patterns. The proposed fingerprint recognition system has an average accuracy of 99.56% and a pattern-recognition processing time of approximately 30 s.
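A classical way to realise an associative memory is a Hopfield-style network (a sketch under standard Hebbian assumptions, not the paper's modified multi-connect architecture): store +/-1 patterns in a weight matrix, then recover a stored pattern from a corrupted cue:

```python
def hopfield_train(patterns):
    """Hebbian weight matrix for +/-1 patterns, zero diagonal."""
    n = len(patterns[0])
    w = [[0.0] * n for _ in range(n)]
    for p in patterns:
        for i in range(n):
            for j in range(n):
                if i != j:
                    w[i][j] += p[i] * p[j]
    return w

def hopfield_recall(w, x, iters=5):
    """Synchronous recall: repeatedly threshold the weighted sums."""
    for _ in range(iters):
        x = [1 if sum(wij * xj for wij, xj in zip(row, x)) >= 0 else -1
             for row in w]
    return x

w = hopfield_train([[1, -1, 1, -1]])
recalled = hopfield_recall(w, [1, -1, 1, 1])  # last bit corrupted
```

Content-addressable recall of this kind is what lets a noisy fingerprint template still retrieve the matching stored identity.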

16.
In decision situations of everyday life, the potential positive or negative consequences of a decision are often specified, and the associated probabilities are known or in principle calculable ("decisions under risk"). On the basis of correlations reported in patient studies, it has recently been proposed that decisions under risk involve strategic components, i.e. calculation of the risk, as well as emotional processes, i.e. processing feedback from previous decisions. However, the potential impact of calculative strategies on decision-making under risk has not so far been investigated systematically. In the current study, we examined 42 healthy subjects (21 female) with the Game of Dice Task, which measures decisions under risk, and a questionnaire assessing strategy application in items comparable to the choices in the Game of Dice Task. In addition, the subjects performed the Iowa Gambling Task, which examines decision-making under ambiguity, and a neuropsychological test battery focusing on executive functions. The results indicate that deciding advantageously in a decision-making task with explicit and stable rules is linked to applying calculative strategies; in contrast, individuals who decide intuitively prefer risky or disadvantageous choices in the Game of Dice Task. Applying calculative strategies was correlated with executive functions but not with performance on the Iowa Gambling Task. The results support the view that calculative processes and strategies may improve decision-making under explicit risk conditions.

17.
On Similarity Coefficients for 2×2 Tables and Correction for Chance
This paper studies correction for chance in coefficients that are linear functions of the observed proportion of agreement. It unifies and extends various results on correction for chance in the literature, and a specific class of coefficients is used to illustrate the derived results. Coefficients in this class, e.g. the simple matching coefficient and the Dice/Sørensen coefficient, become equivalent after correction for chance, irrespective of which expectation is used: they become Cohen's kappa, Scott's pi, Mak's rho, Goodman and Kruskal's lambda, or Hamann's eta, depending on which expectation is considered appropriate. Both a multicategorical generalization and a multivariate generalization are discussed.
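The correction studied here replaces a coefficient S by (S - E)/(1 - E) for some chance expectation E. With Cohen's marginal-product expectation, correcting the simple matching coefficient of a 2x2 table yields Cohen's kappa, as a small worked sketch shows (the counts are hypothetical):

```python
def simple_matching(a, b, c, d):
    """Observed proportion of agreement for a 2x2 table [[a, b], [c, d]]."""
    return (a + d) / (a + b + c + d)

def cohen_expectation(a, b, c, d):
    """Chance agreement under Cohen's model: products of row/column marginals."""
    n = a + b + c + d
    return ((a + b) * (a + c) + (c + d) * (b + d)) / (n * n)

def corrected_for_chance(s, e):
    """Generic correction for chance: (S - E) / (1 - E)."""
    return (s - e) / (1 - e)

# Hypothetical table: a=20 (both positive), b=5, c=10, d=15 (both negative).
kappa = corrected_for_chance(simple_matching(20, 5, 10, 15),
                             cohen_expectation(20, 5, 10, 15))
```

Swapping `cohen_expectation` for a different expectation (e.g. Scott's common-marginal model) gives Scott's pi instead, which is the sense in which the coefficients "become equivalent" after correction.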

18.
19.
Two letters varying in level of confusability were presented, either simultaneously for 75 msec or sequentially for 75 msec each, in adjacent retinal locations. The retinal locus of presentation varied from trial to trial, and subjects both identified and located the presented letters. Identification accuracy was higher for nonconfusable than for confusable letter pairs in the simultaneous condition, but not in the sequential condition. This result is interpreted as support for the notion that inhibition between similar or identical features shared by confusable letters occurs only when the letters are presented simultaneously. A relative position effect, with performance on the peripheral letter higher than on the central letter, was found for simultaneously presented and second sequentially presented letters, but not for first sequentially presented letters. This result is interpreted in terms of the assumption that feature perturbations, with foveal perturbations more likely than peripheral ones, affect simultaneously presented and second-presented letters, but not first-presented letters. The pattern of results for relative location accuracy showed many of the same effects as identification performance. A model assuming that location errors reflect feature transpositions is outlined; it can account for the absolute and relative location results and for the correlation between relative location and identification accuracy.

20.
Accurate glioma detection using magnetic resonance imaging (MRI) is a complicated job. In this research, a deep learning model is presented for glioma and stroke-lesion detection. The proposed architecture consists of 14 layers: the input layer is followed by three convolutional layers; the 5th, 6th and 7th layers perform batch normalization, followed by three rectified linear unit (ReLU) layers; the eleventh layer is 2D average pooling, followed by fully connected (FC), softmax and classification layers. The presented method is verified on six MICCAI databases: multimodal brain tumor segmentation (BRATS) 2013, 2014, 2015, 2016 and 2017, and sub-acute ischemic stroke lesion segmentation (SISS-ISLES) 2015. The computational time measured on each benchmark dataset, 53 s on BRATS 2013, 26 s on BRATS 2014, 41 s on BRATS 2015, 36 s on BRATS 2016, 38 s on BRATS 2017 and 4.13 s on ISLES 2015, shows that the proposed technique has low processing time. The proposed method achieved 0.9943 ACC, 1.00 SP, 0.9839 SE on BRATS 2013; 0.9538 ACC, 0.9991 SP, 0.7196 SE on BRATS 2014; 0.9978 ACC, 1.00 SP, 0.9919 SE on BRATS 2015; 0.9569 ACC, 0.9491 SP, 0.9755 SE on BRATS 2016; 0.9778 ACC, 0.9770 SP, 0.9789 SE on BRATS 2017; and 0.9227 ACC, 1.00 SP, 0.8814 SE on ISLES 2015.


Copyright © 北京勤云科技发展有限公司 (Beijing Qinyun Technology Development Co., Ltd.)  京ICP备09084417号