Similar Documents
20 similar documents retrieved (search time: 31 ms)
1.
2.
In this paper a novel method based on facial skin aging features and an Artificial Neural Network (ANN) is proposed to classify human face images into four age groups. The facial skin aging features are extracted using the Local Gabor Binary Pattern Histogram (LGBPH) and wrinkle analysis. The ANN classifier is a two-layer feedforward backpropagation neural network. The proposed age classification framework is trained and tested on face images from the PAL face database and achieves classification accuracies of up to 94.17% and 93.75% for male and female subjects, respectively.
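
Below is a hedged Python sketch of an LGBPH-style pipeline in the spirit of this abstract: Gabor filtering, LBP histograms of the filter magnitude responses, and a small feedforward classifier. The filter settings, histogram sizes and classifier width are illustrative assumptions rather than the authors' values, and the wrinkle-analysis features are omitted.

```python
# Minimal LGBPH-style feature sketch followed by a small feedforward network.
# All parameters here are assumptions for illustration, not the paper's settings.
import numpy as np
from skimage.filters import gabor
from skimage.feature import local_binary_pattern
from sklearn.neural_network import MLPClassifier

def lgbph_features(gray_face, frequencies=(0.1, 0.2), orientations=4, P=8, R=1):
    """Concatenate LBP histograms computed on Gabor magnitude responses."""
    hists = []
    for f in frequencies:
        for k in range(orientations):
            real, imag = gabor(gray_face, frequency=f, theta=k * np.pi / orientations)
            magnitude = np.hypot(real, imag)
            lbp = local_binary_pattern(magnitude, P, R, method="uniform")
            h, _ = np.histogram(lbp, bins=P + 2, range=(0, P + 2), density=True)
            hists.append(h)
    return np.concatenate(hists)

# faces: list of 2-D grayscale arrays; ages: group labels in {0, 1, 2, 3}
# clf = MLPClassifier(hidden_layer_sizes=(64,), max_iter=500)
# clf.fit([lgbph_features(f) for f in faces], ages)
```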

3.
4.
In image processing, image enhancement is a vital task: it improves image quality by removing blur or noise. Image enhancement is used in many applications, such as medicine, satellite imaging, agriculture and oceanography. This paper focuses on IoT satellite applications. High-resolution satellite images are essential in most of these applications, yet low-resolution images are strongly affected by absorption, scattering, and limited spatial and spectral resolution. To improve the resolution of such images, Discrete Wavelet Transform (DWT) based interpolation, a combination of DWT and the Stationary Wavelet Transform (SWT), and bicubic interpolation have been used. However, the combined DWT-SWT method fails to avoid distortion in the resulting images, bicubic interpolation is relatively complex and does not give a clear image, and DWT-based interpolation loses linear features, introduces unwanted oscillations and loses edge information. Therefore, a combined DWT and Gabor wavelet transform (GWT) technique is proposed to overcome these issues. The image is decomposed into multiple sub-bands with the DWT, and the GWT is employed to minimise the loss of information in the wavelet domain. The advantages of the GWT are lower complexity, noise removal and image sharpening. The PSNR and MSE of the proposed method are compared with existing methods on different satellite images.
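
The following is a minimal sketch of the DWT-plus-Gabor idea described above: decompose the image into sub-bands, filter the detail sub-bands with Gabor kernels to sharpen edge content, and reconstruct. The wavelet, Gabor frequency and blending weight are assumptions for illustration, not the parameters used in the paper.

```python
# Sketch only: DWT decomposition, Gabor filtering of detail bands, reconstruction.
import numpy as np
import pywt
from skimage.filters import gabor

def dwt_gabor_enhance(image, wavelet="haar", frequency=0.2):
    cA, (cH, cV, cD) = pywt.dwt2(image.astype(float), wavelet)
    enhanced_details = []
    for band, theta in zip((cH, cV, cD), (0.0, np.pi / 2, np.pi / 4)):
        real, imag = gabor(band, frequency=frequency, theta=theta)
        # Blend the Gabor magnitude back into the detail band (assumed weighting).
        enhanced_details.append(band + 0.5 * np.hypot(real, imag))
    return pywt.idwt2((cA, tuple(enhanced_details)), wavelet)
```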

5.
Saunders JA. Perception, 2003, 32(2): 211-233.
Texture can be an effective source of information for the perception of slant and curvature. A computational assumption required for some texture cues is that the texture must be flat along a surface. Many textures violate this assumption and have some sort of texture relief: variations perpendicular to the surface. Examples include grass, which has vertical elements, and scattered rocks, which are volumetric elements with 3-D shapes. Previous studies of the perception of slant from texture have not addressed the case of textures with relief. The experiments reported here test judgments of slant for textures with various types of relief, including textures composed of bumps, columns, and oriented elements. The presence of texture relief was found to affect judgments, indicating that perception of slant from texture is not robust to violations of the flat-texture assumption. For bumps and oriented elements, slant was underestimated relative to matching flat textures, while for column textures, which had visible flat top faces, perceived slant was equal to or greater than for flat textures. The differences can be explained by the way different types of texture relief affect the amount of optical compression in the projected image, which is consistent with results from previous experiments using cue conflicts in flat textures. These results provide further evidence that compression contributes to the perception of slant from texture.

6.
We present a computational model for human texture perception which assigns functional principles to the Gestalt laws of similarity and proximity. Motivated by early vision mechanisms, in the first stage local texture features are extracted using multi-scale filtering and nonlinear spatial pooling. In the second stage, features are grouped according to the spatial feature binding model of the competitive layer model (CLM; Wersing et al. 2001). The CLM uses cooperative and competitive interactions in a recurrent network, where binding is expressed by the layer-wise coactivation of feature-representing neurons. The Gestalt law of similarity is expressed by a non-Euclidean distance measure in the abstract feature space, with proximity taken into account by a spatial component. To choose the stimulus dimensions which allow the most salient similarity-based texture segmentation, the feature similarity metric is reduced to the directions of maximum variance. We show that our combined texture feature extraction and binding model performs segmentation in strong conformity with human perception. The examples range from classical microtextures and Brodatz textures to other classical Gestalt stimuli, which offer a new perspective on the role of texture in more abstract similarity grouping.
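
A minimal sketch of the first stage described above (multi-scale filtering with nonlinear spatial pooling) is given below; the recurrent CLM binding stage is not reproduced. The scales, orientations and pooling window are assumptions.

```python
# First-stage texture features only: multi-scale Gabor energy with local pooling.
import numpy as np
from skimage.filters import gabor
from scipy.ndimage import uniform_filter

def local_texture_features(image, frequencies=(0.1, 0.2, 0.4), orientations=4, pool=7):
    maps = []
    for f in frequencies:
        for k in range(orientations):
            real, imag = gabor(image, frequency=f, theta=k * np.pi / orientations)
            energy = real**2 + imag**2                      # nonlinear (energy) response
            maps.append(uniform_filter(energy, size=pool))  # local spatial pooling
    return np.stack(maps, axis=-1)                          # per-pixel feature vectors
```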

7.
Although many authors have generated comprehensible models from individual networks, much less work has been done on the explanation of ensembles. DIMLP is a special neural network model from which rules can be generated both at the level of a single network and at the level of an ensemble of networks. We applied ensembles of 25 DIMLP networks to several public-domain datasets and to a classification problem related to post-translational modifications of proteins. For the public-domain classification problems, the average predictive accuracy of rulesets extracted from ensembles of neural networks was significantly better than that of rulesets generated from ensembles of decision trees. By varying the architectures of the DIMLP networks we found that the average predictive accuracy of the rules, as well as their complexity, was quite stable. A comparison with other rule extraction techniques applied to neural networks showed that rules generated from DIMLP ensembles gave very good results. In the final problem, related to bioinformatics, the best result obtained by ensembles of DIMLP networks was also significantly better than the best result obtained by ensembles of decision trees. Thus, although neural networks take much longer to train than decision trees and rules are generated at a greater (though still polynomial) computational cost, for several classification problems it was worth using neural network ensembles, as the extracted rules were more accurate on average. The DIMLP software is available for PC-Linux under http://us.expasy.org/people/Guido.Bologna.html.

8.
Adult subjects made monocular size judgements in two experiments in which the independent variables of surface texture and restrictions on viewing conditions were manipulated. Texture density gradients of stimulation had a significant influence on size judgements only under the less reduced conditions of observation, when subjects could see other textured surfaces beyond the surfaces over which judgements were made. Identical manipulations of surface texture had earlier been found to have a highly significant influence on relative distance judgements (Newman, 1971). The principally negative results were thus taken to imply that subjects extract different information from the texture density gradient when judging size than when judging relative distance.

9.
Power Quality (PQ) is becoming increasingly important in electric networks. Signal processing, pattern recognition and machine learning are increasingly studied for the automatic recognition of disturbances that may occur during the generation, transmission and distribution of electricity. Identifying PQ disturbances involves three main steps: using signal processing methods to compute features that represent the disturbances; selecting the most useful of these features to avoid an overly complex classification model; and building a classification model that recognises multiple classes from the selected feature subsets. In this study, one-dimensional (1D) PQ disturbance signals are transformed into two-dimensional (2D) signals, and 2D discrete wavelet transforms (2D-DWT) are used to extract features. Features are extracted with wavelet families such as Daubechies, Biorthogonal, Symlets, Coiflets and Fejer-Korovkin in the 2D-DWT to analyse PQ disturbances. The Whale Optimization Algorithm (WOA) combined with a k-nearest neighbour (KNN) classifier determines the feature subsets. Classifier models that distinguish PQ disturbances are then built using KNN and Support Vector Machine (SVM) methods. The main aim of the study is to determine the features derived from 2D wavelet coefficients for different wavelet families and to establish which of them gives better classification performance for distinguishing PQ disturbance signals. At the same time, different classification methods are simulated and a model that classifies PQ disturbance signals with high performance is created. The generated models are also analysed for their performance at different noise levels (40 dB, 30 dB, 20 dB). The results of this simulation study show that the developed model is superior to conventional models and other 2D signal processing methods in the literature. In addition, the proposed method copes better with noisy signals, with low computational complexity and a higher classification rate.
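
A hedged sketch of the pipeline in this abstract follows: reshape a 1-D disturbance signal into a 2-D matrix, take a 2-D DWT, and feed simple sub-band statistics to a KNN classifier. The reshape size, wavelet and statistics are assumptions, and the WOA feature-selection step is omitted.

```python
# 1-D PQ signal -> 2-D matrix -> 2-D DWT sub-band statistics -> KNN (sketch only).
import numpy as np
import pywt
from sklearn.neighbors import KNeighborsClassifier

def dwt2_features(signal_1d, side=64, wavelet="db4"):
    img = np.resize(signal_1d, (side, side))          # 1-D -> 2-D (assumed layout)
    cA, (cH, cV, cD) = pywt.dwt2(img, wavelet)
    feats = []
    for band in (cA, cH, cV, cD):
        feats += [band.mean(), band.std(), np.abs(band).max(), (band**2).sum()]
    return np.array(feats)

# X = np.array([dwt2_features(s) for s in signals]); y = disturbance_labels
# KNeighborsClassifier(n_neighbors=5).fit(X, y)
```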

10.
We study the semantic relationship between pairs of nouns denoting concrete objects, such as “HORSE - SHEEP” and “SWING - MELON”, and how this relationship is reflected in EEG signals. We collected 18 sets of EEG records, each containing 150 stimulation events. In this work we focus on feature extraction algorithms, and in particular we highlight Common Spatial Patterns (CSP) as a feature extraction method. Based on these features, different classifiers were trained to associate a set of signals with a previously learned human answer belonging to one of two classes: semantically related or not semantically related. Classification accuracy was evaluated against four other feature extraction methods, using classification algorithms from five different families. In all cases, classification accuracy benefited from using CSP instead of FDTW, LPC, PCA or ICA for feature extraction. In particular, the combination CSP-Naïve Bayes achieved the best average precision of 84.63%.
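
Below is a minimal two-class CSP sketch in the spirit of this abstract, followed by a Naive Bayes classifier on log-variance features. The number of spatial filters and the covariance normalisation are assumptions, not the study's settings.

```python
# CSP via a generalised eigenvalue problem, then log-variance features + Naive Bayes.
import numpy as np
from scipy.linalg import eigh
from sklearn.naive_bayes import GaussianNB

def csp_filters(trials_a, trials_b, n_filters=6):
    """trials_*: arrays of shape (n_trials, n_channels, n_samples)."""
    cov = lambda X: np.mean([t @ t.T / np.trace(t @ t.T) for t in X], axis=0)
    Ca, Cb = cov(trials_a), cov(trials_b)
    # Solve Ca w = lambda (Ca + Cb) w; extreme eigenvalues give discriminative filters.
    vals, vecs = eigh(Ca, Ca + Cb)
    order = np.argsort(vals)
    picks = np.r_[order[: n_filters // 2], order[-(n_filters // 2):]]
    return vecs[:, picks].T                           # (n_filters, n_channels)

def log_var_features(trials, W):
    Z = np.einsum("fc,ncs->nfs", W, trials)           # spatially filtered trials
    var = Z.var(axis=2)
    return np.log(var / var.sum(axis=1, keepdims=True))

# W = csp_filters(related_trials, unrelated_trials)
# X = log_var_features(np.concatenate([related_trials, unrelated_trials]), W)
# GaussianNB().fit(X, labels)
```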

11.
Due to progress in computer vision technology, object recognition systems have gained considerable research interest. Though there are numerous object recognition systems in the literature, there is a constant demand for better ones. Taking this as a challenge, this work proposes a novel object recognition system based on points of interest and feature extraction. Initially, the points of interest of the image are selected by means of the Derivative Kadir-Brady (DKB) detector, and the neighbourhood pixels within a particular window size are selected for further processing. Gabor and curvelet features are extracted from the area of interest, followed by Support Vector Machine (SVM) classification. The performance of the proposed object recognition system is evaluated against three analogous techniques in terms of accuracy, precision, recall and F-measure. Experimental analysis shows that the proposed approach outperforms the existing approaches and that its performance is satisfactory.
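
A hedged sketch of the feature-extraction and classification stages described above: Gabor features computed around interest points and fed to an SVM. A standard corner detector stands in for the Derivative Kadir-Brady detector, the curvelet features are omitted, and the window size and filter settings are assumptions.

```python
# Interest-point Gabor features + SVM (sketch only; not the paper's DKB detector).
import numpy as np
from skimage.feature import corner_harris, corner_peaks
from skimage.filters import gabor
from sklearn.svm import SVC

def patch_gabor_features(gray, window=16, n_orient=4, frequency=0.2, max_points=20):
    points = corner_peaks(corner_harris(gray), min_distance=window)[:max_points]
    feats = []
    for r, c in points:
        patch = gray[max(r - window, 0): r + window, max(c - window, 0): c + window]
        for k in range(n_orient):
            real, imag = gabor(patch, frequency=frequency, theta=k * np.pi / n_orient)
            mag = np.hypot(real, imag)
            feats += [mag.mean(), mag.std()]
    return np.array(feats)

# X = [patch_gabor_features(img) for img in images]  # pad/trim to a common length
# SVC(kernel="rbf").fit(np.array(X), labels)
```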

12.
van Tonder GJ, Ejima Y. Perception, 2000, 29(10): 1231-1247.
We apply the 'patchwork engine' (PE; van Tonder and Ejima, 2000 Neural Networks forthcoming) to encode spaces between textons in an attempt to find a suitable feature representation of anti-textons [Williams and Julesz, 1991, in Neural Networks for Perception volume 1: Human and Machine Perception Ed. H Wechsler (San Diego, CA: Academic Press); 1992, Proceedings of the National Academy of Sciences of the USA 89 6531-6534]. With computed anti-textons it is possible to show that the tessellation and distribution of anti-textons can differ from those of textons depending on the ratio of texton size to anti-texton size. From this we hypothesise that variability of anti-textons can enhance texture segregation, and we test this hypothesis in two psychophysical experiments. Texture segregation asymmetry is the topic of the first test. We found that targets on backgrounds with regular anti-textons segregate more strongly than on backgrounds with highly variable anti-textons. This neatly complements other explanations for texture segregation asymmetry (e.g. Rubenstein and Sagi, 1990 Journal of the Optical Society of America A 7 1632-1643). Second, the relative significance of textons and anti-textons in human texture segregation is investigated for a limited set of texture patterns. Subjects consistently judged a combination of texton and anti-texton gradients as more conspicuous than texton-only gradients, and judged texton-only gradients as more conspicuous than anti-texton-only gradients. In the absence of strong texton gradients, the regularity versus irregularity of anti-textons agrees with perceived texture segregation. Using PE outputs as anti-texton features thus enabled the conception of various useful tests on texture segregation. The PE was originally intended as a general image segmentation method based on symmetry axes. With this paper we therefore hope to relate anti-textons to visual processing in a wider sense.

13.
Edge detection plays an important role in image processing. With the development of deep learning, the accuracy of edge detection has been greatly improved, and more is demanded of edge detection tasks. Most edge detection algorithms are binary edge detection methods, but there are usually multiple categories of edges in an image. In this paper, we present an accurate multi-category edge detection network, the richer category-aware semantic edge detection network (R-CASENet). To make full use of a convolutional neural network's powerful feature expression capability, we attempt to use more information from the feature maps for edge feature extraction and classification. Using the ResNet101 network as the backbone, we first merge the building blocks in different composite blocks and down-sample to obtain the feature maps. We then fuse the feature maps from the different composite blocks to obtain the final fused classifier. Experimental results show that R-CASENet achieves state-of-the-art performance on the large SBD dataset. Furthermore, to obtain precise one-pixel-wide edges, we also propose an edge refinement network (ERN). The proposed scheme is end-to-end, and the ERN reduces redundant points and improves computational efficiency, especially for further image processing.
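
For orientation, here is an illustrative PyTorch sketch (not the authors' R-CASENet) of CASENet-style multi-scale fusion on a ResNet backbone: per-stage 1x1 side classifiers are upsampled to the input resolution and fused with a final 1x1 convolution. The backbone depth, number of edge categories and fusion details are assumptions.

```python
# CASENet-style multi-scale fusion sketch on a ResNet-101 backbone.
import torch
import torch.nn as nn
import torch.nn.functional as F
import torchvision

class CategoryEdgeNet(nn.Module):
    def __init__(self, num_classes=20):
        super().__init__()
        r = torchvision.models.resnet101(weights=None)
        self.stem = nn.Sequential(r.conv1, r.bn1, r.relu, r.maxpool)
        self.stages = nn.ModuleList([r.layer1, r.layer2, r.layer3, r.layer4])
        channels = [256, 512, 1024, 2048]
        self.side = nn.ModuleList([nn.Conv2d(c, num_classes, 1) for c in channels])
        self.fuse = nn.Conv2d(num_classes * len(channels), num_classes, 1)

    def forward(self, x):
        size = x.shape[-2:]
        x = self.stem(x)
        sides = []
        for stage, side in zip(self.stages, self.side):
            x = stage(x)
            sides.append(F.interpolate(side(x), size=size, mode="bilinear",
                                       align_corners=False))
        return self.fuse(torch.cat(sides, dim=1))    # per-pixel category edge logits

# logits = CategoryEdgeNet()(torch.randn(1, 3, 224, 224))  # -> (1, 20, 224, 224)
```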

14.
Recent developments in modeling image discrimination by feature analytic and frequency selective methods are discussed. Some issues relating to the design of two-dimensional spatial frequency filters are developed within the context of two experiments on texture discrimination using artificial and naturally occurring textures. Results of these experiments indicate that, given an adequately formulated relationship between spatial frequency and orientation tuning parameters of the filter, one can predict a variety of texture discriminations using only amplitude-specific models. Finally, guidelines are established for ascertaining when phase transmission characteristics do become critical in two-dimensional image processing by the human observer.
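
The following toy sketch illustrates an "amplitude-specific" comparison between two textures in the spirit of this abstract: discriminability is scored only from the amplitude spectra (phase discarded), pooled in spatial-frequency and orientation bands. The band layout and distance measure are assumptions.

```python
# Amplitude-only texture comparison pooled over frequency/orientation bands (toy model).
import numpy as np

def banded_amplitude(image, n_freq=4, n_orient=6):
    F = np.fft.fftshift(np.fft.fft2(image - image.mean()))
    amp = np.abs(F)
    h, w = image.shape
    y, x = np.mgrid[-h // 2: h - h // 2, -w // 2: w - w // 2]
    radius = np.hypot(x, y) / (min(h, w) / 2)        # normalised spatial frequency
    angle = np.mod(np.arctan2(y, x), np.pi)          # orientation in [0, pi)
    bands = np.zeros((n_freq, n_orient))
    for i in range(n_freq):
        for j in range(n_orient):
            mask = ((radius >= i / n_freq) & (radius < (i + 1) / n_freq) &
                    (angle >= j * np.pi / n_orient) & (angle < (j + 1) * np.pi / n_orient))
            bands[i, j] = amp[mask].mean() if mask.any() else 0.0
    return bands

def amplitude_distance(tex_a, tex_b):
    return np.linalg.norm(banded_amplitude(tex_a) - banded_amplitude(tex_b))
```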

15.
The application of artificial intelligence to biomedical image processing is gaining importance in medical science. Biomedical images go through several steps before a disease can be diagnosed: the images must be acquired, preprocessed and stored in memory, which requires a large amount of memory and processing time. Edge detection is one of the major preprocessing steps: it filters out unwanted detail while preserving the edges that describe the boundaries in the image. In biomedical applications, boundary information about the imaged organ is essential for disease detection, so extracting the edges of the images is critical. Power is another key consideration for biomedical instruments, which should operate at low power and high speed. To segregate the images into different levels or stages, a convolutional neural network is used for classification. A hardware architecture for edge detection reduces the computational time of the preprocessing step, and the hardware can be made part of the acquisition device itself. In this paper, a low-power architecture for edge detection of biomedical images is presented. The edge detection output is passed to a system that diagnoses diseases through image classification with a convolutional neural network. The Sobel and Prewitt edge detection algorithms are implemented in 180 nm technology; the VLSI implementation and digital IC design of the architecture are presented. The edge detection algorithms are co-simulated using MATLAB and ModelSim. The architecture is first simulated using CMOS logic, and a new domino-logic implementation is presented for low power consumption.
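
As a software reference for the operators the hardware implements, here is a functional sketch of the Sobel and Prewitt gradient operators; it is not the paper's VLSI architecture.

```python
# Sobel and Prewitt gradient-magnitude edge detection (software reference only).
import numpy as np
from scipy.ndimage import convolve

SOBEL_X = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
PREWITT_X = np.array([[-1, 0, 1], [-1, 0, 1], [-1, 0, 1]], dtype=float)

def gradient_magnitude(image, kx):
    ky = kx.T                                       # vertical kernel is the transpose
    gx = convolve(image.astype(float), kx)
    gy = convolve(image.astype(float), ky)
    return np.hypot(gx, gy)

# edges_sobel = gradient_magnitude(img, SOBEL_X)
# edges_prewitt = gradient_magnitude(img, PREWITT_X)
```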

16.
The authors introduce a new measure of basic-level performance (strategy length and internal practicability; SLIP). SLIP implements 2 computational constraints on the organization of categories in a taxonomy: the minimum number of feature tests required to place the input in a category (strategy length) and the ease with which these tests are performed (internal practicability). The predictive power of SLIP is compared with that of 4 other basic-level measures: context model, category feature possession, category utility, and compression measure, drawing data from other empirical work, and 3 new experiments testing the validity of the computational constraints of SLIP using computer-synthesized 3-dimensional artificial objects.

17.
Texture development during multi-step cross rolling of a dual-phase Fe–Cr–Ni alloy has been investigated. X-ray diffraction was used to investigate changes in the crystallographic texture of both constituent phases (austenite and ferrite) through changes in the orientation distribution function. After deformation, rotated brass (rotated along φ1, i.e. the sample normal direction ND), along with a weak cube texture, was observed in austenite, while a strong rotated cube texture was obtained in ferrite. Texture was also simulated for various strains using a co-deformation model based on the visco-plastic self-consistent (VPSC) formulation. Simulations showed a strong rotated brass texture in austenite and a strong rotated cube, α-fibre (sample rolling direction RD //<1 1 0>) and γ-fibre (ND //<1 1 1>) in ferrite after the highest strain (εt = 1.6). The VPSC model could not effectively capture the change in crystallographic texture during cross rolling. In ferrite, simulations overestimated the γ-fibre component and underestimated the rotated cube component. The simulated texture of austenite, on the other hand, overestimated rotated brass and lacked the cube component. The results are rationalised based on the possible role of shear banding and the activation of non-octahedral slip systems during cross rolling, neither of which is incorporated in conventional VPSC models.

18.
Essock EA, Hansen BC, Haun AM. Perception, 2007, 36(5): 639-649.
Illusory bands at a luminance transition in space (i.e. an edge) are well known. Here we demonstrate illusory bands of enhanced orientations or spatial frequencies at transitions between higher-contrast and lower-contrast image content along the orientation and spatial-frequency dimensions, the dimensions of cortical spatial coding. We conclude that this illusion is a consequence of cortical-level suppression of units of similar orientations and spatial frequencies and serves to aid texture segmentation while providing efficient neural coding.

19.
20.
Results from previous visual search studies have suggested that abrupt onsets produce involuntary shifts of attention (i.e., attentional capture), but discontinuities in simple features such as color and brightness do not (Jonides & Yantis, 1988). In the present study we tested whether feature discontinuities (i.e., “singletons”) can produce attentional capture in a visual search task if defined “locally” or over a small spatial range. On each trial, a variable number of letters appeared, one of which differed from the others in color or intensity. The location of this singleton was uncorrelated with target location. Local discontinuities were created by embedding the letters in a dot texture. In Experiment 1, display size effects for singleton targets were not reduced with the addition of a background dot texture. Similar results were obtained in Experiment 2, regardless of variations in texture density. Experiment 3 confirmed that when targets are defined by a color or intensity singleton, they are detected preattentively, and that increasing texture density yields faster detection. We conclude that the spatial range over which feature discontinuities are defined may influence the guidance of spatial attention, but it has no influence on their ability to capture attention.
