• Title/Summary/Keyword: Feature Dimensional Reduction

Feature Extraction via Sparse Difference Embedding (SDE)

  • Wan, Minghua; Lai, Zhihui
    • KSII Transactions on Internet and Information Systems (TIIS), v.11 no.7, pp.3594-3607, 2017
  • Traditional feature extraction methods such as principal component analysis (PCA) cannot capture the local structure of the samples, while locally linear embedding (LLE) cannot capture their global structure. Moreover, a common drawback of the existing PCA and LLE algorithms is that they cannot deal well with sparsity in the samples. Therefore, by integrating the globality of PCA and the locality of LLE with a sparse constraint, we developed an improved, unsupervised difference algorithm called Sparse Difference Embedding (SDE) for dimensionality reduction of high-dimensional data in small-sample-size problems. Significantly differing from the existing PCA and LLE algorithms, SDE seeks a set of projections that not only preserve the intraclass locality and maximize the interclass globality, but also simultaneously use Lasso regression to obtain a sparse transformation matrix. This characteristic makes SDE more intuitive and more powerful than PCA and LLE. Finally, the proposed algorithm was evaluated through experiments on the Yale and AR face image databases and the USPS handwritten digit database. The experimental results show that SDE outperforms PCA, LLE, and UDP owing to its sparse discriminating characteristics, which also indicates that SDE is an effective method for face recognition.
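
The SDE formulation itself is not reproduced here, but the core idea of pairing a global criterion with an L1 sparsity constraint can be sketched as follows: PCA scores serve as global targets, and one Lasso regression per component yields a sparse column of the projection matrix. All data and parameter values below are illustrative placeholders, not the authors' settings.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)
X = rng.random((100, 50))            # placeholder data: 100 samples, 50 features
Xc = X - X.mean(axis=0)              # center the data

pca = PCA(n_components=5).fit(Xc)
scores = pca.transform(Xc)           # global (PCA-style) targets

# One Lasso regression per component gives one sparse column of the projection.
W = np.column_stack([
    Lasso(alpha=0.01).fit(Xc, scores[:, k]).coef_
    for k in range(scores.shape[1])
])
X_low = Xc @ W                       # sparse low-dimensional embedding
```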

Evolutionary Computing Driven Extreme Learning Machine for Objected Oriented Software Aging Prediction

  • Ahamad, Shahanawaj
    • International Journal of Computer Science & Network Security, v.22 no.2, pp.232-240, 2022
  • To fulfill user expectations, the rapid evolution of software techniques and approaches has necessitated reliable and flawless software operation. Predicting aging in software under operation is becoming a basic and unavoidable requirement for ensuring system availability, reliability, and operation. In this paper, an improved evolutionary-computing-driven extreme learning scheme (ECD-ELM) is suggested for object-oriented software aging prediction. To perform aging prediction, we employed a variety of metrics, including program size, McCabe complexity metrics, Halstead metrics, runtime failure event metrics, and some unique aging-related metrics (ARM). In our suggested paradigm, OOP software metrics are extracted after pre-processing, which includes outlier detection and normalization; this improves the proposed system's ability to deal with instances with unbalanced biases and metrics. Further, several dimensionality reduction and feature selection algorithms, such as principal component analysis (PCA), linear discriminant analysis (LDA), and T-test analysis, have been applied. We suggest a single-hidden-layer multi-feed-forward neural network (SL-MFNN)-based ELM, in which an adaptive genetic algorithm (AGA) is applied to estimate the weight and bias parameters for ELM learning. Unlike traditional neural network models, the GA-based ELM with LDA feature selection outperformed other aging prediction approaches in terms of prediction accuracy, precision, recall, and F-measure. The results affirm that the combination of outlier detection, normalization of imbalanced metrics, LDA-based feature selection, and GA-based ELM can be a reliable solution for object-oriented software aging prediction.
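
The paper's AGA-tuned variant and LDA feature selection are not reproduced here; the sketch below only illustrates the basic ELM mechanism the scheme builds on: a fixed random hidden layer followed by a closed-form least-squares solve for the output weights. The data, sizes, and labels are placeholders.

```python
import numpy as np

def elm_fit(X, Y, n_hidden=64, seed=0):
    """Fit a basic ELM: random hidden layer, least-squares output weights."""
    rng = np.random.default_rng(seed)
    W = rng.normal(size=(X.shape[1], n_hidden))   # fixed random input weights
    b = rng.normal(size=n_hidden)                 # fixed random biases
    H = np.tanh(X @ W + b)                        # hidden-layer activations
    beta = np.linalg.pinv(H) @ Y                  # closed-form output weights
    return W, b, beta

def elm_predict(X, W, b, beta):
    return np.argmax(np.tanh(X @ W + b) @ beta, axis=1)

# Toy usage with placeholder metric vectors and one-hot "aging" labels.
X = np.random.rand(120, 10)
Y = np.eye(2)[np.random.randint(0, 2, size=120)]
W, b, beta = elm_fit(X, Y)
pred = elm_predict(X, W, b, beta)
```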

Comparison of Prediction Accuracy Between Classification and Convolution Algorithm in Fault Diagnosis of Rotatory Machines at Varying Speed (회전수가 변하는 기기의 고장진단에 있어서 특성 기반 분류와 합성곱 기반 알고리즘의 예측 정확도 비교)

  • Moon, Ki-Yeong; Kim, Hyung-Jin; Hwang, Se-Yun; Lee, Jang Hyun
    • Journal of Navigation and Port Research, v.46 no.3, pp.280-288, 2022
  • This study examined the diagnosis of abnormalities and faults in equipment whose rotational speed changes even during regular operation. The purpose of this study was to suggest a procedure for properly applying machine learning to time series data that exhibit non-stationary characteristics as the rotational speed changes. Anomaly and fault diagnosis was performed using machine learning methods: k-Nearest Neighbor (k-NN), Support Vector Machine (SVM), and Random Forest. To compare diagnostic accuracy, an autoencoder was used for anomaly detection and a convolution-based Conv1D network was additionally used for fault diagnosis. Feature vectors comprising statistical and frequency attributes were extracted, and normalization and dimensionality reduction were applied to them. Changes in the diagnostic accuracy of machine learning according to feature selection, normalization, and dimensionality reduction are explained, and the hyperparameter optimization process and layer structure are described for each algorithm. Finally, the results show that machine learning can accurately diagnose the failure of a variable-rotation machine under appropriate feature treatment, even though convolution algorithms have been widely applied to the considered problem.
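
As a rough illustration of the feature-based branch described above, the sketch below extracts a few assumed statistical and spectral attributes from each vibration segment and feeds them through normalization, PCA, and a Random Forest; the paper's actual feature set and hyperparameters are not known, and everything here is a placeholder.

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from sklearn.ensemble import RandomForestClassifier

def extract_features(signal):
    """Simple statistical and spectral attributes of one vibration segment."""
    spectrum = np.abs(np.fft.rfft(signal))
    return [signal.mean(), signal.std(), signal.max(), signal.min(),
            np.sqrt(np.mean(signal ** 2)),              # RMS
            spectrum.mean(), float(spectrum.argmax())]  # spectral level, peak bin

segments = np.random.randn(200, 1024)               # placeholder vibration segments
labels = np.random.randint(0, 3, size=200)          # placeholder condition classes

X = np.array([extract_features(s) for s in segments])
clf = make_pipeline(StandardScaler(), PCA(n_components=5),
                    RandomForestClassifier(n_estimators=100))
clf.fit(X, labels)
```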

A Feature Selection for the Recognition of Handwritten Characters based on Two-Dimensional Wavelet Packet (2차원 웨이브렛 패킷에 기반한 필기체 문자인식의 특징선택방법)

  • Kim, Min-Soo; Back, Jang-Sun; Lee, Guee-Sang; Kim, Soo-Hyung
    • Journal of KIISE: Software and Applications, v.29 no.8, pp.521-528, 2002
  • We propose a new approach to feature selection for the classification of handwritten characters using two-dimensional (2D) wavelet packet bases. To extract key features from image data, Principal Component Analysis (PCA) has most frequently been used for dimension reduction. However, because PCA relies on an eigenvalue system, it is not only sensitive to outliers and perturbations but also tends to select only global features. Since important features of image data are often characterized by local information such as edges and spikes, PCA does not provide good solutions to such problems. Moreover, solving an eigenvalue system is usually computationally expensive. In this paper, the original data are transformed with 2D wavelet packet bases and the best discriminant basis is searched, from which relevant features are selected. In contrast to PCA solutions, fast selection of detailed features as well as global features is possible by virtue of the good properties of wavelets. Experimental results comparing the recognition rates of PCA and our approach demonstrate the performance of the proposed method.
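
The paper's best-discriminant-basis search is not reproduced here, but a minimal sketch of 2D wavelet packet feature extraction, assuming the PyWavelets library and using subband energies as simple features, looks like this:

```python
import numpy as np
import pywt  # PyWavelets

image = np.random.rand(32, 32)                        # placeholder character image
wp = pywt.WaveletPacket2D(data=image, wavelet='db1', maxlevel=2)
nodes = wp.get_level(2)                               # all level-2 subbands
features = np.array([np.sum(node.data ** 2) for node in nodes])  # subband energies
```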

API Feature Based Ensemble Model for Malware Family Classification (악성코드 패밀리 분류를 위한 API 특징 기반 앙상블 모델 학습)

  • Lee, Hyunjong; Euh, Seongyul; Hwang, Doosung
    • Journal of the Korea Institute of Information Security & Cryptology, v.29 no.3, pp.531-539, 2019
  • This paper proposes training features for malware family analysis and analyzes the multi-class classification performance of ensemble models. We construct training data by extracting API and DLL information from malware executables and use the decision-tree-based Random Forest and XGBoost algorithms. API, API-DLL, and DLL-CM features for malware detection and family classification are proposed by analyzing the API and DLL information frequently used by malware and converting high-dimensional features into low-dimensional ones. The proposed feature selection method provides the advantages of data dimension reduction and fast learning. In the performance comparison, the malware detection rate is 93.0% for Random Forest, the accuracy on the malware family dataset is 92.0% for XGBoost, and the false positive rate on the malware family dataset including benign samples is about 3.5% for both Random Forest and XGBoost.
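
A hedged sketch of the general idea, API call counts vectorized into low-dimensional features and fed to a Random Forest, is shown below; the API names, labels, and parameters are hypothetical, and the paper's API-DLL and DLL-CM encodings are not reproduced.

```python
from sklearn.feature_extraction import DictVectorizer
from sklearn.ensemble import RandomForestClassifier

# Hypothetical per-sample API call counts extracted from executables.
samples = [{"CreateFileW": 3, "RegOpenKeyExA": 1, "VirtualAlloc": 0},
           {"VirtualAlloc": 5, "WriteProcessMemory": 2, "CreateFileW": 1}]
families = ["benign", "trojan"]                      # hypothetical labels

vec = DictVectorizer(sparse=False)
X = vec.fit_transform(samples)                       # count-based feature vectors
clf = RandomForestClassifier(n_estimators=200).fit(X, families)
```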

Modified Speeded Up Robust Features(SURF) for Performance Enhancement of Mobile Visual Search System (모바일 시각 검색 시스템의 성능 향상을 위하여 개선된 Speeded Up Robust Features(SURF) 알고리듬)

  • Seo, Jung-Jin; Yoona, Kyoung-Ro
    • Journal of Broadcast Engineering, v.17 no.2, pp.388-399, 2012
  • In this paper, we propose enhanced feature extraction and matching methods for a mobile environment based on modified SURF. We propose three methods to reduce the computational complexity in a mobile environment. The first is to reduce the dimensionality of the SURF descriptor; we compare the performance of the existing 64-dimensional SURF with variants of several other dimensions. The second is to improve performance using the sign of the trace of the Hessian matrix: feature points are considered matched only if the traces of their Hessian matrices have the same sign, and are otherwise rejected. The last is to find the best distance ratio used to determine matching points; we find this ratio through experiments, and it gives relatively high accuracy. Finally, the existing system based on the standard SURF method is compared with our proposed system incorporating these three methods. We show that the proposed system reduces response time while preserving reasonably good matching accuracy.
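
The distance-ratio test mentioned above can be sketched with OpenCV as follows. Because SURF requires a build with the non-free xfeatures2d module, ORB is substituted purely so the sketch runs on a default installation; the file names and the 0.7 threshold are assumptions, not the paper's values.

```python
import cv2

img1 = cv2.imread("query.jpg", cv2.IMREAD_GRAYSCALE)      # hypothetical image files
img2 = cv2.imread("reference.jpg", cv2.IMREAD_GRAYSCALE)

detector = cv2.ORB_create()                               # stand-in for SURF
kp1, des1 = detector.detectAndCompute(img1, None)
kp2, des2 = detector.detectAndCompute(img2, None)

matcher = cv2.BFMatcher(cv2.NORM_HAMMING)
good = [m for m, n in matcher.knnMatch(des1, des2, k=2)
        if m.distance < 0.7 * n.distance]                 # distance-ratio threshold
```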

The Design and Practice of Disaster Response RL Environment Using Dimension Reduction Method for Training Performance Enhancement (학습 성능 향상을 위한 차원 축소 기법 기반 재난 시뮬레이션 강화학습 환경 구성 및 활용)

  • Yeo, Sangho; Lee, Seungjun; Oh, Sangyoon
    • KIPS Transactions on Software and Data Engineering, v.10 no.7, pp.263-270, 2021
  • Reinforcement learning (RL) is a method for finding an optimal policy through training, and it is one of the popular methods for effectively solving lifesaving and disaster response problems. However, conventional reinforcement learning approaches for disaster response utilize either simple environments, such as grids and graphs, or self-developed environments whose practical effectiveness is hard to verify. In this paper, we propose the design of a disaster response RL environment that utilizes the detailed property information of a disaster simulation so that the reinforcement learning method can be applied in the real world. For this RL environment, we design and build the reinforcement learning communication as well as the interface between the RL agent and the disaster simulation. We also apply a dimension reduction method that converts non-image feature vectors into an image format, which can be used effectively with convolution layers to exploit the high-dimensional, detailed properties of the disaster simulation. To verify the effectiveness of our proposed method, we conducted empirical evaluations, and the results show that our proposed method outperformed conventional methods on the building fire damage scenario.
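
A toy illustration of packing a non-image feature vector into a 2D grid for a convolution layer is given below; the grid size and zero-padding scheme are assumptions and do not reflect the paper's actual dimension reduction method.

```python
import numpy as np

def to_image(features, side=16):
    """Zero-pad (or truncate) a 1-D feature vector and reshape it to side x side."""
    grid = np.zeros(side * side, dtype=np.float32)
    flat = np.asarray(features, dtype=np.float32).ravel()[:side * side]
    grid[:flat.size] = flat
    return grid.reshape(side, side)

obs = np.random.rand(200)          # placeholder simulation property vector
img = to_image(obs)                # 16 x 16 single-channel "image" observation
```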

A Node2Vec-Based Gene Expression Image Representation Method for Effectively Predicting Cancer Prognosis (암 예후를 효과적으로 예측하기 위한 Node2Vec 기반의 유전자 발현량 이미지 표현기법)

  • Choi, Jonghwan; Park, Sanghyun
    • KIPS Transactions on Software and Data Engineering, v.8 no.10, pp.397-402, 2019
  • Accurately predicting cancer prognosis to provide appropriate treatment strategies for patients is one of the critical challenges in bioinformatics. Many studies have suggested machine learning models to predict patients' outcomes based on their gene expression data. Gene expression data are high-dimensional numerical data covering about 17,000 genes, so traditional studies have used feature selection or dimensionality reduction approaches to improve the performance of prognostic prediction models. These approaches, however, make it difficult for the predictive models to capture biological interactions between the selected genes, because the feature selection and model training stages are performed independently. In this paper, we propose a novel two-dimensional image formatting approach for gene expression data to achieve feature selection and prognostic prediction effectively. Node2Vec is exploited to integrate a biological interaction network with the gene expression data, and a convolutional neural network learns the resulting two-dimensional gene expression image data and predicts cancer prognosis. We evaluated the proposed model through double cross-validation and confirmed prognostic prediction accuracy superior to that of traditional machine learning models based on raw gene expression data. Because the proposed approach improves prediction models without the loss of information caused by feature selection steps, we expect it to contribute to the development of personalized medicine.
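
A loose sketch of the overall idea, embedding network nodes with Node2Vec into two dimensions and filling a grid with expression values, assuming the node2vec and networkx packages, is shown below; the graph, expression values, and grid size are stand-ins, and this is not the authors' pipeline.

```python
import numpy as np
import networkx as nx
from node2vec import Node2Vec   # the "node2vec" pip package (assumed available)

graph = nx.karate_club_graph()                       # stand-in for a gene network
expr = {n: np.random.rand() for n in graph.nodes}    # stand-in expression values

n2v = Node2Vec(graph, dimensions=2, walk_length=20, num_walks=50)
model = n2v.fit(window=5, min_count=1)
coords = np.array([model.wv[str(n)] for n in graph.nodes])

# Snap the 2-D embeddings onto an 8x8 grid and fill cells with expression values.
grid = np.zeros((8, 8))
norm = (coords - coords.min(axis=0)) / (np.ptp(coords, axis=0) + 1e-9)
for (x, y), n in zip((norm * 7).astype(int), graph.nodes):
    grid[y, x] = expr[n]          # collisions simply overwrite in this toy version
```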

Dimension Reduction of Solid Models by Mid-Surface Generation

  • Sheen, Dong-Pyoung; Son, Tae-Geun; Ryu, Cheol-Ho; Lee, Sang-Hun; Lee, Kun-Woo
    • International Journal of CAD/CAM, v.7 no.1, pp.71-80, 2007
  • Recently, feature-based solid modeling systems have been widely used in product design. However, for engineering analysis of a product model, an abstracted CAD model composed of mid-surfaces is desirable, provided that the abstraction does not seriously affect the analysis results. To meet this requirement, a variety of solid abstraction methods, such as the medial axis transformation (MAT), have been proposed to derive an abstracted CAE model from a solid design model. The MAT algorithm can be applied to any complicated solid model; however, additional work to trim and extend some parts of the result is required to obtain a practically useful CAE model, because the inscribed sphere used in the MAT method generates insufficient surfaces with branches. On the other hand, the mid-surface abstraction approach offers a practical method for generating a two-dimensional abstracted model, even though it has difficulties in creating a mid-surface from some complicated parts. In this paper, we propose a dimension reduction approach for solid models based on the mid-surface abstraction approach. This approach first simplifies the solid model by abbreviating or removing trivial features such as fillets, mountings, or protrusions. The geometry of each face of the simplified model is then replaced with mid-patches, and unnecessary topological entities are deleted to generate a clean abstracted model. Finally, additional work, such as extending and stitching the mid-patches, completes the generation of a mid-surface model from the patches.

Line-Segment Feature Analysis Algorithm for Handwritten-Digits Data Reduction (필기체 숫자 데이터 차원 감소를 위한 선분 특징 분석 알고리즘)

  • Kim, Chang-Min; Lee, Woo-Beom
    • KIPS Transactions on Software and Data Engineering, v.10 no.4, pp.125-132, 2021
  • As the layers of an artificial neural network deepen and the dimension of the input data increases, the learning and recognition stages of the neural network (NN) demand a large number of arithmetic operations performed at high speed. This study therefore proposes a method to reduce the dimensionality of the NN input data. The proposed Line-segment Feature Analysis (LFA) algorithm applies a gradient-based edge detection algorithm using median filters to analyze the line-segment features of the objects in an image. From the extracted edge image, the eigenvalues corresponding to eight kinds of line segments are calculated using 3×3- or 5×5-sized detection filters whose coefficients are taken from [0, 1, 2, 4, 8, 16, 32, 64, 128]. Two one-dimensional 256-element vectors are produced by accumulating identical response values from the eigenvalues calculated with each detection filter, and the two are added together; two such LFA256 vectors are then merged to produce 512-element LFA512 data. In a comparative performance evaluation of the proposed LFA algorithm for reducing the data dimension in handwritten-digit recognition, using the PCA technique and the AlexNet model, LFA256 and LFA512 achieved recognition rates of 98.7% and 99%, respectively.
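
The exact LFA detection filters and eigenvalue computation are not reproduced here; the sketch below only conveys the flavor of the 256-bin encoding: median filtering, gradient-based edge detection, and a 3×3 power-of-two neighborhood code accumulated into a 256-dimensional histogram. Sizes and thresholds are assumptions.

```python
import numpy as np
from scipy.ndimage import median_filter, sobel

def lfa_like_features(image, threshold=0.3):
    """Edge detection followed by a 3x3 power-of-two neighborhood code histogram."""
    smoothed = median_filter(image, size=3)
    grad = np.hypot(sobel(smoothed, axis=0), sobel(smoothed, axis=1))
    edges = (grad > threshold * grad.max()).astype(np.uint8)

    weights = np.array([[1, 2, 4], [8, 0, 16], [32, 64, 128]])
    hist = np.zeros(256, dtype=np.int64)
    h, w = edges.shape
    for i in range(1, h - 1):
        for j in range(1, w - 1):
            if edges[i, j]:
                code = int((edges[i - 1:i + 2, j - 1:j + 2] * weights).sum())
                hist[code] += 1
    return hist                     # 256-dimensional descriptor

digit = np.random.rand(28, 28)      # placeholder handwritten-digit image
features = lfa_like_features(digit)
```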