• Title/Summary/Keyword: Dimensionality Reduction

A personalized exercise recommendation system using dimension reduction algorithms

  • Lee, Ha-Young;Jeong, Ok-Ran
    • Journal of the Korea Society of Computer and Information
    • /
    • v.26 no.6
    • /
    • pp.19-28
    • /
    • 2021
  • Nowadays, interest in health care is increasing due to Coronavirus (COVID-19), and many people have turned to home training as it has become more difficult to use fitness centers and other shared public facilities. In this paper, we propose a personalized exercise recommendation algorithm that uses personal propensity information to provide more accurate and meaningful exercise recommendations to home training users. We first classify the data according to obesity criteria with a k-nearest neighbor algorithm, using personal information that can represent individuals, such as eating habits and physical condition. Furthermore, we differentiate the exercise dataset by the level of exercise activity. Based on the neighborhood information of each dataset, we provide personalized exercise recommendations through a dimensionality reduction algorithm (SVD), one of the model-based collaborative filtering methods. This addresses the data sparsity and scalability problems of memory-based collaborative filtering techniques, and we verify the accuracy and performance of the proposed algorithms.
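The SVD-based collaborative filtering step the abstract describes can be sketched as follows. This is a minimal illustration with a hypothetical user-by-exercise rating matrix (the actual features and dataset in the paper are not reproduced here): unrated cells are mean-filled, a rank-k truncated SVD reconstructs the matrix, and the highest-scoring unrated item is recommended.

```python
import numpy as np

# Hypothetical user-by-exercise rating matrix (0 = unrated).
R = np.array([
    [5.0, 4.0, 0.0, 1.0],
    [4.0, 0.0, 0.0, 1.0],
    [1.0, 1.0, 0.0, 5.0],
    [0.0, 1.0, 5.0, 4.0],
])

# Fill unrated entries with each user's mean rating before factorizing.
means = R.sum(axis=1) / (R > 0).sum(axis=1)
filled = np.where(R > 0, R, means[:, None])

# Rank-k truncated SVD: keep only the k largest singular values.
k = 2
U, s, Vt = np.linalg.svd(filled, full_matrices=False)
R_hat = U[:, :k] @ np.diag(s[:k]) @ Vt[:k, :]

# Recommend, for one user, the unrated exercise with the highest predicted score.
user = 1
unrated = np.where(R[user] == 0)[0]
best = unrated[np.argmax(R_hat[user, unrated])]
print(best)
```

Because predictions come from the low-rank factors rather than from neighbor lookups over the full matrix, the sparsity and scalability issues of memory-based collaborative filtering are avoided.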

IoT botnet attack detection using deep autoencoder and artificial neural networks

  • Deris Stiawan;Susanto;Abdi Bimantara;Mohd Yazid Idris;Rahmat Budiarto
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.17 no.5
    • /
    • pp.1310-1338
    • /
    • 2023
  • As Internet of Things (IoT) applications and devices grow rapidly, cyber-attacks on IoT networks/systems are also on the rise, increasing the threat to security and privacy. Botnets are among the dominant threats because they can easily compromise devices attached to IoT networks/systems. Compromised devices behave like normal ones, making them difficult to recognize. Several intelligent approaches, including deep learning and machine learning techniques, have been introduced to improve the detection accuracy for this type of cyber-attack, and dimensionality reduction methods are applied during the preprocessing stage. This work proposes a deep autoencoder dimensionality reduction method combined with an Artificial Neural Network (ANN) classifier as a botnet detection system for IoT networks/systems. Experiments were carried out using 3-layer, 4-layer, and 5-layer preprocessing of data from the MedBIoT dataset. The results show that the 5-layer autoencoder performs best, with an accuracy of 99.72%, precision of 99.82%, sensitivity of 99.82%, specificity of 99.31%, and F1-score of 99.82%. The 5-layer autoencoder model also reduced the dataset size from 152 MB to 12.6 MB (a reduction of 91.2%). Experiments on the N_BaIoT dataset likewise achieved very high accuracy, up to 99.99%.
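The core idea of compressing traffic features with an autoencoder before classification can be sketched in plain numpy. This is a teaching-scale stand-in, not the paper's network: the data are synthetic (the paper uses MedBIoT), there is a single hidden layer rather than 3-5, and the compressed codes would be handed to a downstream ANN classifier.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for preprocessed IoT traffic features.
X = rng.normal(size=(200, 8))
X[:, 4:] = X[:, :4] @ rng.normal(size=(4, 4))  # redundant dims to compress away

# Single-hidden-layer autoencoder: encode 8 -> 3 -> decode 8,
# trained by full-batch gradient descent on squared reconstruction error.
d_in, d_hid = X.shape[1], 3
W1 = rng.normal(scale=0.1, size=(d_in, d_hid)); b1 = np.zeros(d_hid)
W2 = rng.normal(scale=0.1, size=(d_hid, d_in)); b2 = np.zeros(d_in)
lr, loss0 = 0.01, None
for step in range(500):
    H = np.tanh(X @ W1 + b1)          # encoder activations
    X_hat = H @ W2 + b2               # linear decoder
    err = X_hat - X
    loss = (err ** 2).mean()
    if loss0 is None:
        loss0 = loss                  # remember the initial loss
    # Backpropagation through the two layers.
    gX_hat = 2 * err / err.size
    gW2 = H.T @ gX_hat; gb2 = gX_hat.sum(0)
    gH = gX_hat @ W2.T
    gZ = gH * (1 - H ** 2)            # tanh derivative
    gW1 = X.T @ gZ; gb1 = gZ.sum(0)
    W1 -= lr * gW1; b1 -= lr * gb1
    W2 -= lr * gW2; b2 -= lr * gb2

# 3-dimensional codes that a downstream ANN classifier would consume.
Z = np.tanh(X @ W1 + b1)
print(Z.shape)
```

The compression of the dataset reported in the abstract (152 MB to 12.6 MB) is exactly this effect at scale: only the encoder outputs need to be stored and classified.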

An extension of multifactor dimensionality reduction method for detecting gene-gene interactions with the survival time

  • Oh, Jin Seok;Lee, Seung Yeoun
    • Journal of the Korean Data and Information Science Society
    • /
    • v.25 no.5
    • /
    • pp.1057-1067
    • /
    • 2014
  • Many genetic variants have been identified as associated with complex diseases such as hypertension, diabetes, and cancers through genome-wide association studies (GWAS). However, a serious missing-heritability problem remains, since the proportion of heritability explained by the variants found in GWAS is weak, less than 10-15%. Gene-gene interaction studies may help explain the missing heritability, because most complex-disease mechanisms involve more than a single SNP, including multiple SNPs or gene-gene interactions. This paper focuses on gene-gene interactions with the survival phenotype by extending the multifactor dimensionality reduction (MDR) method to the accelerated failure time (AFT) model. The standardized residual from the AFT model is used as a residual score for classifying multiple genotypes into high- and low-risk groups, and the MDR algorithm is implemented on this score. We call this method AFT-MDR and compare its power with those of Surv-MDR and Cox-MDR in simulation studies. Real data on Korean leukemia patients are also analyzed. We found that the power of AFT-MDR is greater than that of Surv-MDR and comparable with that of Cox-MDR, but is very sensitive to the censoring fraction.
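The residual-scoring idea behind AFT-MDR can be sketched on simulated data. This is a simplified illustration, not the paper's method: censoring is ignored (the paper handles it inside the AFT model), the AFT fit is an ordinary least-squares regression of log survival time on a covariate, and the SNPs, effect sizes, and sample size are all invented for the example. Each two-SNP genotype cell is then pooled into a high- or low-risk group by the sign of its mean standardized residual, the MDR step.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 300

# Hypothetical data: two SNPs coded 0/1/2 and an age covariate.
snp1 = rng.integers(0, 3, n)
snp2 = rng.integers(0, 3, n)
age = rng.normal(50, 10, n)

# Simulated log survival time: the (snp1==2, snp2==2) cell is high risk.
log_t = 4.0 - 0.02 * age - 1.5 * ((snp1 == 2) & (snp2 == 2)) + rng.normal(0, 0.5, n)

# Log-normal AFT fit by least squares on the covariate only
# (censoring is ignored in this sketch).
X = np.column_stack([np.ones(n), age])
beta, *_ = np.linalg.lstsq(X, log_t, rcond=None)
resid = log_t - X @ beta
score = (resid - resid.mean()) / resid.std()   # standardized residual score

# MDR step: pool the 3x3 genotype cells into high/low risk by mean score.
high_risk = set()
for g1 in range(3):
    for g2 in range(3):
        cell = (snp1 == g1) & (snp2 == g2)
        if cell.any() and score[cell].mean() < 0:  # shorter-than-expected survival
            high_risk.add((g1, g2))

print(sorted(high_risk))
```

Cells with negative mean score survived shorter than the covariate-only model predicts, so the interacting genotype combination stands out as high risk.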

Machine Learning Based Structural Health Monitoring System using Classification and NCA

  • Shin, Changkyo;Kwon, Hyunseok;Park, Yurim;Kim, Chun-Gon
    • Journal of Advanced Navigation Technology
    • /
    • v.23 no.1
    • /
    • pp.84-89
    • /
    • 2019
  • This is a pilot study of a machine learning based structural health monitoring system using flight data of composite aircraft. In this study, the most suitable machine learning algorithm for structural health monitoring was selected, and a dimensionality reduction method was applied for use on actual flight data. For these tasks, an impact test on a cantilever beam with added mass, simulating damage in an aircraft wing structure, was conducted, and a classification model for damage states (damage location and level) was trained. Through vibration tests of the cantilever beam with a fiber Bragg grating (FBG) sensor, data for the normal state and 12 damaged states were acquired, and the most suitable algorithm was selected by comparing algorithms such as decision trees, discriminant analysis, support vector machines (SVM), kNN, and ensembles. In addition, neighborhood component analysis (NCA) feature selection provided the dimensionality reduction needed to handle high-dimensional flight data. As a result, the quadratic SVM performed best, with 98.7% accuracy without NCA and 95.9% with NCA. Applying NCA also improved prediction speed, training time, and model memory.
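The NCA-plus-quadratic-SVM pipeline can be sketched with scikit-learn. Note the hedges: scikit-learn's `NeighborhoodComponentsAnalysis` learns a low-dimensional linear transform rather than the feature-selection variant the paper uses, and the data here are synthetic stand-ins for FBG vibration features over 13 states (normal plus 12 damage states); the class count is the only detail taken from the abstract.

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.neighbors import NeighborhoodComponentsAnalysis
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Synthetic stand-in for FBG vibration features over 13 damage states.
X, y = make_classification(n_samples=520, n_features=40, n_informative=8,
                           n_classes=13, n_clusters_per_class=1, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

# NCA reduces 40 features to 8; a quadratic-kernel SVM classifies damage state.
clf = Pipeline([
    ("scale", StandardScaler()),
    ("nca", NeighborhoodComponentsAnalysis(n_components=8, random_state=0)),
    ("svm", SVC(kernel="poly", degree=2)),
])
clf.fit(X_tr, y_tr)
acc = clf.score(X_te, y_te)
print(round(acc, 3))
```

The accuracy trade-off the abstract reports (98.7% without NCA vs. 95.9% with it) reflects the same pattern: a small accuracy cost buys a much smaller model and faster prediction.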

Impact of Instance Selection on kNN-Based Text Categorization

  • Barigou, Fatiha
    • Journal of Information Processing Systems
    • /
    • v.14 no.2
    • /
    • pp.418-434
    • /
    • 2018
  • With the increasing use of the Internet and electronic documents, automatic text categorization has become imperative. Several machine learning algorithms have been proposed for text categorization. The k-nearest neighbor algorithm (kNN) is known to be one of the best state-of-the-art classifiers for text categorization. However, kNN suffers from limitations such as high computational cost when classifying new instances. Instance selection techniques have emerged as highly competitive methods for improving kNN through data reduction. However, previous works have evaluated these approaches only on structured datasets, and their performance has not been examined in the text categorization domain, where the dimensionality and size of the dataset are very high. Motivated by these observations, this paper investigates and analyzes the impact of instance selection on kNN-based text categorization in terms of classification accuracy, classification efficiency, and data reduction.
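One classic instance selection technique of the kind the paper evaluates is Hart's Condensed Nearest Neighbor rule. The sketch below uses synthetic numeric data as a stand-in for vectorized text (the paper's actual corpora and selection methods are not reproduced): CNN keeps only the instances that the growing reference subset misclassifies, then a kNN trained on the reduced set is compared against one trained on the full set.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier

# Synthetic stand-in for a vectorized text corpus.
X, y = make_classification(n_samples=600, n_features=20, n_informative=10,
                           n_classes=3, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

def condensed_nn(X, y, seed=0):
    """Hart's Condensed Nearest Neighbor: keep only instances that the
    current subset misclassifies, shrinking kNN's reference set."""
    rng = np.random.default_rng(seed)
    order = rng.permutation(len(X))
    keep = [order[0]]
    for i in order[1:]:
        nn = KNeighborsClassifier(n_neighbors=1).fit(X[keep], y[keep])
        if nn.predict(X[i:i + 1])[0] != y[i]:
            keep.append(i)
    return np.array(keep)

keep = condensed_nn(X_tr, y_tr)
full = KNeighborsClassifier(n_neighbors=3).fit(X_tr, y_tr).score(X_te, y_te)
reduced = KNeighborsClassifier(n_neighbors=3).fit(X_tr[keep], y_tr[keep]).score(X_te, y_te)
print(len(keep), len(X_tr), round(full, 3), round(reduced, 3))
```

The trade-off the paper studies is visible directly: classification cost drops with the reference-set size, while accuracy may degrade depending on how aggressively instances are discarded.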

Action Recognition with deep network features and dimension reduction

  • Li, Lijun;Dai, Shuling
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.13 no.2
    • /
    • pp.832-854
    • /
    • 2019
  • Action recognition has been studied in the computer vision field for years. We present an effective approach to recognizing actions using a dimension reduction method, applied as a crucial step to reduce the dimensionality of feature descriptors after feature extraction. We modify Local Fisher Discriminant Analysis using sparse matrices and randomized kd-trees, yielding the modified Local Fisher Discriminant Analysis (mLFDA) method, which greatly reduces the required memory and accelerates the standard algorithm. For feature encoding, we propose a method called mix encoding, which combines Fisher vector encoding and locality-constrained linear coding to produce the final video representations. To add more meaningful features to the action recognition process, a convolutional neural network is utilized and combined with mix encoding to produce deep network features. Experimental results show that our algorithm is competitive on the KTH, HMDB51, and UCF101 datasets when all these methods are combined.

A selective review of nonlinear sufficient dimension reduction

  • Sehun Jang;Jun Song
    • Communications for Statistical Applications and Methods
    • /
    • v.31 no.2
    • /
    • pp.247-262
    • /
    • 2024
  • In this paper, we explore nonlinear sufficient dimension reduction (SDR) methods, with a primary focus on establishing a foundational framework that integrates various nonlinear SDR methods. We illustrate the generalized sliced inverse regression (GSIR) and the generalized sliced average variance estimation (GSAVE), which fit within this framework. Further, we delve into nonlinear extensions of inverse moments through the kernel trick, specifically examining kernel sliced inverse regression (KSIR) and kernel canonical correlation analysis (KCCA), and explore their relationships within the established framework. We also briefly discuss nonlinear SDR for functional data and present practical aspects such as algorithmic implementations. The paper concludes with remarks on the dimensionality problem of the target function class.

Classification of Korean Porcelain by Neutron Activation Analysis

  • Gang, Hyeong-Tae;Lee, Cheol
    • 보존과학연구
    • /
    • s.6
    • /
    • pp.111-120
    • /
    • 1985
  • Data on the concentrations of Na, K, Sc, Cr, Fe, Co, Cu, Ga, Rb, Cs, Ba, La, Ce, Sm, Eu, Tb, Lu, Hf, Ta and Th obtained by neutron activation analysis have been used to characterise Korean porcelain sherds by multivariate analysis. The mathematical approach employed is Principal Component Analysis (PCA). PCA was found to be helpful for dimensionality reduction and for obtaining information regarding (a) the number of independent causal variables required to account for the variability in the overall data set, (b) the extent to which a given variable contributes to a component, and (c) the number of causal variables required to explain the total variability of each measured variable.
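The kind of PCA grouping described here can be sketched on invented concentration data. This is a hypothetical example, not the paper's measurements: two simulated "production sites" differ systematically in three of six elements, the data are standardized (PCA on the correlation matrix, as is usual for concentrations on different scales), and the first component both reports its explained variance and separates the groups.

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical element concentrations (six elements, e.g. Na, K, Sc, Fe, ...)
# for sherds from two sites; site 2 is enriched in the first three elements.
site1 = rng.normal(0.0, 1.0, size=(15, 6))
site2 = rng.normal(0.0, 1.0, size=(15, 6))
site2[:, :3] += 4.0
X = np.vstack([site1, site2])

# PCA via eigendecomposition of the correlation matrix of standardized data.
Z = (X - X.mean(0)) / X.std(0)
eigval, eigvec = np.linalg.eigh(np.corrcoef(Z, rowvar=False))
order = np.argsort(eigval)[::-1]
eigval, eigvec = eigval[order], eigvec[:, order]

explained = eigval / eigval.sum()   # fraction of variance per component
pc1 = Z @ eigvec[:, 0]              # scores on the first principal component

# The first PC should split the two sites: compare group means of the scores.
gap = abs(pc1[:15].mean() - pc1[15:].mean())
print(round(explained[0], 2), round(gap, 2))
```

Points (a)-(c) in the abstract correspond to, respectively, how many eigenvalues are large, the entries of each eigenvector (loadings), and how many components are needed to reconstruct each variable's variance.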

Rank-Sparsity Based Signal Processing Techniques for Big Data Analysis

  • Lee, Hyeok;Lee, Hyeong-Il;Jo, Jae-Hak;Kim, Min-Cheol;So, Byeong-Hyeon;Lee, Jeong-U
    • Information and Communications Magazine
    • /
    • v.31 no.11
    • /
    • pp.35-45
    • /
    • 2014
  • Principal component analysis (PCA) is known as the most widely used dimensionality reduction technique. However, its performance degrades significantly when outliers are present in the data. The Rank-Sparsity (Robust PCA) technique decomposes a given matrix into the sum of a low-rank matrix and a sparse matrix, and is known as an algorithm that can effectively replace PCA in outlier-heavy settings. This article briefly introduces the RPCA technique and surveys its application areas and research on its algorithms.
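The low-rank plus sparse decomposition can be sketched with a basic augmented-Lagrangian scheme that alternates singular value thresholding (for the low-rank part) and soft thresholding (for the sparse outlier part). This is a teaching sketch on a synthetic matrix, not a tuned solver; the step size `mu` and the weight `lam` follow common textbook choices, and the iteration count is fixed for simplicity.

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy data: a rank-2 matrix corrupted by sparse, large-magnitude outliers.
n = 40
L_true = rng.normal(size=(n, 2)) @ rng.normal(size=(2, n))
S_true = np.zeros((n, n))
idx = rng.random((n, n)) < 0.05
S_true[idx] = rng.normal(scale=10, size=idx.sum())
M = L_true + S_true

def shrink(X, tau):
    """Soft thresholding: move entries toward zero by tau."""
    return np.sign(X) * np.maximum(np.abs(X) - tau, 0.0)

def rpca(M, n_iter=200):
    lam = 1.0 / np.sqrt(max(M.shape))
    mu = M.size / (4.0 * np.abs(M).sum())
    S = np.zeros_like(M)
    Y = np.zeros_like(M)   # Lagrange multiplier for M = L + S
    for _ in range(n_iter):
        # Low-rank update: singular value thresholding.
        U, sig, Vt = np.linalg.svd(M - S + Y / mu, full_matrices=False)
        L = U @ np.diag(shrink(sig, 1.0 / mu)) @ Vt
        # Sparse update: entrywise soft thresholding.
        S = shrink(M - L + Y / mu, lam / mu)
        Y = Y + mu * (M - L - S)
    return L, S

L, S = rpca(M)
rel_err = np.linalg.norm(L - L_true) / np.linalg.norm(L_true)
print(round(rel_err, 3))
```

Unlike plain PCA, whose leading components are dragged toward the outliers, the sparse term absorbs the corruptions and the low-rank factor recovers the underlying structure.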

Comparative Study of Dimension Reduction Methods for Highly Imbalanced Overlapping Churn Data

  • Lee, Sujee;Koo, Bonhyo;Jung, Kyu-Hwan
    • Industrial Engineering and Management Systems
    • /
    • v.13 no.4
    • /
    • pp.454-462
    • /
    • 2014
  • Retention of potentially churning customers is one of the most important issues in customer relationship management, so companies try to predict churning customers using their large-scale, high-dimensional data. This study focuses on dealing with large data sets by reducing their dimensionality. Using six different dimension reduction methods, namely principal component analysis (PCA), factor analysis (FA), locally linear embedding (LLE), local tangent space alignment (LTSA), locality preserving projections (LPP), and a deep auto-encoder, our experiments apply each method to the training data, build a classification model on the mapped data, and then measure performance using the hit rate to compare the methods. In the results, PCA shows good performance despite its simplicity, and the deep auto-encoder gives the best overall performance. These results can be explained by the characteristics of the churn prediction data, which are highly correlated and overlap across the classes. We also propose a simple out-of-sample extension method for the nonlinear dimension reduction methods LLE and LTSA, utilizing the characteristics of the data.
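The experimental protocol, reduce the training data, fit a classifier on the mapped features, and score by a hit-rate-style metric, can be sketched for two of the six methods. The data here are synthetic stand-ins for churn data (imbalanced, with many correlated features); the classifier, component count, and metric (minority-class recall as a proxy for hit rate) are illustrative choices, not the paper's exact setup.

```python
from sklearn.datasets import make_classification
from sklearn.decomposition import PCA, FactorAnalysis
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Synthetic stand-in for churn data: highly imbalanced, correlated features.
X, y = make_classification(n_samples=2000, n_features=30, n_informative=5,
                           n_redundant=20, weights=[0.95], random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

scores = {}
for name, reducer in [("PCA", PCA(n_components=5)),
                      ("FA", FactorAnalysis(n_components=5))]:
    model = make_pipeline(StandardScaler(), reducer,
                          LogisticRegression(class_weight="balanced"))
    model.fit(X_tr, y_tr)
    # Proxy for "hit rate": recall on the minority (churn) class.
    pred = model.predict(X_te)
    scores[name] = ((pred == 1) & (y_te == 1)).sum() / (y_te == 1).sum()

print({k: round(v, 3) for k, v in scores.items()})
```

The remaining four methods (LLE, LTSA, LPP, deep auto-encoder) would slot into the same loop; the nonlinear ones additionally need the out-of-sample extension the paper proposes, since they do not define a mapping for unseen test points by themselves.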