• Title/Abstract/Keyword: feature importance

Search results: 422 items (processing time: 0.029 s)

특성중요도를 활용한 분류나무의 입력특성 선택효과 : 신용카드 고객이탈 사례 (Feature Selection Effect of Classification Tree Using Feature Importance : Case of Credit Card Customer Churn Prediction)

  • 윤한성
    • 디지털산업정보학회논문지
    • /
    • Vol. 20, No. 2
    • /
    • pp.1-10
    • /
    • 2024
  • For the purpose of predicting credit card customer churn accurately through data analysis, a model can be constructed with various machine learning algorithms, including decision trees. Feature importance has been utilized to select better input features that improve the performance of data analysis models in several application areas. In this paper, a method of utilizing feature importance calculated by the MDI (mean decrease in impurity) method, and its effects, are investigated for the credit card customer churn prediction problem with classification trees. Compared with several random feature selections from the case data, the set of input features selected by higher feature importance values shows higher predictive power. This can be an efficient method for screening and choosing the input features needed to improve prediction performance. The method organized in this paper can serve as an alternative for selecting input features by feature importance when composing and using classification trees, including for credit card customer churn prediction.
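The workflow this abstract describes, ranking inputs by MDI importance from a fitted tree and keeping only the top-ranked ones, can be sketched with scikit-learn as follows. This is a minimal illustration, not the paper's implementation: synthetic data stands in for the credit card churn records, and the cutoff of ten features is arbitrary.

```python
# Sketch: pick input features for a classification tree by MDI importance.
# Synthetic data stands in for the credit card churn records; the top-k cutoff is illustrative.
import pandas as pd
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import accuracy_score

X_arr, y = make_classification(n_samples=3000, n_features=25, n_informative=6, random_state=0)
X = pd.DataFrame(X_arr, columns=[f"x{i}" for i in range(25)])
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

# Fit once on all features to obtain MDI (mean decrease in impurity) importances.
full_tree = DecisionTreeClassifier(max_depth=6, random_state=0).fit(X_tr, y_tr)
importance = pd.Series(full_tree.feature_importances_, index=X.columns)

# Keep only the highest-importance features and refit a smaller tree.
top_features = importance.sort_values(ascending=False).head(10).index
small_tree = DecisionTreeClassifier(max_depth=6, random_state=0).fit(X_tr[top_features], y_tr)

print("all features :", round(accuracy_score(y_te, full_tree.predict(X_te)), 3))
print("top features :", round(accuracy_score(y_te, small_tree.predict(X_te[top_features])), 3))
```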

Performance Evaluation of a Feature-Importance-based Feature Selection Method for Time Series Prediction

  • Hyun, Ahn
    • Journal of information and communication convergence engineering
    • /
    • Vol. 21, No. 1
    • /
    • pp.82-89
    • /
    • 2023
  • Various machine-learning models may yield high predictive power on massive time series. However, these models are prone to instability in terms of computational cost because of the high dimensionality of the feature space and non-optimized hyperparameter settings. Considering the risk that model training with a high-dimensional feature set can be time-consuming, we evaluate a feature-importance-based feature selection method to derive a trade-off between predictive power and computational cost for time series prediction. We used two machine learning techniques for performance evaluation to generate prediction models from a retail sales dataset. First, we ranked the features using impurity-based and Local Interpretable Model-agnostic Explanations (LIME)-based feature importance measures in the prediction models. Then, the recursive feature elimination method was applied to eliminate unimportant features sequentially. Consequently, we obtained a subset of features that reduces model training time while preserving acceptable model performance.
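As a rough sketch of importance-ranked recursive feature elimination of the kind evaluated above, the snippet below drives scikit-learn's RFE with the impurity-based importances of a gradient boosting regressor. The retail sales data, the LIME-based ranking, and the paper's chosen feature counts are not reproduced; the synthetic data and settings here are assumptions.

```python
# Sketch: recursive feature elimination driven by impurity-based importance.
# Synthetic data stands in for the retail sales time series features.
from sklearn.datasets import make_regression
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.feature_selection import RFE
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score

X, y = make_regression(n_samples=500, n_features=40, n_informative=8, noise=5.0, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

# RFE refits the estimator and drops the least-important features, a few per round.
selector = RFE(GradientBoostingRegressor(random_state=0), n_features_to_select=10, step=2).fit(X_tr, y_tr)

model = GradientBoostingRegressor(random_state=0).fit(X_tr[:, selector.support_], y_tr)
print("kept features:", int(selector.support_.sum()))
print("R2 on held-out data:", round(r2_score(y_te, model.predict(X_te[:, selector.support_])), 3))
```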

PCA 및 변수 중요도를 활용한 냉동컨테이너 고장 탐지 방법론 비교 연구 (A Comparative Study on the Methodology of Failure Detection of Reefer Containers Using PCA and Feature Importance)

  • 이승현;박성호;이승재;이희원;유성열;이강배
    • 한국융합학회논문지
    • /
    • Vol. 13, No. 3
    • /
    • pp.23-31
    • /
    • 2022
  • This study analyzed real reefer container operation data from Starcool units provided by shipping company H. Based on interviews with field experts at company H, only Critical and Fatal alarms among the four alarm types were defined as failures, and it was confirmed that using all variables is cost-inefficient given the characteristics of reefer containers. Accordingly, this study proposes a reefer container failure detection method based on feature importance and PCA. To improve model performance, features are selected according to feature importance computed with tree-based models such as XGBoost and LightGBM; the unselected variables are reduced in dimensionality with PCA, and supervised learning is then performed for each model. For the boosting-based XGBoost and LightGBM models, the proposed approach improved recall by 0.36 and 0.39, respectively, compared with supervised learning using all 62 variables.
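A hedged sketch of the split described in this abstract: rank variables with a tree-based model, keep the top-ranked ones unchanged, compress the remaining variables with PCA, and train on the concatenation. A scikit-learn random forest stands in for XGBoost/LightGBM, and the data and all counts are illustrative.

```python
# Sketch: keep high-importance variables as-is, PCA-compress the rest, then train.
# Synthetic imbalanced data stands in for the reefer container sensor records.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.decomposition import PCA
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import recall_score

X, y = make_classification(n_samples=2000, n_features=62, n_informative=10,
                           weights=[0.9, 0.1], random_state=0)   # imbalanced "failure" labels
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

ranker = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
order = np.argsort(ranker.feature_importances_)[::-1]
keep, rest = order[:15], order[15:]                # illustrative split

pca = PCA(n_components=5).fit(X_tr[:, rest])       # compress the unselected variables
X_tr2 = np.hstack([X_tr[:, keep], pca.transform(X_tr[:, rest])])
X_te2 = np.hstack([X_te[:, keep], pca.transform(X_te[:, rest])])

clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr2, y_tr)
print("recall on failures:", round(recall_score(y_te, clf.predict(X_te2)), 3))
```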

전략적 중요도를 고려한 연관규칙 탐사 (Association Rule Mining Considering Strategic Importance)

  • 최덕원;신진규
    • 한국정보처리학회:학술대회논문집
    • /
    • 한국정보처리학회 2007년도 춘계학술발표대회
    • /
    • pp.443-446
    • /
    • 2007
  • A new association rule mining algorithm, which reflects the strategic importance of associative relationships between items, was developed and is presented in this paper. The algorithm exploits the basic framework of the Apriori procedure and the TSAA (transitive support association Apriori) procedure developed by Hyun and Choi for evaluating non-frequent itemsets. It considers the strategic importance (weight) of feature variables in the association rule mining process; sample feature variables of strategic importance include profitability, marketing value, customer satisfaction, and frequency. A database of 730 transaction records from a large-scale discount store was used to compare and verify the performance of the presented algorithm against the existing Apriori and TSAA algorithms. The results clearly indicate that the new algorithm produces substantially different association itemsets depending on the weights assigned to the strategic feature variables.
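The weighting idea in this abstract, scoring itemsets by the strategic weight of their items rather than by frequency alone, can be illustrated in plain Python. The scoring rule below (support scaled by the mean item weight) and the toy transactions are illustrative assumptions, not the paper's exact TSAA-based formulation.

```python
# Sketch: frequent-itemset scoring that blends support with strategic item weights.
# The scoring rule (support x mean item weight) and the toy data are illustrative.
from itertools import combinations

transactions = [{"a", "b", "c"}, {"a", "c"}, {"a", "d"}, {"b", "c"}, {"a", "b", "c", "d"}]
weights = {"a": 0.9, "b": 0.4, "c": 0.7, "d": 1.0}   # e.g. profitability, marketing value
min_weighted_support = 0.3

def support(itemset):
    return sum(itemset <= t for t in transactions) / len(transactions)

def weighted_support(itemset):
    mean_weight = sum(weights[i] for i in itemset) / len(itemset)
    return support(itemset) * mean_weight

items = sorted({i for t in transactions for i in t})
for k in (1, 2, 3):
    for combo in combinations(items, k):
        score = weighted_support(frozenset(combo))
        if score >= min_weighted_support:
            print(set(combo), round(score, 2))
```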


Stacked Autoencoder 기반 악성코드 Feature 정제 기술 연구 (Stacked Autoencoder Based Malware Feature Refinement Technology Research)

  • 김홍비;이태진
    • 정보보호학회논문지
    • /
    • Vol. 30, No. 4
    • /
    • pp.593-603
    • /
    • 2020
  • With the growth of networks, the spread of malware-generation tools has led to an exponential increase in new malware, and existing detection methods have limits in responding to it. Machine-learning-based malware detection has therefore been advancing, and this paper studies a method that extracts features from PE headers and then uses an autoencoder to derive features, and feature importance, that better represent malware. We extracted 549 features consisting of DLL/API and related information available in PE files, which are widely used in malware analysis, and showed that compressing the data with an autoencoder extracts the features effectively, providing high accuracy while halving processing time. The results were also useful for malware group classification, and future work will introduce classifiers such as SVM for more accurate malware detection.
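Below is a minimal stacked-autoencoder sketch in the spirit of the compression step described above, written with PyTorch. Random tensors stand in for the 549-dimensional PE-header feature vectors, and the layer sizes and training settings are assumptions rather than the paper's configuration.

```python
# Sketch: stacked autoencoder that compresses 549-dimensional PE-header feature vectors.
# Random tensors stand in for the real malware features; sizes are assumptions.
import torch
from torch import nn

class StackedAutoencoder(nn.Module):
    def __init__(self, in_dim=549, hidden=(256, 64)):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(in_dim, hidden[0]), nn.ReLU(),
            nn.Linear(hidden[0], hidden[1]), nn.ReLU(),
        )
        self.decoder = nn.Sequential(
            nn.Linear(hidden[1], hidden[0]), nn.ReLU(),
            nn.Linear(hidden[0], in_dim),
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))

X = torch.rand(1024, 549)                          # placeholder feature matrix
model = StackedAutoencoder()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

for epoch in range(20):                            # reconstruction training
    optimizer.zero_grad()
    loss = loss_fn(model(X), X)
    loss.backward()
    optimizer.step()

with torch.no_grad():
    codes = model.encoder(X)                       # compressed representation for a downstream classifier
print(codes.shape)                                 # torch.Size([1024, 64])
```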

생태계 모방 알고리즘 기반 특징 선택 방법의 성능 개선 방안 (Performance Improvement of Feature Selection Methods based on Bio-Inspired Algorithms)

  • 윤철민;양지훈
    • 정보처리학회논문지B
    • /
    • Vol. 15B, No. 4
    • /
    • pp.331-340
    • /
    • 2008
  • Feature selection is used in machine learning to improve classification performance. Although many methods have been developed and used, constructing an optimal feature subset from the full data remains a difficult problem. Bio-inspired algorithms are evolutionary algorithms built on the behavioral principles of living organisms and are very useful for finding optimized solutions. Bio-inspired approaches have also been proposed for feature selection, and this paper presents a way to improve them. Well-known bio-inspired algorithms, the genetic algorithm (GA) and particle swarm optimization (PSO), are used to produce the feature subset with the best classification performance, and the algorithms are then improved by assigning a prior importance to each individual feature. The mRMR method, which scores the merit of individual features, is used for this purpose, and the evolutionary operations of GA and PSO are modified with these prior importance values. Experiments on real data verified the performance of the proposed methods: feature selection with GA and PSO showed excellent classification accuracy, and the method improved with the prior importance outperformed the original GA and PSO in both convergence speed and classification accuracy.
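A compact sketch of the idea summarized above, biasing a genetic search over feature masks with a per-feature prior score, is given below. The prior is a simplified mRMR-style relevance-minus-redundancy score built from scikit-learn's mutual information, and the GA operators and settings are illustrative, not the paper's.

```python
# Sketch: GA feature selection whose initial population is biased by an
# mRMR-style prior (relevance to the target minus mean redundancy with other features).
import numpy as np
from sklearn.datasets import make_classification
from sklearn.feature_selection import mutual_info_classif
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X, y = make_classification(n_samples=400, n_features=30, n_informative=6, random_state=0)

# Simplified mRMR-style prior: MI(feature, target) minus mean absolute correlation to other features.
relevance = mutual_info_classif(X, y, random_state=0)
redundancy = np.abs(np.corrcoef(X, rowvar=False)).mean(axis=0)
prior = relevance - redundancy
prob = (prior - prior.min()) / (prior.max() - prior.min() + 1e-9)  # per-feature inclusion probability

def fitness(mask):
    if not mask.any():
        return 0.0
    return cross_val_score(KNeighborsClassifier(), X[:, mask], y, cv=3).mean()

pop = rng.random((20, X.shape[1])) < prob          # prior-biased initial population of feature masks
for _ in range(15):
    scores = np.array([fitness(m) for m in pop])
    parents = pop[np.argsort(scores)[-10:]]        # keep the better half
    idx_a, idx_b = rng.integers(0, 10, 10), rng.integers(0, 10, 10)
    cross = rng.random((10, X.shape[1])) < 0.5     # uniform crossover
    children = np.where(cross, parents[idx_a], parents[idx_b])
    flip = rng.random(children.shape) < 0.05       # mutation
    children = np.where(flip, ~children, children)
    pop = np.vstack([parents, children])

best = pop[np.argmax([fitness(m) for m in pop])]
print("selected features:", np.flatnonzero(best))
```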

머신러닝과 딥러닝을 이용한 영산강의 Chlorophyll-a 예측 성능 비교 및 변화 요인 분석 (Comparison of Chlorophyll-a Prediction and Analysis of Influential Factors in Yeongsan River Using Machine Learning and Deep Learning)

  • 심선희;김유흔;이혜원;김민;최정현
    • 한국물환경학회지
    • /
    • Vol. 38, No. 6
    • /
    • pp.292-305
    • /
    • 2022
  • The Yeongsan River, one of the four largest rivers in South Korea, has faced difficulties in water quality management with respect to algal blooms. The algal bloom problem has grown, especially after the construction of two weirs on the mainstream of the Yeongsan River. Therefore, prediction and factor analysis of Chlorophyll-a (Chl-a) concentration are needed for effective water quality management. In this study, Chl-a prediction models were developed and their performance evaluated using machine and deep learning methods: Deep Neural Network (DNN), Random Forest (RF), and eXtreme Gradient Boosting (XGBoost). Moreover, correlation analysis and feature importance results were compared to identify the major factors affecting Chl-a concentration. All models showed high prediction performance with an R2 value of 0.9 or higher; in particular, XGBoost showed the highest prediction accuracy of 0.95 on the test data. The feature importance results suggested that ammonia (NH3-N) and phosphate (PO4-P) were major factors common to the three models for managing Chl-a concentration. The results confirmed that the three machine learning methods, DNN, RF, and XGBoost, are powerful for predicting water quality parameters, and that comparing feature importance with correlation analysis gives a more accurate assessment of the important factors.
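To mirror the comparison between correlation analysis and model-based feature importance described above, the sketch below fits a random forest regressor and prints both rankings side by side. The synthetic data, the variable names, and the omission of the DNN and XGBoost models are all simplifications.

```python
# Sketch: compare Pearson correlation with random-forest feature importance
# for a Chl-a style regression target; data and variable names are synthetic.
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
cols = ["NH3-N", "PO4-P", "water_temp", "DO", "flow"]
X = pd.DataFrame(rng.random((300, len(cols))), columns=cols)
y = 3 * X["PO4-P"] + 2 * X["NH3-N"] + 0.3 * rng.standard_normal(300)   # toy Chl-a proxy

rf = RandomForestRegressor(n_estimators=300, random_state=0).fit(X, y)

report = pd.DataFrame({
    "correlation_with_chl_a": X.corrwith(y).round(3),
    "rf_feature_importance": np.round(rf.feature_importances_, 3),
})
print(report.sort_values("rf_feature_importance", ascending=False))
```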

의료진단 및 중요 검사 항목 결정 지원 시스템을 위한 랜덤 포레스트 알고리즘 적용 (Application of Random Forest Algorithm for the Decision Support System of Medical Diagnosis with the Selection of Significant Clinical Test)

  • 윤태균;이관수
    • 전기학회논문지
    • /
    • Vol. 57, No. 6
    • /
    • pp.1058-1062
    • /
    • 2008
  • In a clinical decision support system (CDSS), unlike rule-based expert methods, an appropriate data-driven machine learning method can readily provide information on individual features (clinical tests) for disease classification. However, currently developed methods focus on improving classification accuracy for diagnosis. By analyzing feature importance in classification, one may infer novel clinical test sets that strongly differentiate specific diseases or disease states. Against this background, we introduce a novel CDSS that integrates a classifier and a feature selection module. The random forest algorithm is applied as both the classifier and the feature importance measure. The system selects the significant clinical tests that discriminate the diseases by examining the classification error during backward elimination of the features. The superior performance of the random forest algorithm in clinical classification was assessed against artificial neural network and decision tree algorithms using the breast cancer, diabetes, and heart disease data in the UCI Machine Learning Repository. Tests with the same data sets show that the proposed system can successfully select the significant clinical test set for each disease.
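The backward-elimination loop described above, repeatedly dropping the least important clinical test while the random forest's accuracy remains acceptable, can be sketched as follows. The breast cancer dataset bundled with scikit-learn stands in for the clinical data, and the stopping tolerance is an assumption.

```python
# Sketch: backward elimination of clinical tests guided by random-forest importance,
# stopping once cross-validated accuracy drops by more than a small tolerance.
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

X, y = load_breast_cancer(return_X_y=True)
features = list(range(X.shape[1]))
tolerance = 0.01                                   # allowed accuracy loss (assumption)

def cv_accuracy(cols):
    clf = RandomForestClassifier(n_estimators=100, random_state=0)
    return cross_val_score(clf, X[:, cols], y, cv=5).mean()

baseline = cv_accuracy(features)
while len(features) > 1:
    rf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X[:, features], y)
    weakest = features[int(np.argmin(rf.feature_importances_))]
    trial = [f for f in features if f != weakest]
    if cv_accuracy(trial) < baseline - tolerance:
        break                                      # removing another test would hurt accuracy
    features = trial

print("remaining clinical tests (column indices):", features)
```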

앙상블 기계학습 모델을 이용한 비정질 소재의 자기냉각 효과 및 전이온도 예측 (Prediction of Transition Temperature and Magnetocaloric Effects in Bulk Metallic Glasses with Ensemble Models)

  • 남충희
    • 한국재료학회지
    • /
    • Vol. 34, No. 7
    • /
    • pp.363-369
    • /
    • 2024
  • In this study, the magnetocaloric effect and transition temperature of bulk metallic glasses, a class of amorphous materials, were predicted through machine learning based on composition features. From the Python module 'Matminer', 174 composition features were obtained, and prediction performance was compared while reducing the number of features to prevent overfitting. After optimization with RandomForest, an ensemble model, changes in prediction performance were analyzed according to the number of composition features. The R2 score was used as the performance metric for the regression predictions; the best performance was obtained using only 90 features for predicting the transition temperature and 20 features for predicting the magnetocaloric effect. The most important feature for predicting the magnetocaloric effect was the 'Fe' composition ratio. The feature importance method provided by 'scikit-learn' was applied to sort the composition features, and it was found to be appropriate by comparing the prediction performance on the Fe-containing dataset with that on the full dataset.
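The feature-count study described above can be illustrated with the short loop below: sort features by random-forest importance and track the R2 score as fewer of them are kept. Synthetic data replaces the Matminer composition descriptors, and the tested feature counts are arbitrary.

```python
# Sketch: regression R2 as a function of how many top-importance features are kept.
# Synthetic features stand in for the 174 Matminer composition descriptors.
import numpy as np
from sklearn.datasets import make_regression
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score

X, y = make_regression(n_samples=600, n_features=174, n_informative=25, noise=10.0, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)

ranking = np.argsort(
    RandomForestRegressor(n_estimators=200, random_state=0).fit(X_tr, y_tr).feature_importances_
)[::-1]

for k in (174, 90, 40, 20):                        # arbitrary feature counts
    cols = ranking[:k]
    model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X_tr[:, cols], y_tr)
    print(f"{k:3d} features -> R2 = {r2_score(y_te, model.predict(X_te[:, cols])):.3f}")
```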

Feature 저장소 기술 동향 (A Survey on Feature Store)

  • 허성진;김지용
    • 전자통신동향분석
    • /
    • Vol. 36, No. 2
    • /
    • pp.65-74
    • /
    • 2021
  • In this paper, we discuss the necessity and importance of introducing feature stores to establish a collaborative environment between data engineering and data science work. We examine feature store technology trends by analyzing the status of several major feature stores. Introducing a feature store can reduce the cost of carrying out artificial intelligence (AI) projects and improve the performance and reliability of AI models as well as the convenience of model operation. A remaining task is to establish the technical requirements for such a collaborative environment and to develop a solution that provides it.