• Title/Summary/Keyword: Ensemble clustering


Defect Severity-based Ensemble Model using FCM (FCM을 적용한 결함심각도 기반 앙상블 모델)

  • Lee, Na-Young; Kwon, Ki-Tae
    • KIISE Transactions on Computing Practices / v.22 no.12 / pp.681-686 / 2016
  • Software defect prediction is an important factor in efficient project management and success. The severity of a defect usually determines the degree to which a project is affected. However, existing studies focus only on the presence or absence of defects, not their severity. In this study, we proposed an ensemble model using FCM based on defect severity. The defect severities of PC4 in the NASA data set were reclassified. To select the input columns that affect defect severity, we extracted the important defect factors of the data set using Random Forest (RF). We evaluated the performance of the model by changing the parameters in 10-fold cross-validation. The evaluation results were as follows. First, defect severities were reclassified from 58, 40, and 80 to 30, 20, and 128. Second, BRANCH_COUNT was an important input column for severity in terms of accuracy and node impurity. Third, a smaller number of trees required more variables for good performance.
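The model above is built on Fuzzy C-Means clustering. As a minimal sketch of the standard FCM update rules in plain Python (the one-dimensional toy data, c=2 clusters, and fuzzifier m=2 are illustrative choices, not values from the paper):

```python
# Minimal Fuzzy C-Means (FCM) sketch; data and parameters are illustrative.
def fcm(points, c=2, m=2.0, iters=50):
    """1-D FCM: returns (centers, memberships)."""
    centers = list(points[:c])           # naive initialization
    u = [[1.0 / c] * c for _ in points]
    for _ in range(iters):
        # Membership update: u_ik = 1 / sum_j (d_ik / d_jk)^(2/(m-1))
        for k, x in enumerate(points):
            d = [max(abs(x - ci), 1e-12) for ci in centers]
            for i in range(c):
                u[k][i] = 1.0 / sum((d[i] / dj) ** (2.0 / (m - 1.0)) for dj in d)
        # Center update: c_i = sum_k u_ik^m x_k / sum_k u_ik^m
        centers = [
            sum(u[k][i] ** m * x for k, x in enumerate(points))
            / sum(u[k][i] ** m for k in range(len(points)))
            for i in range(c)
        ]
    return centers, u

severities = [1.0, 1.1, 0.9, 8.0, 8.2, 7.9]  # two obvious severity groups
centers, memberships = fcm(severities)
```

Unlike hard K-Means, every point keeps a graded membership in every cluster, which is what makes a severity-based reclassification of borderline defects possible.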

Development of ensemble machine learning model considering the characteristics of input variables and the interpretation of model performance using explainable artificial intelligence (수질자료의 특성을 고려한 앙상블 머신러닝 모형 구축 및 설명가능한 인공지능을 이용한 모형결과 해석에 대한 연구)

  • Park, Jungsu
    • Journal of Korean Society of Water and Wastewater / v.36 no.4 / pp.239-248 / 2022
  • The prediction of algal bloom is an important field of study in algal bloom management, and chlorophyll-a concentration (Chl-a) is commonly used to represent the status of algal bloom. In recent years, advanced machine learning algorithms have been increasingly used for the prediction of algal bloom. In this study, XGBoost (XGB), an ensemble machine learning algorithm, was used to develop a model to predict Chl-a in a reservoir. Daily observations of water quality and climate data were used for training and testing the model. In the first step of the study, the input variables were clustered into two groups (low- and high-value groups) based on the observed values of water temperature (TEMP), total organic carbon concentration (TOC), total nitrogen concentration (TN), and total phosphorus concentration (TP). For each of the four water quality items, two XGB models were developed using only the data in each clustered group (Model 1). The results were compared to the predictions of an XGB model developed using the entire data before clustering (Model 2). Model performance was evaluated using three indices, including the root mean squared error-observation standard deviation ratio (RSR). Model performance improved with Model 1 for TEMP, TN, and TP, as the RSR of each model was 0.503, 0.477, and 0.493, respectively, while the RSR of Model 2 was 0.521. On the other hand, Model 2 showed better performance than Model 1 for TOC, where the RSR was 0.532. Explainable artificial intelligence (XAI) is an ongoing field of research in machine learning. Shapley value analysis, a novel XAI algorithm, was also used for the quantitative interpretation of the XGB model developed in this study.
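The RSR index used above is simply RMSE normalized by the standard deviation of the observations. The sketch below computes it and illustrates the Model 1 vs. Model 2 idea with a trivial group-mean predictor standing in for XGBoost; the toy TEMP/Chl-a values and the split threshold are invented for illustration:

```python
import math

def rsr(obs, pred):
    """Root mean squared error / standard deviation of the observations."""
    n = len(obs)
    rmse = math.sqrt(sum((o - p) ** 2 for o, p in zip(obs, pred)) / n)
    mean = sum(obs) / n
    sd = math.sqrt(sum((o - mean) ** 2 for o in obs) / n)
    return rmse / sd

# Toy data: Chl-a behaves differently at low vs. high water temperature.
temp  = [5, 6, 7, 8, 24, 25, 26, 27]
chl_a = [2, 3, 2, 3, 20, 22, 21, 23]

# "Model 2": one global predictor fitted on all data (here, the overall mean).
global_mean = sum(chl_a) / len(chl_a)
pred_global = [global_mean] * len(chl_a)

# "Model 1": cluster by TEMP first (low/high), then one predictor per group.
lo = [c for t, c in zip(temp, chl_a) if t < 15]
hi = [c for t, c in zip(temp, chl_a) if t >= 15]
lo_mean, hi_mean = sum(lo) / len(lo), sum(hi) / len(hi)
pred_grouped = [lo_mean if t < 15 else hi_mean for t in temp]

rsr_global = rsr(chl_a, pred_global)
rsr_grouped = rsr(chl_a, pred_grouped)
```

Lower RSR is better; when the input variable really does separate two regimes, the per-group models win, which mirrors the paper's result for TEMP, TN, and TP.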

SPOT/VEGETATION-based Algorithm for the Discrimination of Cloud and Snow (SPOT/VEGETATION 영상을 이용한 눈과 구름의 분류 알고리즘)

  • Han Kyung-Soo; Kim Young-Seup
    • Korean Journal of Remote Sensing / v.20 no.4 / pp.235-244 / 2004
  • This study assesses a proposed algorithm to discriminate cloudy pixels from snowy pixels using visible, near-infrared, and shortwave infrared channel data from the VEGETATION-1 sensor aboard the SPOT-4 satellite. Traditional threshold algorithms for cloud and snow masks did not show very good accuracy. Instead of these independent masking procedures, a K-Means clustering scheme is employed for cloud/snow discrimination in this study. The pixels used in clustering were selected through an integration of two threshold algorithms, which together assemble the snow and cloud pixels. This gives an opportunity to simplify the clustering procedure and to improve accuracy compared with full-image clustering. This paper also compares the results with threshold methods for snow cover and clouds, and assesses the discrimination capability of the VEGETATION channels. The quality of the cloud and snow masks improved further when the present algorithm was implemented. Compared with traditional methods, the discrimination errors were considerably reduced, by 19.4% for the cloud mask and 9.7% for the snow mask.
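The two-step scheme above (threshold tests pre-select candidate pixels, then K-Means separates them) can be sketched as follows. All reflectance values and thresholds are invented; the real algorithm works on full VEGETATION channels, but the physical hint is the same: snow absorbs in the shortwave infrared (SWIR) while cloud stays bright:

```python
# Sketch: pre-select bright candidate pixels with a threshold test, then
# run K-Means (k=2) only on the candidates' SWIR channel.
def kmeans_1d(values, k=2, iters=30):
    centers = [min(values), max(values)][:k]   # spread initialization
    labels = [0] * len(values)
    for _ in range(iters):
        labels = [min(range(k), key=lambda i: abs(v - centers[i])) for v in values]
        for i in range(k):
            members = [v for v, l in zip(values, labels) if l == i]
            if members:
                centers[i] = sum(members) / len(members)
    return centers, labels

# Each pixel: (visible, shortwave-infrared) reflectance, values invented.
pixels = [(0.8, 0.7), (0.82, 0.68), (0.85, 0.1), (0.9, 0.12), (0.2, 0.05)]
# Threshold step: keep only "bright" pixels (cloud or snow candidates).
candidates = [p for p in pixels if p[0] > 0.5]
# Clustering step: snow is dark in SWIR, cloud is bright, so cluster on SWIR.
centers, labels = kmeans_1d([swir for _, swir in candidates])
```

Clustering only the pre-selected pixels is what keeps the procedure simple and, per the paper, more accurate than clustering the full image.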

Damaged cable detection with statistical analysis, clustering, and deep learning models

  • Son, Hyesook; Yoon, Chanyoung; Kim, Yejin; Jang, Yun; Tran, Linh Viet; Kim, Seung-Eock; Kim, Dong Joo; Park, Jongwoong
    • Smart Structures and Systems / v.29 no.1 / pp.17-28 / 2022
  • The cable components of cable-stayed bridges are gradually impacted by weather conditions, vehicle loads, and material corrosion. The stay cable is a critical load-carrying part that closely affects the operational stability of a cable-stayed bridge. Damaged cables might lead to bridge collapse due to their reduced tension capacity. Thus, it is necessary to develop structural health monitoring (SHM) techniques that accurately identify damaged cables. In this work, a combinational identification method based on three efficient techniques, namely statistical analysis, clustering, and neural network models, is proposed to detect damaged cables in a cable-stayed bridge. The measured dataset from the bridge was initially preprocessed to remove outlier channels. Then, the theory and application of each technique for damage detection were introduced. In general, the statistical approach extracts parameters representing damage within the time series, the clustering approach identifies outliers in the data signals as damaged members, and the deep learning approach exploits the nonlinear data dependencies in SHM for model training. The performance of these approaches in classifying damaged cables was assessed, and the combinational identification method was obtained using a voting ensemble. Finally, the combined method was compared with an existing outlier detection algorithm, support vector machines (SVM). The results demonstrate that the proposed method is robust and provides higher accuracy for damaged cable detection in the cable-stayed bridge.
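The voting-ensemble step above reduces to a majority rule over the three detectors' binary verdicts. A minimal sketch, with detector outputs made up for illustration:

```python
# Hard-voting ensemble: each detector flags cables as damaged (1) or
# intact (0), and the majority label wins per cable.
def majority_vote(*detector_outputs):
    return [1 if sum(votes) * 2 > len(votes) else 0
            for votes in zip(*detector_outputs)]

statistical = [1, 0, 0, 1]   # e.g., a time-series parameter exceeds a limit
clustering  = [1, 0, 1, 1]   # e.g., signal flagged as an outlier cluster
deep_model  = [0, 0, 1, 1]   # e.g., network classifies the signal as damaged
combined = majority_vote(statistical, clustering, deep_model)
```

With an odd number of detectors there are no ties, and a single detector's false alarm (the deep model on cable 1, the clustering on cable 3 here) is outvoted by the other two.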

Mapping of Education Quality and E-Learning Readiness to Enhance Economic Growth in Indonesia

  • PRAMANA, Setia; ASTUTI, Erni Tri
    • Asian Journal of Business Environment / v.12 no.1 / pp.11-16 / 2022
  • Purpose: This study aims to map the provinces of Indonesia based on education and ICT indicators using several unsupervised learning algorithms. Research design, data, and methodology: Education and ICT indicators such as the student-teacher ratio, illiteracy rate, net enrolment ratio, internet access, and computer ownership are used. Several approaches are implemented to gain a deeper understanding of provincial strengths and weaknesses based on these indicators: Ensemble K-Means and Fuzzy C-Means clustering. Results: At least three clusters are observed in Indonesia based on education quality, participation, facilities, and ICT access. The cluster with high education quality and ICT access consists of DKI Jakarta, Yogyakarta, Riau Islands, East Kalimantan, and Bali. These provinces show rapid economic growth. Meanwhile, another cluster, consisting of six provinces (NTT, West Kalimantan, Central Sulawesi, West Sulawesi, North Maluku, and Papua), has lower education quality and ICT development, which impacts their economic growth. Conclusions: The provinces of Indonesia are clustered into three groups based on education attainment and ICT indicators. Some provinces can directly implement e-learning; however, more provinces need to improve education quality and facilities as well as ICT infrastructure before implementing e-learning.
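The abstract does not spell out its ensemble scheme, but one common form of "Ensemble K-Means" runs K-Means several times with different starts and builds a co-association matrix: how often each pair of items lands in the same cluster. A sketch under that assumption, with a single invented education/ICT score per province standing in for the full indicator set:

```python
# Consensus clustering sketch: repeated K-Means runs + co-association
# matrix. Scores and the choice of k=3 are illustrative.
import random

def kmeans_1d(values, k, centers, iters=20):
    labels = [0] * len(values)
    for _ in range(iters):
        labels = [min(range(k), key=lambda i: abs(v - centers[i])) for v in values]
        for i in range(k):
            members = [v for v, l in zip(values, labels) if l == i]
            if members:
                centers[i] = sum(members) / len(members)
    return labels

scores = [0.9, 0.85, 0.5, 0.48, 0.1, 0.12]   # one score per province (toy)
runs = []
rng = random.Random(0)
for _ in range(10):
    init = rng.sample(scores, 3)             # random initial centers
    runs.append(kmeans_1d(scores, 3, list(init)))

n = len(scores)
coassoc = [[sum(r[a] == r[b] for r in runs) / len(runs) for b in range(n)]
           for a in range(n)]
```

Pairs with high co-association (provinces 0 and 1 here) are grouped in the consensus partition, which is more stable than any single K-Means run.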

Improving the Performance of Deep-Learning-Based Ground-Penetrating Radar Cavity Detection Model using Data Augmentation and Ensemble Techniques (데이터 증강 및 앙상블 기법을 이용한 딥러닝 기반 GPR 공동 탐지 모델 성능 향상 연구)

  • Yonguk Choi; Sangjin Seo; Hangilro Jang; Daeung Yoon
    • Geophysics and Geophysical Exploration / v.26 no.4 / pp.211-228 / 2023
  • Ground-penetrating radar (GPR) surveys, a nondestructive geophysical method, are commonly used to monitor embankments. The results of GPR surveys can be complex depending on the situation, and data processing and interpretation depend on expert experience, potentially resulting in false detections. Additionally, this process is time-intensive. Consequently, various studies have been undertaken to detect cavities in GPR survey data using deep learning methods. Deep-learning-based approaches require abundant data for training, but GPR field survey data are often scarce due to cost and other factors constraining field studies. Therefore, in this study, a deep-learning-based model was developed for cavity detection in embankment GPR surveys using data augmentation strategies. A dataset was constructed by collecting survey data over several years from the same embankment. A You Only Look Once (YOLO) model, commonly used in computer vision for object detection, was employed for this purpose. By comparing and analyzing various strategies, the optimal data augmentation approach was determined. After initial model development, a stepwise process was employed, including box clustering, transfer learning, self-ensemble, and model ensemble techniques, to enhance the final model performance. The model performance was evaluated, and the results demonstrate its effectiveness in detecting cavities in embankment GPR survey data.
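Data augmentation for GPR B-scans typically means geometric and noise transforms of the 2-D radargram. A minimal sketch of two such transforms; the tiny 2x3 "radargram" and the noise scale are illustrative, and the paper's actual augmentation strategies may differ:

```python
# Two simple augmentations for a 2-D GPR B-scan array.
import random

def hflip(img):
    """Mirror each row (trace order reversed along the survey line)."""
    return [row[::-1] for row in img]

def add_noise(img, scale=0.01, seed=0):
    """Add small uniform noise to every amplitude sample."""
    rng = random.Random(seed)
    return [[v + rng.uniform(-scale, scale) for v in row] for row in img]

bscan = [[0.1, 0.5, 0.9],
         [0.2, 0.6, 1.0]]
augmented = [bscan, hflip(bscan), add_noise(bscan)]
```

A horizontal flip is physically valid for GPR because a cavity hyperbola is symmetric under reversing the survey direction, so each transform multiplies the scarce field data without changing its labels.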

Data Fusion, Ensemble and Clustering for the Severity Classification of Road Traffic Accident in Korea (데이터융합, 앙상블과 클러스터링을 이용한 교통사고 심각도 분류분석)

  • 손소영; 이성호
    • Proceedings of the Korean Operations and Management Science Society Conference / 2000.04a / pp.597-600 / 2000
  • Due to continuously increasing traffic volumes, not only environmental problems but also a considerable number of casualties and substantial property damage from traffic accidents are being recorded. This paper aims to contribute to traffic accident prevention by proposing a method for classifying accident severity using data fusion and ensemble clustering. To this end, typical data fusion techniques (Dempster-Shafer, Bayesian methods, and logistic fusion) were used to combine the probabilities of property damage and bodily injury obtained with neural networks and decision tree techniques. In addition, to improve classification accuracy, ensemble techniques (arcing, bagging) were applied, which select the majority outcome among multiple classification results obtained through bootstrap resampling. Furthermore, this study proposes a clustering method and shows that, compared with the existing fusion and ensemble techniques, it improves classification accuracy.
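Of the fusion techniques listed above, Dempster-Shafer combination is the least standard, so a minimal sketch may help. It combines two basic mass assignments over a frame of discernment, renormalizing away conflicting evidence; the frame {"injury", "property"} and all mass values below are invented, with m1 and m2 standing in for the neural network and decision tree outputs:

```python
# Dempster's rule of combination over sets (frozensets) of hypotheses.
def dempster(m1, m2):
    fused, conflict = {}, 0.0
    for a, ma in m1.items():
        for b, mb in m2.items():
            inter = a & b
            if inter:                       # compatible evidence
                fused[inter] = fused.get(inter, 0.0) + ma * mb
            else:                           # conflicting evidence
                conflict += ma * mb
    # Renormalize by the non-conflicting mass 1 - K.
    return {s: v / (1.0 - conflict) for s, v in fused.items()}

INJ, PROP = frozenset({"injury"}), frozenset({"property"})
BOTH = INJ | PROP                            # "don't know" mass
m1 = {INJ: 0.6, PROP: 0.3, BOTH: 0.1}        # e.g., neural network evidence
m2 = {INJ: 0.5, PROP: 0.4, BOTH: 0.1}        # e.g., decision tree evidence
fused = dempster(m1, m2)
```

The BOTH mass lets each source express ignorance, which is what distinguishes Dempster-Shafer fusion from a plain Bayesian product of probabilities.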


Human Action Recognition in Still Image Using Weighted Bag-of-Features and Ensemble Decision Trees (가중치 기반 Bag-of-Feature와 앙상블 결정 트리를 이용한 정지 영상에서의 인간 행동 인식)

  • Hong, June-Hyeok; Ko, Byoung-Chul; Nam, Jae-Yeal
    • The Journal of Korean Institute of Communications and Information Sciences / v.38A no.1 / pp.1-9 / 2013
  • This paper proposes a human action recognition method that uses bag-of-features (BoF) based on the CS-LBP (center-symmetric local binary pattern) and a spatial pyramid, in addition to a random forest classifier. To construct the BoF, an image is divided into dense regular grids and a descriptor is extracted from each patch. The code words, which form a visual vocabulary, are obtained by k-means clustering of a random subset of patches. For enhanced action discrimination, local BoF histograms from the three subdivided levels of a spatial pyramid are estimated, and a weighted BoF histogram is generated by concatenating the local histograms. For action classification, a random forest, which is an ensemble of decision trees, is built to model the distribution of each action class. The random forest combined with the weighted BoF histogram is successfully applied to the Stanford 40 Actions dataset, which includes various human action images, and its classification performance is better than that of other methods. Furthermore, the proposed method allows action recognition to be performed in near real time.

Credit Card Bad Debt Prediction Model based on Support Vector Machine (신용카드 대손회원 예측을 위한 SVM 모형)

  • Kim, Jin Woo; Jhee, Won Chul
    • Journal of Information Technology Services / v.11 no.4 / pp.233-250 / 2012
  • In this paper, credit card delinquency means the possibility of bad debt occurring in the near future in normal accounts that currently have no debt, and the problem is to predict, on a monthly basis, the occurrence of delinquency three months in advance. This prediction is a typical binary classification problem, but it suffers from data imbalance, meaning that instances of the target class are very few. For effective prediction of bad debt occurrence, a Support Vector Machine (SVM) with the kernel trick is adopted, using credit card usage and payment patterns as inputs. SVM is widely accepted in the data mining community because of its prediction accuracy and resistance to overfitting. However, SVM is known to be limited in its ability to process large-scale data. To resolve the difficulties in applying SVM to bad debt prediction, two-stage clustering is suggested as an effective data reduction method, and ensembles of SVM models are adopted to mitigate the difficulty posed by the data imbalance intrinsic to the target problem. In experiments with real-world data from one of the major domestic credit card companies, the suggested approach reveals prediction accuracy superior to traditional data mining approaches that use neural networks, decision trees, or logistic regression. The SVM ensemble model learned from the T2 training set shows the best prediction results among the alternatives considered, and it is noteworthy that the performance of neural networks with T2 is better than that of SVM with T1. These results prove that the suggested approach is very effective for both SVM training and classification under data imbalance.
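The abstract does not detail how its SVM ensemble is formed, but a common construction under class imbalance is to partition the large "normal" class into chunks, pair each chunk with the full minority "bad debt" class, and train one model per pair. A sketch under that assumption, with invented account IDs and the SVM training itself elided:

```python
# Balanced training sets for an imbalance-aware SVM ensemble (sketch).
normal = list(range(100))      # 100 majority-class accounts (toy IDs)
bad    = [200, 201, 202]       # 3 minority-class accounts (toy IDs)

chunk_size = 25
chunks = [normal[i:i + chunk_size] for i in range(0, len(normal), chunk_size)]

# Each training set pairs one majority chunk with ALL minority accounts,
# so every member model sees a far less imbalanced sample.
training_sets = [chunk + bad for chunk in chunks]
sizes = [len(ts) for ts in training_sets]
```

Each training set would train its own SVM, and the member predictions would then be combined, e.g. by majority vote; the partitioning also keeps each SVM's training set small, addressing the large-scale-data limitation the abstract mentions.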

Bagged Auto-Associative Kernel Regression-Based Fault Detection and Identification Approach for Steam Boilers in Thermal Power Plants

  • Yu, Jungwon; Jang, Jaeyel; Yoo, Jaeyeong; Park, June Ho; Kim, Sungshin
    • Journal of Electrical Engineering and Technology / v.12 no.4 / pp.1406-1416 / 2017
  • In complex and large-scale industries, properly designed fault detection and identification (FDI) systems considerably improve the safety, reliability, and availability of target processes. In thermal power plants (TPPs), generating units operate under very dangerous conditions; system failures can cause severe loss of life and property. In this paper, we propose a bagged auto-associative kernel regression (AAKR)-based FDI approach for steam boilers in TPPs. AAKR estimates new query vectors by online local modeling and is suitable for TPPs operating under various load levels. By combining the bagging method, more stable and reliable estimations can be achieved, since the effects of random fluctuations decrease through ensemble averaging. To validate performance, the proposed method and comparison methods (i.e., a clustering-based method and principal component analysis) are applied to failure data due to water wall tube leakage gathered from a 250 MW coal-fired TPP. Experimental results show that the proposed method achieves reasonable false alarm rates and, at the same time, better fault detection performance than the comparison methods. After fault detection, contribution analysis is carried out to identify the fault variables; this helps operators confirm the types of faults and efficiently take preventive actions.
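AAKR estimates a query as a kernel-weighted average of memory vectors recorded during normal operation, and the fault indicator is the residual between query and estimate; bagging averages estimates from bootstrap-resampled memories. A minimal sketch, where the two-variable memory data, the Gaussian bandwidth h, and the bag count are illustrative:

```python
# Bagged auto-associative kernel regression (AAKR) sketch.
import math, random

def aakr(query, memory, h=1.0):
    """Gaussian-kernel-weighted average of memory vectors."""
    weights = [math.exp(-sum((q - m) ** 2 for q, m in zip(query, vec)) / (2 * h * h))
               for vec in memory]
    total = sum(weights)
    return [sum(w * vec[j] for w, vec in zip(weights, memory)) / total
            for j in range(len(query))]

def bagged_aakr(query, memory, bags=10, seed=0):
    """Average AAKR estimates over bootstrap resamples of the memory."""
    rng = random.Random(seed)
    estimates = [aakr(query, [rng.choice(memory) for _ in memory])
                 for _ in range(bags)]
    return [sum(e[j] for e in estimates) / bags for j in range(len(query))]

memory = [[1.0, 10.0], [1.1, 10.2], [0.9, 9.9], [1.0, 10.1]]  # normal operation
normal_query = [1.0, 10.0]
estimate = bagged_aakr(normal_query, memory)
residual = [abs(q - e) for q, e in zip(normal_query, estimate)]
```

A query consistent with normal operation yields a small residual; a faulty query (e.g., a tube-leakage signature) falls far from the memory and produces a large residual, which triggers the alarm.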