• Title/Summary/Keyword: weighted ensemble


A Comparison Study of Ensemble Approach Using WRF/CMAQ Model - The High PM10 Episode in Busan (앙상블 방법에 따른 WRF/CMAQ 수치 모의 결과 비교 연구 - 2013년 부산지역 고농도 PM10 사례)

  • Kim, Taehee;Kim, Yoo-Keun;Shon, Zang-Ho;Jeong, Ju-Hee
    • Journal of Korean Society for Atmospheric Environment / v.32 no.5 / pp.513-525 / 2016
  • To propose an effective ensemble method for predicting $PM_{10}$ concentration, six experiments were designed with different ensemble averaging methods (non-weighted, single weighted, and cluster weighted). The single weighted method computed the weights using both multiple regression analysis and singular value decomposition, while the cluster weighted method estimated the weights based on temperature, relative humidity, and wind components using multiple regression analysis. The weighted averaging methods performed significantly better than the non-weighted method, and the results of the weighted experiments differed according to how the weights were calculated. The single weighted average method using multiple regression analysis showed the highest accuracy for hourly $PM_{10}$ concentration, and the cluster weighted average method based on relative humidity showed the highest accuracy for daily mean $PM_{10}$ concentration. However, the ensemble spread analysis showed better reliability for the single weighted average method than for the cluster weighted average method based on relative humidity. Thus, the single weighted average method was the most effective method in this case study.
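
The core of the single weighted method described above, estimating ensemble weights by multiple regression against observations and applying them as a weighted average, can be illustrated with a minimal sketch. This is not the authors' code; the array shapes, variable names, and toy data below are assumptions for illustration only.

    import numpy as np

    def regression_weights(member_forecasts, observations):
        # Least-squares (multiple regression) weights mapping member forecasts to observations.
        # member_forecasts: (n_times, n_members); observations: (n_times,)
        w, *_ = np.linalg.lstsq(member_forecasts, observations, rcond=None)
        return w

    def weighted_ensemble(member_forecasts, weights):
        return member_forecasts @ weights

    # Toy example with 3 members and 100 hourly values (illustrative data only).
    rng = np.random.default_rng(0)
    truth = rng.uniform(20, 120, size=100)                 # "observed" PM10
    members = np.stack([truth + rng.normal(0, s, 100)      # noisy ensemble members
                        for s in (5.0, 10.0, 20.0)], axis=1)

    w = regression_weights(members, truth)
    rmse = lambda a, b: float(np.sqrt(np.mean((a - b) ** 2)))
    print("RMSE, simple mean   :", rmse(members.mean(axis=1), truth))
    print("RMSE, weighted mean :", rmse(weighted_ensemble(members, w), truth))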

CNN-based Weighted Ensemble Technique for ImageNet Classification (대용량 이미지넷 인식을 위한 CNN 기반 Weighted 앙상블 기법)

  • Jung, Heechul;Choi, Min-Kook;Kim, Junkwang;Kwon, Soon;Jung, Wooyoung
    • IEMEK Journal of Embedded Systems and Applications / v.15 no.4 / pp.197-204 / 2020
  • The ImageNet dataset is a large-scale dataset containing various natural scene images. In this paper, we propose a convolutional neural network (CNN)-based weighted ensemble technique for the ImageNet classification task. First, in order to fuse several models, our technique assigns a weight to each model, unlike the existing average-based ensemble technique. We then propose an algorithm that automatically finds the coefficients used in the later ensemble process. The algorithm sequentially selects the model with the best performance on the validation set and then finds a weight that improves performance when the model is combined with the previously selected models. We applied the proposed algorithm to a total of 13 heterogeneous models, of which 5 were selected. These selected models were combined with their weights, achieving a 3.297% Top-5 error rate on the ImageNet test dataset.
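
A greedy weighted-ensemble selection in this spirit (pick the best model on the validation set, then repeatedly add the model and weight that most improve validation accuracy) might look like the following sketch. The interfaces, the grid of candidate weights, and the stopping rule are assumptions, not the paper's exact algorithm.

    import numpy as np

    def val_accuracy(probs, labels):
        return float(np.mean(np.argmax(probs, axis=1) == labels))

    def greedy_weighted_ensemble(model_probs, labels, weight_grid=None):
        # model_probs: list of (n_samples, n_classes) softmax outputs on the validation set.
        if weight_grid is None:
            weight_grid = np.linspace(0.1, 1.0, 10)
        remaining = list(range(len(model_probs)))
        best = max(remaining, key=lambda i: val_accuracy(model_probs[i], labels))
        selected, weights = [best], [1.0]
        remaining.remove(best)
        combined = model_probs[best].copy()

        improved = True
        while improved and remaining:
            improved = False
            base_acc = val_accuracy(combined, labels)
            best_gain, best_pick = 0.0, None
            for i in remaining:
                for w in weight_grid:
                    gain = val_accuracy(combined + w * model_probs[i], labels) - base_acc
                    if gain > best_gain:
                        best_gain, best_pick = gain, (i, w)
            if best_pick is not None:            # keep only strictly improving additions
                i, w = best_pick
                combined = combined + w * model_probs[i]
                selected.append(i)
                weights.append(w)
                remaining.remove(i)
                improved = True
        return selected, weights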

Wind Prediction with a Short-range Multi-Model Ensemble System (단시간 다중모델 앙상블 바람 예측)

  • Yoon, Ji Won;Lee, Yong Hee;Lee, Hee Choon;Ha, Jong-Chul;Lee, Hee Sang;Chang, Dong-Eon
    • Atmosphere / v.17 no.4 / pp.327-337 / 2007
  • In this study, we examined a new ensemble training approach to reduce the systematic error and improve the prediction skill of wind by using the Short-range Ensemble prediction system (SENSE), a mesoscale multi-model ensemble prediction system. SENSE has 16 ensemble members based on the MM5, WRF ARW, and WRF NMM. We evaluated the skill of surface wind prediction against AWS (Automatic Weather Station) observations during the summer season (June-August 2006). In the first stage, the initial state of each member was corrected with respect to the observed values; the corrected members then underwent a training stage to find an adaptive weight function formulated from the Root Mean Square Vector Error (RMSVE). Sensitivity experiments on the training interval showed that the optimal training period was one day. The resulting weighted ensemble average showed smaller errors in the spatial and temporal patterns of wind speed than the simple ensemble average.
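
The RMSVE-based weighting can be sketched as follows: each member's weight is taken inversely proportional to its Root Mean Square Vector Error over the training window, and the normalized weights define the weighted ensemble mean of the wind components. This is an illustrative assumption about the form of the weight function, not the SENSE implementation.

    import numpy as np

    def rmsve(u_fcst, v_fcst, u_obs, v_obs):
        # Root Mean Square Vector Error of one member over the training window.
        return np.sqrt(np.mean((u_fcst - u_obs) ** 2 + (v_fcst - v_obs) ** 2))

    def rmsve_weights(members_u, members_v, u_obs, v_obs):
        # members_u, members_v: (n_members, n_times) wind components in the training window.
        errors = np.array([rmsve(u, v, u_obs, v_obs) for u, v in zip(members_u, members_v)])
        inv = 1.0 / np.maximum(errors, 1e-6)       # smaller error -> larger weight
        return inv / inv.sum()

    def weighted_wind(members_u, members_v, weights):
        # Weighted ensemble mean of the forecast wind components.
        u = np.tensordot(weights, members_u, axes=1)
        v = np.tensordot(weights, members_v, axes=1)
        return u, v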

A Combination and Calibration of Multi-Model Ensemble of PyeongChang Area Using Ensemble Model Output Statistics (Ensemble Model Output Statistics를 이용한 평창지역 다중 모델 앙상블 결합 및 보정)

  • Hwang, Yuseon;Kim, Chansoo
    • Atmosphere / v.28 no.3 / pp.247-261 / 2018
  • The objective of this paper is to compare probabilistic temperature forecasts from different regional and global ensemble prediction systems over the PyeongChang area. A statistical post-processing method is used to combine and calibrate forecasts from the different numerical prediction systems, laying greater weight on the ensemble model that exhibits the best performance. Temperature observations were obtained from 30 stations in PyeongChang, and three different ensemble forecasts, derived from the European Centre for Medium-Range Weather Forecasts, the Ensemble Prediction System for Global, and the Limited Area Ensemble Prediction System, were obtained between 1 May 2014 and 18 March 2017. Prior to applying the post-processing methods, a reliability analysis was conducted to identify the statistical consistency of the ensemble forecasts and the corresponding observations. Then, ensemble model output statistics and bias-correction methods were applied to each raw ensemble model, and a weighted combination of the ensembles was proposed. The results showed that the proposed methods perform better than the raw ensemble mean. In particular, the multi-model forecast based on ensemble model output statistics was superior to the bias-corrected forecast in terms of deterministic prediction.
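
A minimal EMOS-style sketch is shown below: the calibrated temperature forecast is modeled as a Gaussian whose mean is an affine combination of the member means and whose variance is linear in the ensemble variance, with parameters fit by maximum likelihood over a training period. The exact predictive form, the optimizer, and the use of scipy are assumptions for illustration, not the paper's implementation.

    import numpy as np
    from scipy.optimize import minimize
    from scipy.stats import norm

    def fit_emos(member_means, ens_var, obs):
        # member_means: (n_times, n_members); ens_var, obs: (n_times,)
        # Assumed predictive distribution: N(a + member_means @ b, c + d * ens_var).
        n_members = member_means.shape[1]

        def neg_log_likelihood(params):
            a = params[0]
            b = params[1:1 + n_members]                      # weights on the member means
            c, d = params[1 + n_members], params[2 + n_members]
            mu = a + member_means @ b
            sigma = np.sqrt(np.maximum(c + d * ens_var, 1e-6))
            return -np.sum(norm.logpdf(obs, loc=mu, scale=sigma))

        x0 = np.concatenate(([0.0], np.full(n_members, 1.0 / n_members), [1.0, 0.1]))
        return minimize(neg_log_likelihood, x0, method="Nelder-Mead").x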

Potential Impact of Climate Change on Distribution of Hedera rhombea in the Korean Peninsula (기후변화에 따른 송악의 잠재서식지 분포 변화 예측)

  • Park, Seon Uk;Koo, Kyung Ah;Seo, Changwan;Kong, Woo-Seok
    • Journal of Climate Change Research / v.7 no.3 / pp.325-334 / 2016
  • We projected the distribution of Hedera rhombea, an evergreen broad-leaved climbing plant, under current climate conditions and predicted its future distribution under global warming. In addition, we addressed model uncertainty by employing 9 single species distribution models (SDMs) for Hedera rhombea. The 9 single SDMs were constructed with 736 presence/absence records and three temperature and three precipitation variables. The uncertainty of each SDM was assessed with the TSS (True Skill Statistic) and the AUC (area under the curve) of ROC (receiver operating characteristic) analyses. To reduce model uncertainty, we combined the 9 single SDMs weighted by TSS, resulting in an ensemble forecast, the TSS weighted ensemble. We predicted future distributions under future climate conditions for the period around 2050 (2040~2060), estimated with HadGEM2-AO. RF (Random Forest), GBM (Generalized Boosted Model) and the TSS weighted ensemble model showed higher prediction accuracies (AUC > 0.95, TSS > 0.80) than the other SDMs. Based on the projections of the TSS weighted ensemble, potential habitats under current climate conditions showed a discrepancy with actual habitats, especially at the northern distribution limit: the observed northern boundary of Hedera rhombea is Ulsan in the eastern Korean Peninsula, but the projected limit was the eastern coast of Gangwon Province. Geomorphological conditions and dispersal limitations mediated by birds, namely the lack of bird habitats along the eastern coast of Gangwon Province, account for this discrepancy. In general, potential habitats of Hedera rhombea expanded under future climate conditions, but the extent of expansion depended on the RCP scenario: into the Jeolla inland area under RCP 4.5, and into Chungnam and Wonsan under RCP 8.5. Our results provide fundamental information for understanding the potential effects of climate change on the distribution of Hedera rhombea.
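
The TSS-weighted combination itself is simple: each SDM's habitat-suitability map is weighted by that model's True Skill Statistic and the weighted maps are averaged. The sketch below illustrates this under assumed array shapes; it is not the authors' workflow.

    import numpy as np

    def tss(pred_binary, obs_binary):
        # True Skill Statistic = sensitivity + specificity - 1.
        tp = np.sum((pred_binary == 1) & (obs_binary == 1))
        tn = np.sum((pred_binary == 0) & (obs_binary == 0))
        fp = np.sum((pred_binary == 1) & (obs_binary == 0))
        fn = np.sum((pred_binary == 0) & (obs_binary == 1))
        sensitivity = tp / max(tp + fn, 1)
        specificity = tn / max(tn + fp, 1)
        return sensitivity + specificity - 1

    def tss_weighted_ensemble(model_probs, tss_scores):
        # model_probs: (n_models, n_sites) habitat suitability; tss_scores: (n_models,)
        w = np.clip(np.asarray(tss_scores, dtype=float), 0, None)   # ignore models with TSS <= 0
        return np.tensordot(w, model_probs, axes=1) / w.sum()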

A New Incremental Learning Algorithm with Probabilistic Weights Using Extended Data Expression

  • Yang, Kwangmo;Kolesnikova, Anastasiya;Lee, Won Don
    • Journal of information and communication convergence engineering / v.11 no.4 / pp.258-267 / 2013
  • A new incremental learning algorithm using extended data expression, based on probabilistic compounding, is presented in this paper. The incremental learning algorithm generates an ensemble of weak classifiers and combines these classifiers into a strong classifier using weighted majority voting to improve classification performance. We introduce a new probabilistic weighted majority voting scheme founded on extended data expression, in which the class distribution of the output is used to combine classifiers. UChoo, a decision tree classifier for extended data expression, is used as the base classifier, as it produces an extended output expression that defines the class distribution of the output. Extended data expression and the UChoo classifier are powerful techniques for classification and rule refinement problems. In this paper, extended data expression is applied to obtain probabilistic results with probabilistic majority voting. To show its performance advantages, the new algorithm is compared with Learn++, an incremental ensemble-based algorithm.
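
The probabilistic weighted majority vote can be sketched as follows: each base classifier outputs a class distribution rather than a hard label, the distributions are summed with per-classifier weights, and the combined distribution is renormalized before taking the argmax. The interfaces and weight source are illustrative assumptions, not the UChoo or Learn++ implementations.

    import numpy as np

    def probabilistic_weighted_vote(class_distributions, classifier_weights):
        # class_distributions: (n_classifiers, n_samples, n_classes) per-classifier probabilities.
        # classifier_weights : (n_classifiers,), e.g. derived from training error.
        w = np.asarray(classifier_weights, dtype=float)
        combined = np.tensordot(w, class_distributions, axes=1)   # weighted sum of distributions
        combined /= combined.sum(axis=1, keepdims=True)           # renormalize per sample
        return combined.argmax(axis=1), combined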

A Study on Classification Performance Analysis of Convolutional Neural Network using Ensemble Learning Algorithm (앙상블 학습 알고리즘을 이용한 컨벌루션 신경망의 분류 성능 분석에 관한 연구)

  • Park, Sung-Wook;Kim, Jong-Chan;Kim, Do-Yeon
    • Journal of Korea Multimedia Society / v.22 no.6 / pp.665-675 / 2019
  • In this paper, we compare and analyze the classification performance of the deep learning algorithm Convolutional Neural Network (CNN) according to ensemble generation and combining techniques. We used several CNN models (VGG16, VGG19, DenseNet121, DenseNet169, DenseNet201, ResNet18, ResNet34, ResNet50, ResNet101, ResNet152, GoogLeNet) to create 10 ensemble generation combinations and applied 6 combining techniques (average, weighted average, maximum, minimum, median, product) to the optimal combination. In the experimental results, the DenseNet169-VGG16-GoogLeNet combination for ensemble generation and the product rule for ensemble combining showed the best performance. Based on this, we conclude that ensembling different models with high benchmark scores is another way to obtain good results.
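
The six combining rules named above reduce to simple element-wise operations on the per-model softmax outputs. The sketch below illustrates them under an assumed array layout; it is not the authors' code.

    import numpy as np

    def combine(probs, rule="product", weights=None):
        # probs: (n_models, n_samples, n_classes) softmax outputs; returns predicted class indices.
        if rule == "average":
            fused = probs.mean(axis=0)
        elif rule == "weighted_average":
            w = np.asarray(weights, dtype=float)
            fused = np.tensordot(w / w.sum(), probs, axes=1)
        elif rule == "maximum":
            fused = probs.max(axis=0)
        elif rule == "minimum":
            fused = probs.min(axis=0)
        elif rule == "median":
            fused = np.median(probs, axis=0)
        elif rule == "product":
            fused = np.prod(probs, axis=0)
        else:
            raise ValueError(f"unknown rule: {rule}")
        return fused.argmax(axis=1)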

Ensemble learning of Regional Experts (지역 전문가의 앙상블 학습)

  • Lee, Byung-Woo;Yang, Ji-Hoon;Kim, Seon-Ho
    • Journal of KIISE: Computing Practices and Letters / v.15 no.2 / pp.135-139 / 2009
  • We present a new ensemble learning method that employs a set of regional experts, each of which learns to handle a subset of the training data. We split the training data and generate experts for different regions in the feature space. When classifying an instance, we apply weighted voting among the experts whose regions include the instance. We used ten datasets to compare the performance of our new ensemble method with that of single classifiers as well as other ensemble methods such as Bagging and AdaBoost, using SMO, Naive Bayes, and C4.5 as base learning algorithms. We found that the performance of our method is comparable to that of AdaBoost and Bagging when the base learner is C4.5; in the remaining cases, our method outperformed the benchmark methods.
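
One way to realize the regional-experts idea is to partition the feature space with k-means, train one classifier per region, and classify a query by inverse-distance weighted voting among the experts whose regions are nearest to it. The sketch below follows that reading with assumed components (scikit-learn KMeans and decision trees); the paper's actual region construction and vote weights may differ.

    import numpy as np
    from sklearn.cluster import KMeans
    from sklearn.tree import DecisionTreeClassifier

    class RegionalExpertEnsemble:
        def __init__(self, n_regions=5, n_voting_experts=2):
            self.n_regions = n_regions
            self.n_voting_experts = n_voting_experts

        def fit(self, X, y):
            # Partition the feature space and train one expert per region.
            self.kmeans = KMeans(n_clusters=self.n_regions, n_init=10, random_state=0).fit(X)
            regions = self.kmeans.labels_
            self.experts = [DecisionTreeClassifier().fit(X[regions == r], y[regions == r])
                            for r in range(self.n_regions)]
            self.classes_ = np.unique(y)
            return self

        def predict(self, X):
            dists = self.kmeans.transform(X)                 # distance to each region center
            votes = np.zeros((len(X), len(self.classes_)))
            nearest = np.argsort(dists, axis=1)[:, :self.n_voting_experts]
            for i, regions in enumerate(nearest):
                for r in regions:
                    pred = self.experts[r].predict(X[i:i + 1])[0]
                    weight = 1.0 / (dists[i, r] + 1e-6)      # closer expert -> larger vote
                    votes[i, np.searchsorted(self.classes_, pred)] += weight
            return self.classes_[votes.argmax(axis=1)]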

Ensemble of Fuzzy Decision Tree for Efficient Indoor Space Recognition

  • Kim, Kisang;Choi, Hyung-Il
    • Journal of the Korea Society of Computer and Information / v.22 no.4 / pp.33-39 / 2017
  • In this paper, we extend the classification process to an ensemble of fuzzy decision trees. For indoor space recognition, much research uses the Boosted Tree, which consists of AdaBoost and decision trees. The Boosted Tree extracts an optimal decision tree in stages; at each stage, it extracts a good decision tree by minimizing the weighted classification error. Each such decision tree performs a hard decision, and in most cases hard decisions introduce errors when classifying samples near a dividing point. We therefore suggest an ensemble of fuzzy decision trees, which adds flexibility to the Boosted Tree algorithm as well as high performance. In the experimental results, the accuracy of the suggested method improved by about 13% over the traditional one.
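
The contrast between a hard split and a fuzzy one can be made concrete with a small sketch: a hard decision flips abruptly at the dividing point, while a soft membership function gives samples near the threshold small-magnitude votes, and the boosted ensemble sums these soft votes with the stage weights. The sigmoid-style membership below is an illustrative choice, not the paper's exact formulation.

    import numpy as np

    def hard_vote(x, threshold):
        # Hard decision: +1 right of the threshold, -1 left of it.
        return np.where(x >= threshold, 1.0, -1.0)

    def fuzzy_vote(x, threshold, softness=1.0):
        # Soft decision in [-1, 1]; samples near the threshold get small-magnitude votes.
        return np.tanh((x - threshold) / softness)

    def boosted_score(x, stumps):
        # stumps: list of (alpha, threshold, softness); weighted sum of soft votes.
        return sum(alpha * fuzzy_vote(x, t, s) for alpha, t, s in stumps)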

Ensemble of Classifiers Constructed on Class-Oriented Attribute Reduction

  • Li, Min;Deng, Shaobo;Wang, Lei
    • Journal of Information Processing Systems / v.16 no.2 / pp.360-376 / 2020
  • Many heuristic attribute reduction algorithms have been proposed to find a single reduct that functions as the entire set of original attributes without loss of classification capability; however, such reducts are not always adequate for multiclass datasets. In this study, based on a probabilistic rough set model, we propose the class-oriented attribute reduction (COAR) algorithm, which separately finds a reduct for each target class, so that there is a strong dependence between a reduct and its target class. We then propose a type of ensemble constructed from a group of classifiers based on class-oriented reducts with a customized weighted majority voting strategy. We evaluated the performance of the proposed algorithm on five real multiclass datasets, and the experimental results confirm its superiority in terms of four general evaluation metrics.
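
The ensemble structure described here, one classifier per target class trained only on that class's attribute reduct and combined by weighted majority voting, can be sketched as follows. The reduct dictionary, the choice of decision trees, and the per-member weights are assumptions for illustration; the paper's customized voting strategy may differ.

    import numpy as np
    from sklearn.tree import DecisionTreeClassifier

    def fit_class_oriented_ensemble(X, y, class_reducts):
        # class_reducts: {class_label: list of attribute indices forming that class's reduct}.
        members = {}
        for c, attrs in class_reducts.items():
            members[c] = (attrs, DecisionTreeClassifier(random_state=0).fit(X[:, attrs], y))
        return members

    def weighted_majority_predict(members, X, member_weights):
        # member_weights: {class_label: weight of the classifier built on that class's reduct}.
        classes = sorted(members)
        votes = np.zeros((len(X), len(classes)))
        for c, (attrs, clf) in members.items():
            preds = clf.predict(X[:, attrs])
            for j, cls in enumerate(classes):
                votes[:, j] += member_weights[c] * (preds == cls)
        return np.array(classes)[votes.argmax(axis=1)]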