• Title/Abstract/Keyword: Learning/Training Algorithms

Improving Performance of Machine Learning-based Haze Removal Algorithms with Enhanced Training Database

  • Ngo, Dat;Kang, Bongsoon
    • 전기전자학회논문지
    • /
    • Vol. 22, No. 4
    • /
    • pp.948-952
    • /
    • 2018
  • Haze removal has attracted considerable scientific interest due to its many practical applications. Existing algorithms are founded upon histogram equalization, contrast maximization, or, following a growing trend, machine learning applied to image processing. Since machine learning-based algorithms solve problems by learning from data, they usually outperform those based on traditional image processing and computer vision techniques. However, achieving such high performance requires a large and reliable training database, which is difficult to obtain because acquiring paired real hazy and haze-free images is highly complex. As a result, researchers currently use synthetic databases, built by introducing synthetic haze drawn from the standard uniform distribution into clear images. In this paper, we propose an enhanced equidistribution, improving upon our previous study on equidistribution, and use it to construct a new database for training machine learning-based haze removal algorithms. A large number of experiments verify the effectiveness of the proposed methodology.
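
A minimal sketch of the haze synthesis step described above, using the standard atmospheric scattering model I = J·t + A·(1 − t) with the transmission drawn from a uniform distribution. The sampling ranges are illustrative assumptions, and the paper's enhanced equidistribution is not reproduced here.

```python
# Sketch (assumptions noted): synthesize a hazy image from a clear one via
# the atmospheric scattering model I = J*t + A*(1 - t), sampling the
# transmission t uniformly. This illustrates plain uniform sampling only,
# not the paper's enhanced equidistribution.
import numpy as np

def synthesize_haze(clear_img: np.ndarray, rng: np.random.Generator) -> np.ndarray:
    """clear_img: float RGB image in [0, 1], shape (H, W, 3)."""
    t = rng.uniform(0.1, 1.0)   # global transmission (illustrative range)
    A = rng.uniform(0.7, 1.0)   # atmospheric light, assumed near-white
    return clear_img * t + A * (1.0 - t)

rng = np.random.default_rng(0)
clear = rng.random((480, 640, 3))            # stand-in for a real clear image
hazy = synthesize_haze(clear, rng)
```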

The Comparison of Neural Network Learning Paradigms: Backpropagation, Simulated Annealing, Genetic Algorithm, and Tabu Search

  • Chen Ming-Kuen
    • 한국품질경영학회:학술대회논문집
    • /
    • 한국품질경영학회 1998: The 12th Asia Quality Management Symposium - Total Quality Management for Restoring Competitiveness
    • /
    • pp.696-704
    • /
    • 1998
  • Artificial neural networks (ANNs) have been successfully applied in various areas, but how to construct an effective network remains a critical problem. This study focuses on that problem. ANNs were constructed with four different learning algorithms: backpropagation, simulated annealing, genetic algorithm, and tabu search. The experimental results of the four learning algorithms were then compared by statistical analysis, using training RMS, training time, and testing RMS as the comparison criteria.

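As a rough illustration of one of the four paradigms compared above, the sketch below trains a tiny network's weights by simulated annealing and tracks the training RMS; the toy data, network size, and cooling schedule are assumptions, not the study's settings.

```python
# Sketch: simulated annealing over the weights of a tiny one-hidden-layer
# network, with training RMS as the criterion (toy setup, not the study's).
import numpy as np

rng = np.random.default_rng(1)
X = rng.uniform(-1, 1, (100, 2))
y = np.sin(X[:, 0]) + 0.5 * X[:, 1]              # toy regression target

def rms(w):
    W1, b1, W2 = w[:8].reshape(2, 4), w[8:12], w[12:16]
    h = np.tanh(X @ W1 + b1)                      # hidden layer
    return np.sqrt(np.mean((h @ W2 - y) ** 2))

w, T = rng.normal(0, 0.5, 16), 1.0
best_w, best_rms = w, rms(w)
for _ in range(5000):
    cand = w + rng.normal(0, 0.1, 16)             # random perturbation
    d = rms(cand) - rms(w)
    if d < 0 or rng.random() < np.exp(-d / T):    # Metropolis acceptance
        w = cand
        if rms(w) < best_rms:
            best_w, best_rms = w, rms(w)
    T *= 0.999                                    # geometric cooling
print(f"best training RMS: {best_rms:.4f}")
```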

광학 영상의 구름 제거를 위한 기계학습 알고리즘의 예측 성능 평가: 농경지 사례 연구 (Performance Evaluation of Machine Learning Algorithms for Cloud Removal of Optical Imagery: A Case Study in Cropland)

  • 박소연;곽근호;안호용;박노욱
    • 대한원격탐사학회지
    • /
    • Vol. 39, No. 5_1
    • /
    • pp.507-519
    • /
    • 2023
  • Multi-temporal optical images have been utilized for time-series monitoring of croplands. However, the presence of clouds limits image availability, often requiring a cloud removal procedure. This study assesses the applicability of various machine learning algorithms for effective cloud removal in optical imagery. We conducted comparative experiments focusing on two key variables that significantly influence the predictive performance of machine learning algorithms: (1) the land-cover types of the training data and (2) the temporal variability of those land-cover types. Three machine learning algorithms, Gaussian process regression (GPR), support vector machine (SVM), and random forest (RF), were employed in experiments using simulated cloudy images over paddy fields in Gunsan. GPR and SVM exhibited superior prediction accuracy when the training data had the same land-cover types as the cloud region, and GPR showed the best stability with respect to sampling fluctuations. In addition, RF was the least affected by the land-cover types and temporal variations of the training data. These results indicate that GPR is recommended when the land-cover type and spectral characteristics of the training data are the same as those of the cloud region, whereas RF should be applied when it is difficult to obtain training data with the same land-cover types as the cloud region. Therefore, the land-cover types in cloud areas should be taken into account when extracting informative training data, along with selecting the optimal machine learning algorithm.
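
The comparison above can be mimicked in a few lines with scikit-learn; the synthetic regression data below merely stands in for spectral band values and is not the Gunsan dataset, and the default hyperparameters are an assumption.

```python
# Sketch: comparing the three regressors used in the study (GPR, SVM, RF)
# on synthetic gap-filling data with scikit-learn defaults.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.metrics import mean_squared_error
from sklearn.model_selection import train_test_split
from sklearn.svm import SVR

rng = np.random.default_rng(0)
X = rng.random((500, 4))                           # stand-in predictor bands
y = X @ np.array([0.4, 0.3, 0.2, 0.1]) + rng.normal(0, 0.02, 500)  # stand-in target band

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
for name, model in [("GPR", GaussianProcessRegressor()),
                    ("SVM", SVR()),
                    ("RF", RandomForestRegressor(random_state=0))]:
    pred = model.fit(X_tr, y_tr).predict(X_te)
    print(name, "RMSE =", round(float(np.sqrt(mean_squared_error(y_te, pred))), 4))
```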

ON LEARNING OF CMAC FOR MANIPULATOR CONTROL

  • Hwang, Heon;Choi, Dong-Y.
    • 제어로봇시스템학회:학술대회논문집
    • /
    • 제어로봇시스템학회 1989년도 한국자동제어학술회의논문집 (Proceedings of the 1989 Korea Automatic Control Conference); Seoul, Korea; 27-28 Oct. 1989
    • /
    • pp.653-662
    • /
    • 1989
  • The Cerebellar Model Arithmetic Controller (CMAC) has been introduced as an adaptive control function generator. CMAC computes control functions by referring to a distributed memory table storing functional values rather than by solving equations analytically or numerically. CMAC has a unique coarse-coding mapping structure and a supervised delta-rule learning property. In this paper, the learning aspects and convergence of the CMAC were investigated. Efficient training algorithms were developed to overcome the limitations of conventional maximum error correction training and to eliminate the accumulated learning error caused by sequential node training. A nonlinear function generator and a motion generator for a two d.o.f. manipulator were simulated. The efficiency of the various learning algorithms was demonstrated through the CPU time used and the convergence of the RMS and maximum errors accumulated during learning. The generalization property and the learning effect of various gains were also simulated. A uniform quantization method was applied to handle various ranges of input variables efficiently.

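A minimal sketch of the CMAC structure described above: uniform quantization, coarse coding over several overlapping tilings, and delta-rule updates on a distributed weight table. The tiling count, resolution, and learning rate are illustrative assumptions.

```python
# Sketch: a 1-D CMAC as a distributed memory table with coarse coding
# (overlapping tilings) and delta-rule learning (illustrative parameters).
import numpy as np

class CMAC:
    def __init__(self, n_tilings=8, n_bins=32, lo=0.0, hi=1.0, lr=0.1):
        self.n_tilings, self.n_bins = n_tilings, n_bins
        self.lo, self.hi, self.lr = lo, hi, lr
        self.w = np.zeros((n_tilings, n_bins + 1))   # distributed memory table
        self.rows = np.arange(n_tilings)

    def _cells(self, x):
        u = (x - self.lo) / (self.hi - self.lo) * self.n_bins  # uniform quantization
        offsets = self.rows / self.n_tilings                   # shifted tilings
        return np.floor(u + offsets).astype(int).clip(0, self.n_bins)

    def predict(self, x):
        return self.w[self.rows, self._cells(x)].sum()

    def train(self, x, target):
        cells = self._cells(x)
        err = target - self.w[self.rows, cells].sum()
        self.w[self.rows, cells] += self.lr * err / self.n_tilings  # delta rule

net, rng = CMAC(), np.random.default_rng(0)
for _ in range(20000):                           # nonlinear function generator
    x = rng.uniform(0.0, 1.0)
    net.train(x, np.sin(2 * np.pi * x))
grid = np.linspace(0.0, 1.0, 200)
err = [net.predict(x) - np.sin(2 * np.pi * x) for x in grid]
print(f"RMS error: {np.sqrt(np.mean(np.square(err))):.4f}")
```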

ON LEARNING OF CMAC FOR MANIPULATOR CONTROL

  • 최동엽;황현
    • 한국기계연구소 소보
    • /
    • Serial No. 19
    • /
    • pp.93-115
    • /
    • 1989
  • The Cerebellar Model Arithmetic Controller (CMAC) has been introduced as an adaptive control function generator. CMAC computes control functions by referring to a distributed memory table storing functional values rather than by solving equations analytically or numerically. CMAC has a unique coarse-coding mapping structure and a supervised delta-rule learning property. In this paper, the learning aspects and convergence of the CMAC were investigated. Efficient training algorithms were developed to overcome the limitations of conventional maximum error correction training and to eliminate the accumulated learning error caused by sequential node training. A nonlinear function generator and a motion generator for a two d.o.f. manipulator were simulated. The efficiency of the various learning algorithms was demonstrated through the CPU time used and the convergence of the RMS and maximum errors accumulated during learning. The generalization property and the learning effect of various gains were also simulated. A uniform quantization method was applied to handle various ranges of input variables efficiently.


유전자 알고리즘을 이용한 분류자 앙상블의 최적 선택 (Optimal Selection of Classifier Ensemble Using Genetic Algorithms)

  • 김명종
    • 지능정보연구
    • /
    • Vol. 16, No. 4
    • /
    • pp.99-112
    • /
    • 2010
  • Ensemble learning is a machine learning technique proposed to improve the performance of classification and prediction algorithms. However, when the diversity of the base classifiers is insufficient, ensemble learning suffers from multicollinearity, so the performance gains are marginal and performance may even deteriorate. To secure the diversity of the base classifiers and strengthen the performance gains of ensemble learning, this study proposes a range optimization technique based on genetic algorithms. Applying the proposed optimization technique to an artificial neural network ensemble for corporate bankruptcy prediction showed that the diversity of the base classifiers was secured and that the performance of the neural network ensemble improved significantly.
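
A minimal sketch of the idea above: a genetic algorithm searching for the subset of base classifiers whose majority vote maximizes validation accuracy. The population size, generations, mutation rate, and the random stand-in predictions are all illustrative assumptions.

```python
# Sketch: GA-based selection of a classifier ensemble, with a bit string
# encoding which base classifiers are included (toy data, not the study's).
import numpy as np

rng = np.random.default_rng(0)
n_clf, n_val = 20, 200
preds = rng.integers(0, 2, (n_clf, n_val))        # stand-in base-classifier outputs
y_val = rng.integers(0, 2, n_val)                 # stand-in validation labels

def fitness(mask):
    if mask.sum() == 0:
        return 0.0
    vote = (preds[mask.astype(bool)].mean(axis=0) >= 0.5).astype(int)  # majority vote
    return float((vote == y_val).mean())

pop = rng.integers(0, 2, (30, n_clf))             # population of selection bit strings
for _ in range(50):
    fit = np.array([fitness(m) for m in pop])
    parents = pop[np.argsort(fit)[-10:]]          # truncation selection
    children = []
    for _ in range(len(pop)):
        a, b = parents[rng.integers(10)], parents[rng.integers(10)]
        cut = int(rng.integers(1, n_clf))
        child = np.concatenate([a[:cut], b[cut:]])     # one-point crossover
        flip = rng.random(n_clf) < 0.05                # bit-flip mutation
        children.append(np.where(flip, 1 - child, child))
    pop = np.array(children)

best = pop[np.argmax([fitness(m) for m in pop])]
print("selected base classifiers:", np.flatnonzero(best))
```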

A Comparison of Meta-learning and Transfer-learning for Few-shot Jamming Signal Classification

  • Jin, Mi-Hyun;Koo, Ddeo-Ol-Ra;Kim, Kang-Suk
    • Journal of Positioning, Navigation, and Timing
    • /
    • Vol. 11, No. 3
    • /
    • pp.163-172
    • /
    • 2022
  • Typical anti-jamming technologies based on array antennas, Space Time Adaptive Processing (STAP) and Space Frequency Adaptive Processing (SFAP), are very effective algorithms for nulling and beamforming. However, they do not perform equally well for all types of jamming signals. If the anti-jamming algorithm is not optimized for each signal type, anti-jamming performance deteriorates and the operational stability of the system worsens due to unnecessary computation. Therefore, a jamming classification technique is required to obtain optimal anti-jamming performance. Machine learning, which has recently been in the spotlight, can be considered for classifying jamming signals. In general, supervised learning for classification requires a huge amount of data as well as retraining for unfamiliar signals. For jamming signal classification, it is difficult to obtain a large amount of data because an outdoor jamming signal reception environment is hard to configure and the attacker's signal type is unknown. Therefore, this paper proposes a few-shot jamming signal classification technique that uses meta-learning and transfer-learning to train the model with a small amount of data. A training dataset is constructed from the anti-jamming algorithm input data within the GNSS receiver when jamming signals are applied. For meta-learning, the Model-Agnostic Meta-Learning (MAML) algorithm with a general Convolutional Neural Network (CNN) model is used, and the same CNN model is used for transfer-learning. Both are trained episodically using training datasets generated by our Python-based simulator. The results show that both algorithms can be trained with little data and can immediately respond to new signal types. The performance of the two algorithms is also compared to determine which is more suitable for classifying jamming signals.
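
A skeleton of the episodic MAML training loop described above, written with PyTorch's torch.func; the tiny CNN, the random episode sampler, and all hyperparameters are placeholders, not the paper's simulator or architecture.

```python
# Sketch: second-order MAML for few-shot classification (placeholder model
# and episodes; not the paper's simulator, CNN, or hyperparameters).
import torch
import torch.nn as nn
import torch.nn.functional as F
from torch.func import functional_call

def sample_episode(n_way=2, k_shot=5):
    # placeholder episode: random tensors stand in for anti-jamming input data
    xs = torch.randn(n_way * k_shot, 1, 32, 32)
    ys = torch.arange(n_way).repeat_interleave(k_shot)
    xq = torch.randn(n_way * k_shot, 1, 32, 32)
    yq = torch.arange(n_way).repeat_interleave(k_shot)
    return xs, ys, xq, yq

model = nn.Sequential(nn.Conv2d(1, 8, 3), nn.ReLU(), nn.AdaptiveAvgPool2d(1),
                      nn.Flatten(), nn.Linear(8, 2))
meta_opt = torch.optim.Adam(model.parameters(), lr=1e-3)
inner_lr = 0.05

for episode in range(100):
    xs, ys, xq, yq = sample_episode()
    params = dict(model.named_parameters())
    # inner loop: one gradient step on the support set
    support_loss = F.cross_entropy(functional_call(model, params, (xs,)), ys)
    grads = torch.autograd.grad(support_loss, list(params.values()), create_graph=True)
    fast = {k: p - inner_lr * g for (k, p), g in zip(params.items(), grads)}
    # outer loop: meta-update from the query-set loss of the adapted weights
    query_loss = F.cross_entropy(functional_call(model, fast, (xq,)), yq)
    meta_opt.zero_grad()
    query_loss.backward()
    meta_opt.step()
```

For transfer-learning, the same CNN would instead be pretrained on available signal classes and then fine-tuned on the few-shot set.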

Performance analysis and comparison of various machine learning algorithms for early stroke prediction

  • Vinay Padimi;Venkata Sravan Telu;Devarani Devi Ningombam
    • ETRI Journal
    • /
    • Vol. 45, No. 6
    • /
    • pp.1007-1021
    • /
    • 2023
  • Stroke is the leading cause of permanent disability in adults and can cause permanent brain damage. According to the World Health Organization, 795,000 Americans experience a new or recurrent stroke each year. Early detection of medical disorders such as strokes can minimize their disabling effects. Thus, in this paper, we consider various risk factors that contribute to the occurrence of stroke and apply machine learning algorithms, for example, the decision tree, random forest, and naive Bayes algorithms, to patient characteristics survey data to achieve high prediction accuracy. We also consider the semisupervised self-training technique to predict the risk of stroke, together with the near-miss undersampling technique, which retains only the majority-class instances closest to the minority-class instances. Experimental results demonstrate that the proposed method obtains an accuracy of approximately 98.83% at low cost, which is significantly higher and more reliable than the compared techniques.
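
The two sampling-related techniques above can be sketched with scikit-learn and the imbalanced-learn package; the synthetic dataset and the 70% unlabeled fraction are illustrative assumptions, not the patient survey data.

```python
# Sketch: near-miss undersampling followed by semisupervised self-training
# (synthetic imbalanced data; requires the imbalanced-learn package).
import numpy as np
from imblearn.under_sampling import NearMiss
from sklearn.datasets import make_classification
from sklearn.semi_supervised import SelfTrainingClassifier
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=2000, weights=[0.95, 0.05], random_state=0)
X_res, y_res = NearMiss().fit_resample(X, y)   # keep majority samples nearest the minority class
print("class counts after undersampling:", np.bincount(y_res))

# hide most labels to simulate the semisupervised setting (-1 = unlabeled)
rng = np.random.default_rng(0)
y_semi = np.where(rng.random(len(y_res)) < 0.7, -1, y_res)

clf = SelfTrainingClassifier(DecisionTreeClassifier(random_state=0))
clf.fit(X_res, y_semi)                         # iteratively self-labels confident predictions
print("labeled after self-training:", int((clf.transduction_ != -1).sum()))
```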

Improving Chest X-ray Image Classification via Integration of Self-Supervised Learning and Machine Learning Algorithms

  • Tri-Thuc Vo;Thanh-Nghi Do
    • Journal of information and communication convergence engineering
    • /
    • Vol. 22, No. 2
    • /
    • pp.165-171
    • /
    • 2024
  • In this study, we present a novel approach for enhancing chest X-ray image classification (normal, Covid-19, edema, mass nodules, and pneumothorax) by combining contrastive learning and machine learning algorithms. A vast amount of unlabeled data was leveraged to learn representations, improving data efficiency as a means of addressing the limited availability of labeled X-ray images. Our approach involves training classification algorithms on features extracted from a linearly fine-tuned Momentum Contrast (MoCo) model. The MoCo architecture with a ResNet34, ResNet50, or ResNet101 backbone is trained to learn features from unlabeled data. Instead of only fine-tuning the linear classifier layer on the MoCo-pretrained model, we propose training nonlinear classifiers as substitutes for softmax in deep networks. The empirical results show that while the linearly fine-tuned ImageNet-pretrained models achieved a highest accuracy of only 82.9%, and the linearly fine-tuned MoCo-pretrained models raised this to 84.8%, our proposed method offered a significant improvement, achieving the highest accuracy of 87.9%.
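
A compressed sketch of the pipeline above: freeze a pretrained encoder, extract features, and fit a nonlinear classifier on them. Here an ImageNet-pretrained torchvision ResNet50 stands in for the MoCo-pretrained encoder and an RBF-kernel SVM stands in for the paper's nonlinear classifier; both substitutions, and the random placeholder batch, are assumptions.

```python
# Sketch: features from a frozen ResNet backbone + a nonlinear classifier.
# The ImageNet weights are a stand-in for MoCo pretraining.
import torch
import torch.nn as nn
from sklearn.svm import SVC
from torchvision import models

backbone = models.resnet50(weights=models.ResNet50_Weights.DEFAULT)
backbone.fc = nn.Identity()                    # drop the linear head, keep 2048-d features
backbone.eval()

# placeholder batch standing in for preprocessed chest X-ray images
images = torch.randn(16, 3, 224, 224)
labels = torch.randint(0, 5, (16,))            # 5 classes, as in the study

with torch.no_grad():
    feats = backbone(images).numpy()

clf = SVC(kernel="rbf").fit(feats, labels.numpy())   # nonlinear classifier on features
print("train accuracy:", clf.score(feats, labels.numpy()))
```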

딥러닝 기반의 투명 렌즈 이상 탐지 알고리즘 성능 비교 및 적용 (Comparison and Application of Deep Learning-Based Anomaly Detection Algorithms for Transparent Lens Defects)

  • 김한비;서대호
    • 산업경영시스템학회지
    • /
    • Vol. 47, No. 1
    • /
    • pp.9-19
    • /
    • 2024
  • Deep learning-based computer vision anomaly detection algorithms are widely utilized in various fields. Especially in manufacturing, the difficulty of collecting abnormal data compared to normal data, and the challenge of defining all potential abnormalities in advance, have led to increasing demand for unsupervised learning methods that rely on normal data only. In this study, we conducted a comparative analysis of deep learning-based unsupervised anomaly detection algorithms for defining and detecting abnormalities that can occur when transparent contact lenses are immersed in a liquid solution. We validated the unsupervised learning algorithms used in this study on the existing anomaly detection benchmark dataset, MVTec AD, and then applied them to our task. The existing benchmark dataset primarily consists of solid objects, whereas our experiments compare unsupervised learning-based algorithms in judging the shape and presence of lenses submerged in liquid. Among the algorithms analyzed, EfficientAD showed an AUROC and F1-score of 0.97 in image-level tests. However, its F1-score decreased to 0.18 in pixel-level tests, making it difficult to localize where abnormalities occurred. Despite this, EfficientAD demonstrated excellent image-level performance in classifying normal and abnormal instances, suggesting that, with the collection of and training on large-scale data in real industrial settings, even better performance can be expected.
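
For reference, the image-level metrics reported above can be computed as follows from per-image anomaly scores; the simulated score distributions are an assumption, and EfficientAD itself is not reproduced here.

```python
# Sketch: image-level AUROC and F1 from per-image anomaly scores
# (simulated scores; any detector producing scores could be plugged in).
import numpy as np
from sklearn.metrics import f1_score, roc_auc_score

rng = np.random.default_rng(0)
y_true = np.r_[np.zeros(80, dtype=int), np.ones(20, dtype=int)]       # 0 = normal, 1 = defective lens
scores = np.r_[rng.normal(0.2, 0.10, 80), rng.normal(0.7, 0.15, 20)]  # detector anomaly scores

print("image-level AUROC:", round(float(roc_auc_score(y_true, scores)), 3))
# F1 needs a binary decision, so threshold the scores; sweeping a simple
# grid and keeping the best threshold is an illustrative choice
best_f1 = max(f1_score(y_true, (scores > t).astype(int))
              for t in np.linspace(0, 1, 101))
print("image-level F1:", round(float(best_f1), 3))
```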