• Title/Summary/Keyword: supervised training

Search Results: 313

Development of Semi-Supervised Deep Domain Adaptation Based Face Recognition Using Only a Single Training Sample (단일 훈련 샘플만을 활용하는 준-지도학습 심층 도메인 적응 기반 얼굴인식 기술 개발)

  • Kim, Kyeong Tae;Choi, Jae Young
    • Journal of Korea Multimedia Society
    • /
    • v.25 no.10
    • /
    • pp.1375-1385
    • /
    • 2022
  • In this paper, we propose a semi-supervised domain adaptation solution for practical face recognition (FR) scenarios in which only a single face image per target identity (to be recognized) is available during the training phase. The main goal of the proposed method is to reduce the discrepancy between the target-domain and source-domain face images, which ultimately improves FR performance. The proposed method is based on the Domain Adaptation Network (DAN), which uses an MMD loss function to reduce the discrepancy between domains. To train more effectively, we develop a novel loss-weighting strategy in which the MMD loss and the cross-entropy loss are combined with different weights according to the progress of each epoch during learning. The proposed weight adaptation focuses training on the source domain in the initial learning phase so that facial feature information such as eyes, nose, and mouth is learned first. After the initial learning is completed, the resulting feature information is used to train the deep network with target-domain images. To evaluate the effectiveness of the proposed method, FR performance was compared with a pretrained model trained only on CASIA-WebFace (source images) and a fine-tuned model trained only on the FERET gallery (target images) under the same FR scenarios. The experimental results showed that the proposed semi-supervised domain adaptation improves performance by 24.78% over the pretrained model and by 28.42% over the fine-tuned model. In addition, the proposed method outperformed other state-of-the-art domain adaptation approaches by 9.41%.
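A minimal sketch of the epoch-weighted combination of MMD and cross-entropy losses described in the abstract, assuming PyTorch and a single Gaussian-kernel MMD estimate; the function names and the linear weight schedule are illustrative assumptions, not the paper's exact formulation.

    import torch
    import torch.nn.functional as F

    def gaussian_mmd(source_feat, target_feat, sigma=1.0):
        # Biased (V-statistic) MMD^2 estimate with a single Gaussian kernel.
        def kernel(a, b):
            d2 = torch.cdist(a, b).pow(2)
            return torch.exp(-d2 / (2 * sigma ** 2))
        return (kernel(source_feat, source_feat).mean()
                + kernel(target_feat, target_feat).mean()
                - 2 * kernel(source_feat, target_feat).mean())

    def combined_loss(logits, labels, source_feat, target_feat, epoch, total_epochs):
        # Early epochs emphasize supervised cross-entropy on the source domain;
        # later epochs shift weight toward the domain-discrepancy (MMD) term.
        alpha = epoch / total_epochs          # grows from 0 to 1 (assumed schedule)
        ce = F.cross_entropy(logits, labels)
        mmd = gaussian_mmd(source_feat, target_feat)
        return (1 - alpha) * ce + alpha * mmd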

An Efficient Detection Method for Rail Surface Defect using Limited Label Data (한정된 레이블 데이터를 이용한 효율적인 철도 표면 결함 감지 방법)

  • Seokmin Han
    • The Journal of the Institute of Internet, Broadcasting and Communication
    • /
    • v.24 no.1
    • /
    • pp.83-88
    • /
    • 2024
  • In this research, we propose a semi-supervised learning based railroad surface defect detection method. A ResNet50 model pretrained on ImageNet was employed for training. A small portion of the data is randomly selected and labeled to train the ResNet50 model. The trained model is then used to predict the remaining unlabeled training data. Predictions exceeding a certain threshold are selected, sorted in descending order of confidence, and added to the training data, with pseudo-labels assigned from the class with the highest probability. An experiment was conducted to assess the overall classification performance as a function of the initial number of labeled data. The results showed an accuracy of up to 98% while using less than 10% of the overall training data as labeled data.
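An illustrative sketch of the confidence-thresholded pseudo-labeling step described above, assuming PyTorch/torchvision and a two-class (defect / no defect) head; the threshold value and class count are hypothetical.

    import torch
    from torchvision.models import resnet50

    model = resnet50(weights="IMAGENET1K_V1")
    model.fc = torch.nn.Linear(model.fc.in_features, 2)  # assumed: defect / no defect
    model.eval()

    def select_pseudo_labels(unlabeled_batch, threshold=0.95):
        # Predict the unlabeled images and keep only the most confident ones,
        # sorted in descending order of confidence, as new pseudo-labeled data.
        with torch.no_grad():
            probs = torch.softmax(model(unlabeled_batch), dim=1)
        conf, pseudo = probs.max(dim=1)
        keep = conf >= threshold
        order = torch.argsort(conf[keep], descending=True)
        return unlabeled_batch[keep][order], pseudo[keep][order]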

Automatic Extraction of Training Dataset Using Expectation Maximization Algorithm - for Automatic Supervised Classification of Road Networks (기대최대화 알고리즘을 활용한 도로노면 training 자료 자동추출에 관한 연구 - 감독분류를 통한 도로 네트워크의 자동추출을 위하여)

  • Han, You-Kyung;Choi, Jae-Wan;Lee, Jae-Bin;Yu, Ki-Yun;Kim, Yong-Il
    • Journal of the Korean Society of Surveying, Geodesy, Photogrammetry and Cartography
    • /
    • v.27 no.2
    • /
    • pp.289-297
    • /
    • 2009
  • In this paper, we propose a methodology to automatically extract a training dataset for the supervised classification of road networks. For preprocessing, we co-register the airborne photos, LIDAR data, and large-scale digital maps, and then create orthophotos and intensity images. By overlaying the large-scale digital maps onto the generated images, we can extract an initial training dataset for the supervised classification of road networks. However, this initial training information is distorted because errors propagate from the registration process and because road networks generally contain various objects such as asphalt, road marks, vegetation, and cars. Therefore, to generate training information for the road surface only, we apply the Expectation Maximization technique and extract the final training dataset of the road surface. For the accuracy test, we compare the extracted training dataset with manually extracted ones. Statistical tests confirm that the developed method is valid.
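A minimal sketch of the EM-based refinement step, assuming the candidate road pixels are modeled with a Gaussian mixture (scikit-learn's EM implementation) and that the dominant component corresponds to the road surface; the component count and selection rule are assumptions.

    import numpy as np
    from sklearn.mixture import GaussianMixture

    def refine_training_pixels(pixel_features, n_components=3):
        # pixel_features: (N, D) spectral/intensity values of candidate road pixels
        gmm = GaussianMixture(n_components=n_components, random_state=0)
        labels = gmm.fit_predict(pixel_features)
        road_component = np.bincount(labels).argmax()   # assume road pixels dominate
        return pixel_features[labels == road_component]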

Supervised Classification Systems for High Resolution Satellite Images (고해상도 위성영상을 위한 감독분류 시스템)

  • 전영준;김진일
    • Journal of KIISE:Computing Practices and Letters
    • /
    • v.9 no.3
    • /
    • pp.301-310
    • /
    • 2003
  • In this paper, we design and implement supervised classification systems for high-resolution satellite images. The systems support various interfaces and statistical summaries of the training samples so that the most effective training data can be selected. In addition, the modularized design allows new classification algorithms and satellite image formats to be added easily. The classifiers take into account the characteristics of the spectral bands of the selected training data and provide various supervised classification algorithms, including parallelepiped, minimum distance, Mahalanobis distance, maximum likelihood, and fuzzy theory. We used IKONOS images as input and verified the systems for the classification of high-resolution satellite images.
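A hedged sketch of one of the listed classifiers (Mahalanobis distance) applied per pixel, with per-class band statistics estimated from user-selected training samples; data structures and names are illustrative, not the system's actual interfaces.

    import numpy as np

    def fit_class_stats(train_pixels):
        # train_pixels: dict {class_name: (N_c, bands) array of training samples}
        return {c: (x.mean(axis=0), np.linalg.inv(np.cov(x, rowvar=False)))
                for c, x in train_pixels.items()}

    def classify_mahalanobis(pixel, stats):
        # Assign the pixel to the class with the smallest Mahalanobis distance.
        def dist(mean, inv_cov):
            d = pixel - mean
            return float(d @ inv_cov @ d)
        return min(stats, key=lambda c: dist(*stats[c]))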

Machine Learning Techniques for Speech Recognition using the Magnitude

  • Krishnan, C. Gopala;Robinson, Y. Harold;Chilamkurti, Naveen
    • Journal of Multimedia Information System
    • /
    • v.7 no.1
    • /
    • pp.33-40
    • /
    • 2020
  • Machine learning consists of supervised and unsupervised learning, of which supervised learning is used for speech recognition. Supervised learning is the data mining task of inferring a function from labeled training data. Speech recognition is a current trend that has gained focus over the decades, and most automation technologies use speech and speech recognition in various ways. This paper gives an overview of the major technological milestones in the elementary development of speech recognition and describes the methods developed at each stage using supervised learning. The project uses a DNN to recognize speech from magnitude features on large datasets.
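A rough sketch of one simple reading of the abstract: frame the waveform, take FFT magnitudes, and feed them to a small fully connected network. NumPy/PyTorch, the frame sizes, layer widths, and output classes are all assumptions for illustration.

    import numpy as np
    import torch.nn as nn

    def magnitude_frames(signal, frame_len=400, hop=160):
        # Split the waveform into overlapping frames and take each frame's FFT magnitude.
        frames = [signal[i:i + frame_len]
                  for i in range(0, len(signal) - frame_len, hop)]
        return np.abs(np.fft.rfft(np.stack(frames), axis=1))   # (n_frames, 201)

    dnn = nn.Sequential(            # small DNN over magnitude features
        nn.Linear(201, 256), nn.ReLU(),
        nn.Linear(256, 256), nn.ReLU(),
        nn.Linear(256, 40))         # e.g. 40 output classes (hypothetical)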

A Study on Training Data Selection Method for EEG Emotion Analysis using Semi-supervised Learning Algorithm (준 지도학습 알고리즘을 이용한 뇌파 감정 분석을 위한 학습데이터 선택 방법에 관한 연구)

  • Yun, Jong-Seob;Kim, Jin Heon
    • Journal of IKEEE
    • /
    • v.22 no.3
    • /
    • pp.816-821
    • /
    • 2018
  • Recently, machine learning algorithms based on artificial neural networks have come into wide use as classifiers in EEG research for emotion analysis and disease diagnosis. When a machine learning model is used to classify EEG data, if the training data are composed only of data with similar characteristics, classification performance may deteriorate when the model is applied to data from another group. In this paper, we propose a method that constructs the training data set by selecting data from several groups using a semi-supervised learning algorithm to alleviate this problem. We then compare the performance of two models: one trained on a data set consisting only of data with similar characteristics, and one trained on the data set constructed using the proposed method.
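A speculative sketch of selecting mixed-group training data with a semi-supervised algorithm, here scikit-learn's LabelSpreading; the abstract does not name the actual algorithm, so the choice of method, kernel, and confidence cutoff are assumptions.

    import numpy as np
    from sklearn.semi_supervised import LabelSpreading

    def select_training_set(features, labels, confidence=0.9):
        # labels: -1 for unlabeled EEG segments, a class id otherwise
        model = LabelSpreading(kernel="rbf").fit(features, labels)
        proba = model.label_distributions_.max(axis=1)
        # Keep the original labeled data plus confidently propagated samples
        # drawn from the other groups.
        keep = (labels != -1) | (proba >= confidence)
        return features[keep], model.transduction_[keep]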

Named Entity Recognition Using Distant Supervision and Active Bagging (원거리 감독과 능동 배깅을 이용한 개체명 인식)

  • Lee, Seong-hee;Song, Yeong-kil;Kim, Hark-soo
    • Journal of KIISE
    • /
    • v.43 no.2
    • /
    • pp.269-274
    • /
    • 2016
  • Named entity recognition is the process of extracting named entities from sentences and determining their categories. Previous studies on named entity recognition have primarily relied on supervised learning, which requires a large training corpus manually annotated with named entity categories; constructing such a corpus by hand is a time-consuming and labor-intensive job. We propose a semi-supervised learning method to minimize the cost of training corpus construction and to rapidly enhance the performance of named entity recognition. The proposed method uses distant supervision to construct the initial training corpus. It then effectively removes noisy sentences from the initial training corpus through an active bagging method, an ensemble method combining bagging and active learning. In the experiments, the proposed method improved the F1-score of named entity recognition from 67.36% to 76.42% after 15 rounds of active bagging.
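An illustrative sketch of the noise-filtering idea: a bagging ensemble votes on each distantly supervised sentence, and sentences whose ensemble prediction disagrees with, or is uncertain about, the automatic label become candidates for relabeling or removal. The feature representation, base learner, and margin are assumptions, not the paper's exact setup.

    from sklearn.ensemble import BaggingClassifier
    from sklearn.linear_model import LogisticRegression

    def flag_noisy_sentences(X, distant_labels, margin=0.6):
        # Bagged ensemble trained on the distantly supervised labels.
        ensemble = BaggingClassifier(LogisticRegression(max_iter=1000),
                                     n_estimators=10).fit(X, distant_labels)
        pred = ensemble.predict(X)
        conf = ensemble.predict_proba(X).max(axis=1)
        # Disagreeing or low-confidence sentences are flagged for the active step.
        return (pred != distant_labels) | (conf < margin)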

A study on the performance improvement of learning based on consistency regularization and unlabeled data augmentation (일치성규칙과 목표값이 없는 데이터 증대를 이용하는 학습의 성능 향상 방법에 관한 연구)

  • Kim, Hyunwoong;Seok, Kyungha
    • The Korean Journal of Applied Statistics
    • /
    • v.34 no.2
    • /
    • pp.167-175
    • /
    • 2021
  • Semi-supervised learning uses both labeled data and unlabeled data. Recently, consistency regularization has become very popular in semi-supervised learning. Unsupervised data augmentation (UDA), which uses unlabeled data augmentation, is also based on consistency regularization. In UDA, the Kullback-Leibler divergence is used for the loss on unlabeled data and cross-entropy for the loss on labeled data. UDA also uses techniques such as training signal annealing (TSA) and confidence-based masking to improve performance. In this study, we propose using the Jensen-Shannon divergence instead of the Kullback-Leibler divergence, applying reverse-TSA, and removing confidence-based masking. Through experiments, we show that the proposed technique yields better performance than UDA.
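A minimal sketch of the proposed consistency term: the Jensen-Shannon divergence between the model's predictions on an unlabeled example and on its augmented view, replacing UDA's KL term. Written for PyTorch; the function name and numerical clamping are assumptions.

    import torch
    import torch.nn.functional as F

    def js_consistency(logits_orig, logits_aug):
        # JS(p, q) = 0.5*KL(p || m) + 0.5*KL(q || m), with m the mixture of p and q.
        p = F.softmax(logits_orig, dim=1)
        q = F.softmax(logits_aug, dim=1)
        m = 0.5 * (p + q)
        kl = lambda a, b: (a * (a.clamp_min(1e-8).log()
                                - b.clamp_min(1e-8).log())).sum(dim=1)
        return 0.5 * (kl(p, m) + kl(q, m)).mean()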

Development of a Steel Plate Surface Defect Detection System Based on Small Data Deep Learning (소량 데이터 딥러닝 기반 강판 표면 결함 검출 시스템 개발)

  • Gaybulayev, Abdulaziz;Lee, Na-Hyeon;Lee, Ki-Hwan;Kim, Tae-Hyong
    • IEMEK Journal of Embedded Systems and Applications
    • /
    • v.17 no.3
    • /
    • pp.129-138
    • /
    • 2022
  • Collecting and labeling sufficient training data, which is essential for deep learning-based visual inspection, is difficult for manufacturers because it is very expensive. This paper presents a steel plate surface defect detection system with industrial-grade detection performance, trained on a small set of steel plate surface images consisting of labeled and unlabeled data. To overcome the lack of training data, we propose two data augmentation techniques: program-based augmentation, which generates defect images geometrically, and generative model-based augmentation, which learns the distribution of the labeled data. We also propose a four-step semi-supervised learning procedure using pseudo-labels and consistency training with fixed-size augmentation in order to exploit the unlabeled data. The proposed technique achieved about 99% defect detection performance for four defect types using only 100 real images, including both labeled and unlabeled data.
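A toy illustration of the "program-based" geometric augmentation idea: draw a random scratch-like line onto a clean plate image to create a synthetic labeled defect sample. The drawing routine, defect shape, and parameters are purely illustrative assumptions and not the paper's generator.

    import numpy as np

    def synthesize_scratch(clean_img, length=60, thickness=2, rng=np.random):
        # clean_img: 2-D grayscale plate image larger than `length` in both axes
        img = clean_img.copy()
        h, w = img.shape[:2]
        x0, y0 = rng.randint(0, w - length), rng.randint(0, h - length)
        for t in range(length):                       # diagonal scratch
            y, x = y0 + t, x0 + t
            img[max(y - thickness, 0):y + thickness, x] = img.max()  # bright mark
        return img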

Supervised Learning-Based Collaborative Filtering Using Market Basket Data for the Cold-Start Problem

  • Hwang, Wook-Yeon;Jun, Chi-Hyuck
    • Industrial Engineering and Management Systems
    • /
    • v.13 no.4
    • /
    • pp.421-431
    • /
    • 2014
  • Market basket data in the form of a binary user-item matrix or a binary item-user matrix can be modelled as a binary classification problem. The binary logistic regression approach tackles this binary classification problem with principal components as predictor variables. If users or items are sparse in the training data, the binary classification problem can be regarded as a cold-start problem, and the binary logistic regression approach may not work well if the principal components are inefficient for it. Assuming that market basket data can also be treated as a special regression problem whose response is either 0 or 1, we propose three supervised learning approaches to tackle the cold-start problem: random forest regression, random forest classification, and elastic net, and we compare their performance in a variety of experimental settings. The experimental results show that the proposed supervised learning approaches outperform the conventional approaches.
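A sketch of the three compared learners on a binary basket matrix: for one target item, the remaining items' purchase indicators serve as predictors and the target item's 0/1 indicator is the response. Uses scikit-learn; the hyperparameters and the per-item framing details are assumptions.

    import numpy as np
    from sklearn.ensemble import RandomForestRegressor, RandomForestClassifier
    from sklearn.linear_model import ElasticNet

    def fit_for_item(basket, item_idx):
        # basket: (n_users, n_items) binary purchase matrix
        X = np.delete(basket, item_idx, axis=1)   # all other items as predictors
        y = basket[:, item_idx]                   # target item's 0/1 response
        return {
            "rf_regression": RandomForestRegressor(n_estimators=100).fit(X, y),
            "rf_classification": RandomForestClassifier(n_estimators=100).fit(X, y),
            "elastic_net": ElasticNet(alpha=0.1, l1_ratio=0.5).fit(X, y),
        }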