• Title/Summary/Keyword: Self-Supervised Learning

Search Results: 98

Semi-supervised Model for Fault Prediction using Tree Methods (트리 기법을 사용하는 세미감독형 결함 예측 모델)

  • Hong, Euyseok
    • The Journal of the Institute of Internet, Broadcasting and Communication
    • /
    • v.20 no.4
    • /
    • pp.107-113
    • /
    • 2020
  • A number of studies have been conducted on predicting software faults, but most of them use supervised models trained on labeled data. Very few studies have examined unsupervised models that use only unlabeled data, or semi-supervised models that combine abundant unlabeled data with a small amount of labeled data. In this paper, we build new semi-supervised models that apply tree algorithms within the self-training technique. In the model performance evaluation experiments, the newly created tree models performed better than the existing models, and CollectiveWoods in particular outperformed the other models. It also showed very stable performance even when very little labeled data was available.
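
The self-training loop referred to in this abstract is straightforward to illustrate. Below is a minimal sketch, assuming scikit-learn and using RandomForestClassifier as a stand-in tree-based base learner; the CollectiveWoods model mentioned above is not reproduced here.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def self_train(X_labeled, y_labeled, X_unlabeled, threshold=0.9, max_rounds=10):
    """Iteratively add high-confidence pseudo-labels to the labeled pool."""
    X_l, y_l = np.array(X_labeled), np.array(y_labeled)
    X_u = np.array(X_unlabeled)
    model = RandomForestClassifier(n_estimators=100, random_state=0)
    for _ in range(max_rounds):
        model.fit(X_l, y_l)
        if len(X_u) == 0:
            break
        proba = model.predict_proba(X_u)
        confident = proba.max(axis=1) >= threshold
        if not confident.any():
            break                                   # nothing confident enough; stop early
        pseudo_labels = model.classes_[proba[confident].argmax(axis=1)]
        X_l = np.vstack([X_l, X_u[confident]])      # grow the labeled pool
        y_l = np.concatenate([y_l, pseudo_labels])
        X_u = X_u[~confident]
    return model
```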

Self-Supervised Long-Short Term Memory Network for Solving Complex Job Shop Scheduling Problem

  • Shao, Xiaorui;Kim, Chang Soo
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.15 no.8
    • /
    • pp.2993-3010
    • /
    • 2021
  • The job shop scheduling problem (JSSP) plays a critical role in smart manufacturing; an effective JSSP scheduler can save time and increase productivity. Conventional methods are very time-consuming and cannot handle complicated JSSP instances because they rely on a single optimization algorithm. This paper proposes an effective deep learning-based scheduler, named self-supervised long short-term memory (SS-LSTM), to handle complex JSSP accurately. First, an optimal method is used to generate sufficient training samples from small-scale JSSP instances. SS-LSTM is then applied to extract rich feature representations from the generated training samples and to decide the next action. In the proposed SS-LSTM, two channels are employed to reflect the full production status. Specifically, the detailed-level channel records 18 detailed pieces of product information, while the system-level channel reflects the type of overall system state identified by the k-means algorithm. Moreover, a self-supervised mechanism with an LSTM autoencoder is adopted to maintain high feature extraction capacity while ensuring reliable feature representation. The authors implemented, trained, and compared the proposed method with other leading learning-based methods on several complicated JSSP instances. The experimental results confirm the effectiveness and superiority of the proposed method for solving complex JSSP instances in terms of makespan.
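
The self-supervised component named in this abstract, an LSTM autoencoder that reconstructs its input sequence, can be sketched as follows. This is a minimal PyTorch illustration; the hidden size and the 18-feature input are assumptions based on the abstract's description, not the paper's exact configuration.

```python
import torch
import torch.nn as nn

class LSTMAutoencoder(nn.Module):
    def __init__(self, n_features=18, hidden_size=64):
        super().__init__()
        self.encoder = nn.LSTM(n_features, hidden_size, batch_first=True)
        self.decoder = nn.LSTM(hidden_size, hidden_size, batch_first=True)
        self.output = nn.Linear(hidden_size, n_features)

    def forward(self, x):                       # x: (batch, seq_len, n_features)
        _, (h, _) = self.encoder(x)             # h: (num_layers, batch, hidden_size)
        z = h[-1].unsqueeze(1).repeat(1, x.size(1), 1)
        out, _ = self.decoder(z)                # decode from the repeated summary state
        return self.output(out)                 # reconstruct the input sequence

# Self-supervised training step: reconstruct the input, no labels required.
model = LSTMAutoencoder()
x = torch.randn(8, 20, 18)                      # toy batch of production-state sequences
loss = nn.MSELoss()(model(x), x)
loss.backward()
```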

ART1-based Fuzzy Supervised Learning Algorithm (ART-1 기반 퍼지 지도 학습 알고리즘)

  • Kim Kwang-Baek;Cho Jae-Hyun
    • Journal of the Korea Institute of Information and Communication Engineering
    • /
    • v.9 no.4
    • /
    • pp.883-889
    • /
    • 2005
  • The error backpropagation algorithm for multilayer perceptrons may fall into local minima because of an insufficient number of hidden-layer nodes, an inadequate momentum setting, and poor initial weights. In this paper, we propose an ART-1 based fuzzy supervised learning algorithm that combines ART-1 with a fuzzy single-layer supervised learning algorithm. The proposed algorithm uses a self-generation method: ART-1 is applied to create nodes from the input layer to the hidden layer, and a winner-take-all method, which modifies stored patterns according to specific patterns, is applied to adjust the weights. We applied the proposed learning method to the problem of recognizing resident registration numbers on resident registration cards. Our experimental results showed that the possibility of falling into local minima was decreased, and that the learning speed and paralysis were improved compared with the conventional error backpropagation algorithm.
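
For readers unfamiliar with the ART-1 mechanics mentioned above, the following is a heavily simplified sketch of winner-take-all category selection with a vigilance test for binary patterns. It only illustrates how nodes are created and templates updated; it is not the paper's combined ART-1 plus fuzzy single-layer algorithm.

```python
import numpy as np

def art1_cluster(patterns, vigilance=0.7):
    """Assign binary patterns to prototype nodes, creating a new node whenever
    no existing prototype passes the vigilance test."""
    prototypes = []                  # stored binary templates (one per category node)
    assignments = []
    for x in patterns:
        x = np.asarray(x, dtype=int)
        winner = None
        # Winner-take-all: try prototypes in decreasing order of overlap with x.
        order = sorted(range(len(prototypes)),
                       key=lambda j: -np.sum(prototypes[j] & x))
        for j in order:
            match = np.sum(prototypes[j] & x) / max(np.sum(x), 1)
            if match >= vigilance:                 # vigilance test passed (resonance)
                prototypes[j] = prototypes[j] & x  # shrink the stored template
                winner = j
                break
        if winner is None:                         # no resonance: create a new node
            prototypes.append(x.copy())
            winner = len(prototypes) - 1
        assignments.append(winner)
    return prototypes, assignments
```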

Improving Chest X-ray Image Classification via Integration of Self-Supervised Learning and Machine Learning Algorithms

  • Tri-Thuc Vo;Thanh-Nghi Do
    • Journal of information and communication convergence engineering
    • /
    • v.22 no.2
    • /
    • pp.165-171
    • /
    • 2024
  • In this study, we present a novel approach for enhancing chest X-ray image classification (normal, Covid-19, edema, mass nodules, and pneumothorax) by combining contrastive learning and machine learning algorithms. A vast amount of unlabeled data was leveraged to learn representations, improving data efficiency and addressing the limited availability of labeled X-ray images. Our approach trains classification algorithms on features extracted from a linearly fine-tuned Momentum Contrast (MoCo) model. The MoCo architecture, with a ResNet34, ResNet50, or ResNet101 backbone, is trained to learn features from unlabeled data. Instead of only fine-tuning the linear classifier layer on the MoCo-pretrained model, we propose training nonlinear classifiers as substitutes for softmax in deep networks. The empirical results show that while the linearly fine-tuned ImageNet-pretrained models achieved a highest accuracy of only 82.9%, and the linearly fine-tuned MoCo-pretrained models a higher best accuracy of 84.8%, our proposed method offered a significant improvement and achieved the highest accuracy of 87.9%.
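
The pipeline described above, frozen backbone features followed by a nonlinear classifier, can be sketched as follows, assuming PyTorch/torchvision and scikit-learn. The RBF-kernel SVM, the input shapes, and the random data are illustrative assumptions; loading the actual MoCo checkpoint is omitted.

```python
import torch
import torchvision.models as models
from sklearn.svm import SVC

# ResNet-50 backbone with the classification head removed -> 2048-d features.
# In practice the MoCo-pretrained weights would be loaded into `backbone` here.
backbone = models.resnet50(weights=None)
backbone.fc = torch.nn.Identity()
backbone.eval()

@torch.no_grad()
def extract_features(images):                   # images: (N, 3, 224, 224) tensor
    return backbone(images).numpy()

# Toy example: fit an RBF-kernel SVM (one possible nonlinear classifier).
images = torch.randn(16, 3, 224, 224)           # placeholder for chest X-ray batches
labels = torch.randint(0, 5, (16,)).numpy()     # 5 classes as in the abstract
clf = SVC(kernel="rbf")
clf.fit(extract_features(images), labels)
```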

Self-Supervised Document Representation Method

  • Yun, Yeoil;Kim, Namgyu
    • Journal of the Korea Society of Computer and Information
    • /
    • v.25 no.5
    • /
    • pp.187-197
    • /
    • 2020
  • Recently, various text-embedding methods based on deep learning algorithms have been proposed. In particular, pre-trained language models, which are trained on tremendous amounts of text data, are widely used to embed new text. However, traditional pre-trained language models have a limitation: it is hard for them to capture the unique context of new text when the text contains too many tokens. In this paper, we propose a self-supervised fine-tuning method for pre-trained language models that infers vectors for long texts. We applied the method to news articles, classified them into categories, and compared the classification accuracy with that of traditional models. The results confirmed that the vectors generated by the proposed model express the inherent characteristics of a document more accurately than those generated by the traditional models.
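
One common way to obtain a vector for a document that exceeds a pretrained model's token limit is to encode fixed-size chunks and pool them; a minimal sketch is shown below. The model name, chunk size, and mean pooling are assumptions for illustration, not the fine-tuning scheme proposed in the paper.

```python
import torch
from transformers import AutoTokenizer, AutoModel

name = "bert-base-multilingual-cased"            # placeholder pretrained model
tokenizer = AutoTokenizer.from_pretrained(name)
model = AutoModel.from_pretrained(name).eval()

@torch.no_grad()
def embed_long_text(text, chunk_size=256):
    """Encode a long document chunk by chunk and average the chunk vectors."""
    ids = tokenizer(text, add_special_tokens=False)["input_ids"]
    chunks = [ids[i:i + chunk_size] for i in range(0, len(ids), chunk_size)]
    vectors = []
    for chunk in chunks:
        inputs = torch.tensor([tokenizer.build_inputs_with_special_tokens(chunk)])
        hidden = model(input_ids=inputs).last_hidden_state   # (1, seq, dim)
        vectors.append(hidden.mean(dim=1))                    # mean-pool tokens
    return torch.cat(vectors).mean(dim=0)                     # pool over chunks
```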

The Evaluation of Denoising PET Image Using Self Supervised Noise2Void Learning Training: A Phantom Study (자기 지도 학습훈련 기반의 Noise2Void 네트워크를 이용한 PET 영상의 잡음 제거 평가: 팬텀 실험)

  • Yoon, Seokhwan;Park, Chanrok
    • Journal of radiological science and technology
    • /
    • v.44 no.6
    • /
    • pp.655-661
    • /
    • 2021
  • Positron emission tomography (PET) images are affected by acquisition time; short acquisition times result in low gamma counts, which degrade image quality through statistical noise. Noise2Void (N2V) is a self-supervised denoising model based on a convolutional neural network (CNN). The purpose of this study is to evaluate the denoising performance of N2V for PET images acquired with a short acquisition time. A phantom was scanned in list mode for 10 min using a Biograph mCT40 PET/CT scanner (Siemens Healthcare, Erlangen, Germany). We compared PET images of a NEMA image-quality phantom for the standard acquisition time (10 min), a short acquisition time (2 min), and a simulated PET image (S2 min). To evaluate the performance of N2V, the peak signal-to-noise ratio (PSNR), normalized root mean square error (NRMSE), structural similarity index (SSIM), and radioactivity recovery coefficient (RC) were used. The PSNR, NRMSE, and SSIM for the 2 min and S2 min PET images, compared with the 10 min PET image, were 30.983 and 33.936, 9.954 and 7.609, and 0.916 and 0.934, respectively. The RC for the spheres in the S2 min PET image also met the European Association of Nuclear Medicine Research Ltd. (EARL) FDG PET accreditation criteria. We confirmed that the S2 min PET image generated by N2V deep learning showed improved results compared with the 2 min PET image, and on visual analysis the 10 min and S2 min PET images were comparable. In conclusion, the image quality of noisy PET images caused by short acquisition times can be improved by the N2V denoising network without underestimation of radioactivity.
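
The blind-spot training scheme behind Noise2Void can be sketched as follows: random pixels are replaced with a neighbour's value, and the network is trained to predict the original noisy values only at those positions. This is a minimal PyTorch illustration with assumed shapes and a toy CNN, not the study's network or data.

```python
import torch
import torch.nn as nn

def n2v_mask(img, n_points=64):
    """img: (1, 1, H, W) noisy image. Replace random pixels with a random
    neighbour's value and return the masked image plus the mask."""
    masked = img.clone()
    mask = torch.zeros_like(img, dtype=torch.bool)
    _, _, H, W = img.shape
    ys = torch.randint(1, H - 1, (n_points,)).tolist()
    xs = torch.randint(1, W - 1, (n_points,)).tolist()
    for y, x in zip(ys, xs):
        while True:                                 # pick a non-zero neighbour offset
            dy, dx = torch.randint(-1, 2, (2,)).tolist()
            if (dy, dx) != (0, 0):
                break
        masked[0, 0, y, x] = img[0, 0, y + dy, x + dx]
        mask[0, 0, y, x] = True
    return masked, mask

# Tiny CNN standing in for the denoising network.
net = nn.Sequential(nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
                    nn.Conv2d(16, 1, 3, padding=1))
noisy = torch.rand(1, 1, 64, 64)                    # stand-in for a noisy PET slice
masked, mask = n2v_mask(noisy)
pred = net(masked)
loss = ((pred - noisy)[mask] ** 2).mean()           # loss only on the masked pixels
loss.backward()
```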

Recognition of the Passport by Using Self-Generating Supervised Learning Algorithm (자가 생성 지도 학습 알고리즘을 이용한 여권 인식)

  • Kim, Kyoung-Hwa;Jung, Sung-Ye;Nam, Mi-Young;Kim, Kwang-Baek
    • Annual Conference of KIPS
    • /
    • 2001.10a
    • /
    • pp.567-570
    • /
    • 2001
  • Currently, when a passport is presented, an immigration officer inspects it visually and manually enters its information to check it against the passport database. In this paper, we propose a method that can recognize passports automatically to solve this problem. A passport carries a great deal of information about its holder; from the passport image, the code region and the individual code characters are extracted using a histogram method and the Sobel operator, and a new Self-Generating Supervised Learning Algorithm is proposed and applied to passport recognition. In experiments on 10 passport images, all code character regions were extracted and recognized.

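The code-region extraction step described above (projection histograms plus the Sobel operator) can be sketched as follows, assuming OpenCV and NumPy. The thresholds are illustrative guesses, and the paper's self-generating supervised network for character recognition is not reproduced.

```python
import cv2
import numpy as np

def locate_code_band(gray):
    """gray: 2-D uint8 passport image. Return (row_start, row_end) of the band
    with the strongest Sobel edge response (the machine-readable code lines)."""
    edges = cv2.Sobel(gray, cv2.CV_64F, 1, 0, ksize=3)       # vertical-edge response
    profile = np.abs(edges).sum(axis=1)                       # horizontal projection
    rows = np.where(profile > 0.5 * profile.max())[0]         # illustrative threshold
    return rows.min(), rows.max()

def split_characters(band_binary, min_cols=1):
    """band_binary: binarized (0/255) code band. Split it into character boxes
    using the vertical projection histogram."""
    cols = (band_binary > 0).sum(axis=0)
    boxes, in_char, start = [], False, 0
    for i, c in enumerate(cols):
        if c >= min_cols and not in_char:
            in_char, start = True, i
        elif c < min_cols and in_char:
            in_char = False
            boxes.append((start, i))
    return boxes
```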

Face Super Resolution using Self-Supervised Learning (자기 지도 학습을 통한 고해상도 얼굴 영상 복원)

  • Jo, Byung-Ho;Park, In Kyu
    • Proceedings of the Korean Society of Broadcast Engineers Conference
    • /
    • 2020.07a
    • /
    • pp.724-726
    • /
    • 2020
  • This paper proposes a technique that increases the spatial resolution of an input face image by a factor of 4 using a GAN and self-supervised learning. The proposed technique uses a generator and a discriminator based on a modified StarGAN v2 architecture and performs self-supervised learning so that a high-resolution image is restored through a training process that relies only on the low-resolution input image. It overcomes the drawback of supervised learning, which minimizes a loss between the restored image and a ground-truth high-resolution image, by learning the features present within the input image alone and restoring a high-resolution face image from them. Its superiority is verified through a comparison with bicubic interpolation.

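The paper's model builds on a modified StarGAN v2 generator and discriminator, which is not reproduced here. As a loose, generic illustration of self-supervision from a single low-resolution input, one common zero-shot-style trick is to downscale the input further and learn to undo that downscaling; the toy network and scales below are assumptions, not the authors' method.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TinySR(nn.Module):
    """Toy x4 upscaler: bicubic upsampling plus a learned residual refinement."""
    def __init__(self, scale=4):
        super().__init__()
        self.scale = scale
        self.body = nn.Sequential(nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
                                  nn.Conv2d(32, 3, 3, padding=1))

    def forward(self, x):
        up = F.interpolate(x, scale_factor=self.scale, mode="bicubic",
                           align_corners=False)
        return up + self.body(up)

lr = torch.rand(1, 3, 64, 64)              # the only available (low-resolution) face
lr_down = F.interpolate(lr, scale_factor=0.25, mode="bicubic", align_corners=False)
net = TinySR(scale=4)
loss = F.l1_loss(net(lr_down), lr)         # self-supervision: reconstruct the input itself
loss.backward()
# After such training, net(lr) produces a 4x enlarged face image.
```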

Self-supervised Learning Method using Heterogeneous Mass Corpus for Sentence Embedding Model (이종의 말뭉치를 활용한 자기 지도 문장 임베딩 학습 방법)

  • Kim, Sung-Ju;Suh, Soo-Bin;Park, Jin-Seong;Park, Sung-Hyun;Jeon, Dong-Hyeon;Kim, Seon-Hoon;Kim, Kyung-Duk;Kang, In-Ho
    • Annual Conference on Human and Language Technology
    • /
    • 2020.10a
    • /
    • pp.32-36
    • /
    • 2020
  • To build a sentence encoder that embeds the meaning of sentences well, various methods based on unsupervised and supervised learning have been studied. Supervised approaches have the limitation that it is difficult to construct a sufficient amount of labeled data. Unsupervised approaches so far, on the other hand, have been restricted to a single type of corpus and have framed the problem as generating or predicting the sentence that follows the current input sentence. This paper proposes a self-supervised learning method that uses large heterogeneous corpora with diverse structures, such as knowledge Q&A data and search click logs, in addition to document-style corpora such as Wikipedia, news, and encyclopedias. When a self-supervised learning task suited to each type of corpus was designed and trained, the unsupervised model evaluation on the KorSTS dataset showed a performance improvement of about 7 points over the baseline model.

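As a loose illustration of the kind of self-supervised objective that such naturally co-occurring text pairs allow (adjacent sentences in documents, query and clicked title from logs, question and answer from Q&A data), here is a minimal in-batch contrastive loss. The pairing rule, encoder, and temperature are assumptions for illustration, not the paper's task design.

```python
import torch
import torch.nn.functional as F

def in_batch_contrastive_loss(anchors, positives, temperature=0.05):
    """anchors, positives: (batch, dim) embeddings where row i of each tensor
    comes from one naturally co-occurring text pair."""
    a = F.normalize(anchors, dim=1)
    p = F.normalize(positives, dim=1)
    logits = a @ p.t() / temperature          # (batch, batch) similarity matrix
    targets = torch.arange(a.size(0))         # diagonal entries are the positives
    return F.cross_entropy(logits, targets)

# Toy usage with random vectors standing in for sentence-encoder outputs.
loss = in_batch_contrastive_loss(torch.randn(8, 256), torch.randn(8, 256))
```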

Self-Supervised Spatiotemporal Learning For Video Using Variable Rotate Angle And Speed Prediction (비디오에서의 다양한 회전 각도와 회전 속도를 사용한 시 공간 자기 지도학습)

  • Kim, Taehoon;Hwang, Wonjun
    • Proceedings of the Korean Society of Broadcast Engineers Conference
    • /
    • 2020.07a
    • /
    • pp.732-735
    • /
    • 2020
  • Existing supervised learning methods perform well, but they require both video data and ground-truth labels for training, and labeling such data manually is a problem that costs a great deal of time and money. Among the various ways to address this, we studied applying rotation, one of the self-supervised learning pretext tasks, to video data. This work proposes two methods. First, instead of simply rotating the whole input video, each frame of the input video is rotated at a constant speed as time progresses, and the model classifies the rotation among four angles [0, 90, 180, 270]. Second, as the video frames change over time, each frame is rotated by a fixed angle, and the model classifies the rotation speed among four values [1x, 0.5x, 0.25x, 0.125x]. After training the network with these proposed pretext tasks, the learned model is fine-tuned and evaluated on video classification.

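The two pretext tasks described above can be illustrated by how their training clips and labels might be generated: rotate each frame progressively over time and ask the network for either the angle class or the speed class. The angle schedule below is an assumption based on the description, not the authors' exact implementation.

```python
import torch
import torchvision.transforms.functional as TF

ANGLES = [0, 90, 180, 270]           # pretext task 1: classify the rotation angle
SPEEDS = [1.0, 0.5, 0.25, 0.125]     # pretext task 2: classify the rotation speed

def make_rotation_sample(clip, angle_idx, speed_idx):
    """clip: (T, C, H, W) video tensor. Rotate frame t by angle * speed * t and
    return the rotated clip with its two pretext labels."""
    angle, speed = ANGLES[angle_idx], SPEEDS[speed_idx]
    frames = [TF.rotate(frame, angle * speed * t) for t, frame in enumerate(clip)]
    return torch.stack(frames), angle_idx, speed_idx

clip = torch.rand(16, 3, 112, 112)   # toy 16-frame clip
rotated, angle_label, speed_label = make_rotation_sample(clip, angle_idx=1, speed_idx=0)
```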