• Title/Summary/Keyword: Supervised learning methods


Characteristics on Inconsistency Pattern Modeling as Hybrid Data Mining Techniques (혼합 데이터 마이닝 기법인 불일치 패턴 모델의 특성 연구)

  • Hur, Joon;Kim, Jong-Woo
    • Journal of Information Technology Applications and Management / v.15 no.1 / pp.225-242 / 2008
  • IPM (Inconsistency Pattern Modeling) is a hybrid supervised learning technique that uses the inconsistency pattern of input variables in mining data sets. The IPM tries to improve prediction accuracy by combining two or more different supervised learning methods. Previous related studies have shown that the IPM was superior both to the single use of existing supervised learning methods such as neural networks, decision tree induction, and logistic regression, and to existing combined-model methods such as Bagging, Boosting, and Stacking. The objective of this paper is to explore the characteristics of the IPM. To understand these characteristics, three experiments were performed. In these experiments, large performance improvements appear when the prediction inconsistency ratio between two different supervised learning techniques is high and when the distance among supervised learning methods on an MDS (Multi-Dimensional Scaling) map is long.

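A minimal sketch (not the authors' implementation) of the quantity the IPM builds on: the prediction inconsistency ratio between two supervised learners. The dataset and model choices below are illustrative assumptions.

```python
# Hedged sketch: measure how often two supervised learners disagree on a
# held-out set; IPM-style combination is motivated when this ratio is high.
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.tree import DecisionTreeClassifier

X, y = load_breast_cancer(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

tree = DecisionTreeClassifier(random_state=0).fit(X_tr, y_tr)
logit = make_pipeline(StandardScaler(),
                      LogisticRegression(max_iter=1000)).fit(X_tr, y_tr)

# Fraction of test instances on which the two learners disagree.
inconsistency_ratio = np.mean(tree.predict(X_te) != logit.predict(X_te))
print(f"prediction inconsistency ratio: {inconsistency_ratio:.3f}")
```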

Performance Comparison Analysis of AI Supervised Learning Methods of Tensorflow and Scikit-Learn in the Writing Digit Data (필기숫자 데이터에 대한 텐서플로우와 사이킷런의 인공지능 지도학습 방식의 성능비교 분석)

  • Jo, Jun-Mo
    • The Journal of the Korea institute of electronic communication sciences / v.14 no.4 / pp.701-706 / 2019
  • The advent of AI (Artificial Intelligence) has been applied to many industrial and general applications and is having an impact on our lives these days. Various types of machine learning methods are supported in this field. In supervised machine learning, features and targets are given as input to the learning process. There are many supervised learning methods, and their performance varies depending on the characteristics and state of the big data given as input. Therefore, in this paper, in order to compare the performance of various supervised learning methods on a specific big data set, the supervised learning methods supported in Tensorflow and Scikit-Learn are simulated and analyzed in the Jupyter Notebook environment with Python.
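In the spirit of this comparison, a minimal Scikit-Learn sketch is shown below; it uses the small built-in digits set and an illustrative selection of learners, not the author's exact data or model list.

```python
# Hedged sketch: compare several supervised learners on handwritten digits.
from sklearn.datasets import load_digits
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.svm import SVC

X, y = load_digits(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

models = {
    "logistic regression": LogisticRegression(max_iter=5000),
    "SVM (RBF kernel)": SVC(),
    "random forest": RandomForestClassifier(random_state=0),
    "MLP": MLPClassifier(max_iter=1000, random_state=0),
}
for name, model in models.items():
    accuracy = model.fit(X_tr, y_tr).score(X_te, y_te)
    print(f"{name}: test accuracy = {accuracy:.3f}")
```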

Deep Learning Based Monocular Depth Estimation: Survey

  • Lee, Chungkeun;Shim, Dongseok;Kim, H. Jin
    • Journal of Positioning, Navigation, and Timing / v.10 no.4 / pp.297-305 / 2021
  • Monocular depth estimation helps a robot understand its surrounding environment in 3D. In particular, deep-learning-based monocular depth estimation has been widely researched because it may overcome the scale-ambiguity problem, which is a main issue in classical methods. These learning-based methods can be divided into three categories: supervised learning, unsupervised learning, and semi-supervised learning. Supervised learning trains the network from dense ground-truth depth information, unsupervised learning trains it from image sequences, and semi-supervised learning trains it from stereo images and sparse ground-truth depth. We describe the basics of each method and then explain recent research efforts to enhance depth estimation performance.
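As background for the supervised branch, a common choice is a masked L1 loss against dense ground-truth depth; the sketch below is a generic NumPy illustration, not code from any surveyed paper.

```python
# Hedged sketch: mean absolute depth error over pixels with valid ground truth.
import numpy as np

def supervised_depth_loss(pred_depth, gt_depth, valid_mask):
    """L1 depth loss restricted to pixels where ground truth exists."""
    return np.abs(pred_depth - gt_depth)[valid_mask].mean()

# Toy 4x4 "image" with one missing ground-truth pixel.
pred = np.full((4, 4), 2.0)
gt = np.full((4, 4), 2.5)
mask = np.ones((4, 4), dtype=bool)
mask[0, 0] = False  # missing/sparse ground truth is simply excluded
print(supervised_depth_loss(pred, gt, mask))  # 0.5
```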

An Analysis of the methods to alleviate the cost of data labeling in Deep learning (딥 러닝에서 Labeling 부담을 줄이기 위한 연구분석)

  • Han, Seokmin
    • The Journal of the Convergence on Culture Technology / v.8 no.1 / pp.545-550 / 2022
  • It is well known that deep learning requires a large amount of data to train a deep neural network. It also requires each data item to be labeled in order to fully train the network, which means that experts must spend a lot of time providing labels. To alleviate the time-consuming labeling process, methods such as weakly-supervised learning, one-shot learning, self-supervised learning, and suggestive learning have been suggested. In this manuscript, those methods are analyzed and possible future directions of the research are suggested.

Sentiment Orientation Using Deep Learning Sequential and Bidirectional Models

  • Alyamani, Hasan J.
    • International Journal of Computer Science & Network Security / v.21 no.11 / pp.23-30 / 2021
  • Sentiment analysis has become a very important field of research because posting reviews has become a trend. Supervised, unsupervised, and semi-supervised machine learning methods have done a lot of work to mine this data. Feature engineering is a complex and technical part of machine learning; deep learning is a new trend in which this laborious work can be done automatically. Much research has been done on deep learning with Convolutional Neural Networks (CNN) and Long Short-Term Memory (LSTM) networks, but these require high processing speed and memory. Here the author suggests two deep learning models, a simple one and a bidirectional one, which can work on text data with normal processing speed. In the end both models are compared, and the bidirectional model is found to be best: the simple model achieves 50% accuracy, while the bidirectional model achieves 99% accuracy on training data and 78% accuracy on test data. These results are based on 10 epochs and a batch size of 40, and the accuracy could be increased further by trying different epochs and batch sizes.
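A minimal Keras sketch of the two model shapes compared here (a simple unidirectional LSTM and a bidirectional LSTM for binary sentiment classification); the vocabulary size and layer widths are illustrative assumptions, not the author's configuration.

```python
# Hedged sketch: simple vs. bidirectional LSTM sentiment classifiers.
import tensorflow as tf
from tensorflow.keras import layers, models

VOCAB_SIZE = 10_000  # assumed tokenizer vocabulary size

def build_model(bidirectional: bool) -> tf.keras.Model:
    recurrent = layers.LSTM(64)
    model = models.Sequential([
        layers.Embedding(VOCAB_SIZE, 128),
        layers.Bidirectional(recurrent) if bidirectional else recurrent,
        layers.Dense(1, activation="sigmoid"),  # positive / negative
    ])
    model.compile(optimizer="adam", loss="binary_crossentropy",
                  metrics=["accuracy"])
    return model

simple_model = build_model(bidirectional=False)
bidirectional_model = build_model(bidirectional=True)
# model.fit(x_train, y_train, epochs=10, batch_size=40) would match the
# 10-epoch / 40-batch-size setting mentioned in the abstract.
```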

Dam Sensor Outlier Detection using Mixed Prediction Model and Supervised Learning

  • Park, Chang-Mok
    • International journal of advanced smart convergence / v.7 no.1 / pp.24-32 / 2018
  • An outlier detection method using a mixed prediction model is described in this paper. The mixed prediction model consists of a time-series model and a regression model. Parameter estimation for the prediction model was performed using supervised learning, with a genetic algorithm adopted as the learning method. Experiments were performed on artificial and real data sets. Prediction performance is compared with existing prediction methods using the artificial data, and outlier detection is conducted on real sensor measurements from a dam. The validity of the proposed method was shown in the experiments.
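Illustration only: the sketch below flags outliers by thresholding the residuals of a one-step-ahead predictor; the paper's mixed time-series plus regression model with genetic-algorithm parameter estimation is replaced here by a plain autoregressive least-squares fit on synthetic data.

```python
# Hedged sketch: residual-based outlier detection on a synthetic sensor series.
import numpy as np

rng = np.random.default_rng(0)
series = np.sin(np.linspace(0, 20, 300)) + 0.05 * rng.standard_normal(300)
series[150] += 1.0  # inject an artificial outlier

# One-step-ahead AR(2) predictor fit by least squares (stand-in model).
X = np.column_stack([series[1:-1], series[:-2]])
y = series[2:]
coef, *_ = np.linalg.lstsq(X, y, rcond=None)
residual = y - X @ coef

threshold = 3 * residual.std()
outliers = np.where(np.abs(residual) > threshold)[0] + 2  # map back to series index
print("flagged indices:", outliers)
```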

A Study on Identification of Track Irregularity of High Speed Railway Track Using an SVM (SVM을 이용한 고속철도 궤도틀림 식별에 관한 연구)

  • Kim, Ki-Dong;Hwang, Soon-Hyun
    • Journal of Industrial Technology / v.33 no.A / pp.31-39 / 2013
  • There are two manual methods for identifying deterioration of high-speed railway track. In one, an administrator checks each attribute value of the track inspection data presented in a graph and determines whether maintenance is needed; in the other, an administrator checks the monthly trend of the attribute values for the corresponding section and makes the same decision. These methods have the weakness that decisions take longer as the amount of track inspection data increases. Machine learning, a field of artificial intelligence, allows a computer to identify deterioration of high-speed railway track automatically. Machine learning algorithms are classified into four types: supervised learning, unsupervised learning, semi-supervised learning, and reinforcement learning. This research uses supervised learning, which infers a separating function from training data. The method suggested in this research uses an SVM classifier, a main type of supervised learning that shows high efficiency on binary classification problems; it captures the difference between the two groups of data and identifies deterioration of high-speed railway track.

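A minimal Scikit-Learn sketch of the binary decision described above (maintenance needed vs. not needed); the features are synthetic stand-ins, not the authors' track inspection data.

```python
# Hedged sketch: RBF-kernel SVM for a binary maintenance decision.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

X, y = make_classification(n_samples=500, n_features=8, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

clf = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
clf.fit(X_tr, y_tr)
print("test accuracy:", clf.score(X_te, y_te))
```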

The use of support vector machines in semi-supervised classification

  • Bae, Hyunjoo;Kim, Hyungwoo;Shin, Seung Jun
    • Communications for Statistical Applications and Methods / v.29 no.2 / pp.193-202 / 2022
  • Semi-supervised learning has gained significant attention in recent applications. In this article, we provide a selective overview of popular semi-supervised methods and then propose a simple but effective algorithm for semi-supervised classification using support vector machines (SVM), one of the most popular binary classifiers in the machine learning community. The idea is simple and proceeds as follows. First, we apply dimension reduction to the unlabeled observations and cluster them to assign labels in the reduced space. An SVM is then applied to the combined set of labeled and unlabeled observations to construct a classification rule. The use of SVM enables us to extend the method to its nonlinear counterpart via the kernel trick. Our numerical experiments under various scenarios demonstrate that the proposed method is promising in semi-supervised classification.
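A rough sketch of the three-step recipe in the abstract; the data, the dimension-reduction method, and the rule that maps clusters to labels are illustrative assumptions rather than the authors' exact algorithm.

```python
# Hedged sketch: reduce, cluster, pseudo-label, then fit a kernel SVM.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.datasets import make_classification
from sklearn.decomposition import PCA
from sklearn.svm import SVC

X, y = make_classification(n_samples=600, n_features=20, random_state=0)
labeled = np.zeros(600, dtype=bool)
labeled[:60] = True  # only 10% of the observations keep their labels

# (1)-(2) Dimension reduction and clustering of the unlabeled observations.
pca = PCA(n_components=2).fit(X[~labeled])
Z_unlab = pca.transform(X[~labeled])
km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(Z_unlab)

# Assign each cluster the label of the labeled point nearest its centre
# (one simple assignment rule; the paper may use a different one).
Z_lab = pca.transform(X[labeled])
cluster_to_label = {
    c: y[labeled][np.linalg.norm(Z_lab - center, axis=1).argmin()]
    for c, center in enumerate(km.cluster_centers_)
}
pseudo_y = np.array([cluster_to_label[c] for c in km.labels_])

# (3) Kernel SVM on the combined labeled + pseudo-labeled set.
X_all = np.vstack([X[labeled], X[~labeled]])
y_all = np.concatenate([y[labeled], pseudo_y])
clf = SVC(kernel="rbf").fit(X_all, y_all)
print("fitted on", len(y_all), "observations")
```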

Smoothing parameter selection in semi-supervised learning (준지도 학습의 모수 선택에 관한 연구)

  • Seok, Kyungha
    • Journal of the Korean Data and Information Science Society / v.27 no.4 / pp.993-1000 / 2016
  • Semi-supervised learning makes it easy to use unlabeled data in supervised learning tasks such as classification. Applying semi-supervised learning to regression analysis, we propose two methods for better regression function estimation. The proposed methods assume different marginal densities of the independent variables and different smoothing parameters for the unlabeled and labeled data. We show that an overfitted pilot estimator should be used to achieve the fastest convergence rate, and that unlabeled data may help improve the convergence rate when the smoothing parameters are well estimated. We also find the conditions on the smoothing parameters needed to achieve the optimal convergence rate.
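For readers unfamiliar with smoothing parameters, the generic sketch below shows the role of the bandwidth in a Nadaraya-Watson kernel regression estimator, the kind of quantity whose selection is studied here; it is background illustration only, not the authors' semi-supervised estimator.

```python
# Hedged sketch: effect of the smoothing parameter (bandwidth) in kernel regression.
import numpy as np

def nadaraya_watson(x_query, x_train, y_train, bandwidth):
    """Gaussian-kernel regression estimate at the query points."""
    scaled = (x_query[:, None] - x_train[None, :]) / bandwidth
    weights = np.exp(-0.5 * scaled ** 2)
    return (weights * y_train).sum(axis=1) / weights.sum(axis=1)

rng = np.random.default_rng(0)
x = rng.uniform(0, 1, 100)
y = np.sin(2 * np.pi * x) + 0.1 * rng.standard_normal(100)
grid = np.linspace(0, 1, 5)
for h in (0.01, 0.1, 0.5):  # small h overfits, large h oversmooths
    print(h, nadaraya_watson(grid, x, y, h).round(2))
```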

Software Fault Prediction using Semi-supervised Learning Methods (세미감독형 학습 기법을 사용한 소프트웨어 결함 예측)

  • Hong, Euyseok
    • The Journal of the Institute of Internet, Broadcasting and Communication / v.19 no.3 / pp.127-133 / 2019
  • Most studies of software fault prediction have been about supervised learning models that use only labeled training data. Although supervised learning usually shows high prediction performance, most development groups do not have sufficient labeled data. Unsupervised learning models that use only unlabeled data for training are difficult to build and show poor performance. Semi-supervised learning models that use both labeled and unlabeled data can solve these problems. The self-training technique requires the fewest assumptions and constraints among semi-supervised techniques. In this paper, we implemented several models using self-training algorithms and evaluated them using Accuracy and AUC. As a result, YATSI showed the best performance.
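A minimal self-training sketch using Scikit-Learn's SelfTrainingClassifier; the fault data below is synthetic, and this generic wrapper is not the YATSI algorithm evaluated in the paper, only the basic self-training idea it builds on.

```python
# Hedged sketch: self-training with mostly unlabeled (label = -1) training data.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.semi_supervised import SelfTrainingClassifier

X, y = make_classification(n_samples=1000, weights=[0.8], random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Pretend most training modules are unlabeled (-1 marks "unknown").
rng = np.random.default_rng(0)
y_semi = y_tr.copy()
y_semi[rng.random(len(y_semi)) < 0.8] = -1

clf = SelfTrainingClassifier(LogisticRegression(max_iter=1000))
clf.fit(X_tr, y_semi)
print("test accuracy:", clf.score(X_te, y_te))
```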