• Title/Abstract/Keyword: Learning data set

Search results: 1,101 items

Atypical Character Recognition Based on Mask R-CNN for Hangul Signboard

  • Lim, Sooyeon
    • International journal of advanced smart convergence
    • /
    • Vol. 8 No. 3
    • /
    • pp.131-137
    • /
    • 2019
  • This study proposes a method of learning and recognizing the characteristics that serve as classification criteria for Hangul, using Mask R-CNN, one of the deep learning techniques, to recognize and classify atypical Hangul characters. Atypical characters on Hangul signboards take on many deformed and colorful shapes that go beyond standard characters. Therefore, in order to recognize Hangul signboard characters, it is necessary to learn these atypical Hangul characters separately rather than relying only on the existing standardized forms. We selected the Hangul character '닭' as sample data, constructed a data set of 5,383 Hangul images, and used it for training and validating the deep learning model. When the performance of the trained model was evaluated on a test set constructed to verify its reliability, the accuracy (area detection rate) was about 92.65%. We therefore confirmed that the proposed method is very useful for Hangul signboard character recognition, and we plan to extend it to various Hangul data.
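The abstract does not include the authors' code, but the general recipe for fine-tuning Mask R-CNN to detect a small number of character classes looks roughly like the sketch below, using torchvision. The class count, data loader name, and hyperparameters are assumptions for illustration only, not the paper's settings.

```python
# Hypothetical sketch of fine-tuning Mask R-CNN for Hangul character regions
# (illustrative only, not the authors' code; `signboard_loader` is assumed to
# yield (images, targets) in torchvision's detection format).
import torch
import torchvision
from torchvision.models.detection.faster_rcnn import FastRCNNPredictor
from torchvision.models.detection.mask_rcnn import MaskRCNNPredictor

def build_hangul_maskrcnn(num_classes=2):
    """Background plus one character class (e.g. the sample character '닭')."""
    model = torchvision.models.detection.maskrcnn_resnet50_fpn(weights="DEFAULT")
    # Replace the box-classification head for our class count.
    in_features = model.roi_heads.box_predictor.cls_score.in_features
    model.roi_heads.box_predictor = FastRCNNPredictor(in_features, num_classes)
    # Replace the mask head as well.
    in_channels = model.roi_heads.mask_predictor.conv5_mask.in_channels
    model.roi_heads.mask_predictor = MaskRCNNPredictor(in_channels, 256, num_classes)
    return model

model = build_hangul_maskrcnn()
optimizer = torch.optim.SGD(model.parameters(), lr=0.005, momentum=0.9)
```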

필기숫자 데이터에 대한 텐서플로우와 사이킷런의 인공지능 지도학습 방식의 성능비교 분석 (Performance Comparison Analysis of AI Supervised Learning Methods of Tensorflow and Scikit-Learn in the Writing Digit Data)

  • 조준모
    • 한국전자통신학회논문지
    • /
    • Vol. 14 No. 4
    • /
    • pp.701-706
    • /
    • 2019
  • With the recent advent of artificial intelligence, it is being applied to numerous industries and everyday applications and is having a major impact on our lives. A variety of machine learning methods are available in these fields. Supervised learning, one type of machine learning, takes feature values and target values as input during the training process. There are many kinds of supervised learning, and their performance depends on the characteristics and condition of the big data given as input. Therefore, in this paper, in order to compare the performance of multiple supervised learning methods on a specific big data set, we simulated and analyzed representative supervised learning methods provided by Tensorflow and Scikit-Learn in a Python and Jupyter Notebook environment.
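As a rough illustration of the kind of comparison described (not the paper's actual experiment), the same handwritten-digit data can be fed to a representative Scikit-Learn learner and a small Tensorflow/Keras network; the data set and model choices below are assumptions made for brevity.

```python
# Illustrative sketch: comparing a Scikit-Learn classifier with a small
# Tensorflow/Keras network on a handwritten-digit data set.
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.ensemble import RandomForestClassifier
import tensorflow as tf

X, y = load_digits(return_X_y=True)          # 8x8 digit images, flattened to 64 features
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

# Scikit-Learn: one representative supervised learner.
rf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_train, y_train)
print("RandomForest accuracy:", rf.score(X_test, y_test))

# Tensorflow/Keras: a small dense network on the same features.
net = tf.keras.Sequential([
    tf.keras.layers.Dense(64, activation="relu", input_shape=(64,)),
    tf.keras.layers.Dense(10, activation="softmax"),
])
net.compile(optimizer="adam", loss="sparse_categorical_crossentropy", metrics=["accuracy"])
net.fit(X_train / 16.0, y_train, epochs=20, verbose=0)
print("Keras accuracy:", net.evaluate(X_test / 16.0, y_test, verbose=0)[1])
```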

Training for Huge Data set with On Line Pruning Regression by LS-SVM

  • Kim, Dae-Hak;Shim, Joo-Yong;Oh, Kwang-Sik
    • 한국통계학회:학술대회논문집
    • /
    • 한국통계학회 2003년도 추계 학술발표회 논문집
    • /
    • pp.137-141
    • /
    • 2003
  • LS-SVM (least squares support vector machine) is a widely applicable and useful machine learning technique for classification and regression analysis. LS-SVM can be a good substitute for statistical methods, but computational difficulties remain in inverting the matrix for a huge data set. In the modern information society, we can easily obtain huge data sets in on-line or batch mode. For these kinds of huge data sets, we suggest an on-line pruning regression method using LS-SVM. With a relatively small number of pruned support vectors, we can achieve almost the same performance as regression on the full data set.
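To make the computational bottleneck concrete: a plain LS-SVM regression fit solves one (n+1)×(n+1) linear system over the full kernel matrix, which is exactly what becomes infeasible for huge data and what pruning to a small support-vector set avoids. The sketch below assumes an RBF kernel and does not reproduce the paper's on-line pruning procedure.

```python
# Minimal LS-SVM regression sketch (illustrative only).
import numpy as np

def lssvm_fit(X, y, gamma=10.0, sigma=1.0):
    """Solve the LS-SVM regression linear system for (b, alpha)."""
    n = X.shape[0]
    sq = np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=-1)
    K = np.exp(-sq / (2 * sigma ** 2))            # RBF kernel matrix
    A = np.zeros((n + 1, n + 1))
    A[0, 1:] = 1.0
    A[1:, 0] = 1.0
    A[1:, 1:] = K + np.eye(n) / gamma
    rhs = np.concatenate(([0.0], y))
    sol = np.linalg.solve(A, rhs)                 # this O(n^3) solve is the bottleneck
    return sol[0], sol[1:]                        # b, alpha

def lssvm_predict(X_train, alpha, b, X_new, sigma=1.0):
    sq = np.sum((X_new[:, None, :] - X_train[None, :, :]) ** 2, axis=-1)
    return np.exp(-sq / (2 * sigma ** 2)) @ alpha + b
```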

An improvement of LEM2 algorithm

  • The, Anh-Pham;Lee, Young-Koo;Lee, Sung-Young
    • 한국정보과학회:학술대회논문집
    • /
    • 한국정보과학회 2011년도 한국컴퓨터종합학술대회논문집 Vol.38 No.1(A)
    • /
    • pp.302-304
    • /
    • 2011
  • Rule-based machine learning techniques are very important in the real world today. Important applications of rule-based machine learning algorithms include medical data mining and business transaction mining. The difference between rule-based and model-based machine learning is that model-based machine learning outputs models, which are often very difficult for experts or humans to understand, whereas rule-based techniques output rule sets in IF-THEN format, for example: IF blood pressure = 90 AND kidney problem = yes THEN take this drug. In this way, a medical doctor can easily modify and update usable rules, which is the typical scenario in a medical decision support system. Currently, rough set theory is one of the best-known theories that can be used to produce such rules, and LEM2 is an algorithm that uses this theory to produce a small set of rules from a database. In this paper, we present an improvement of the LEM2 algorithm which incorporates variable precision techniques.
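As a rough, much-simplified illustration of the greedy covering idea behind LEM2 (omitting the rough-set lower/upper approximations and the variable-precision extension the paper proposes), IF-THEN rules can be grown by repeatedly adding the attribute-value pair that covers the most still-uncovered positive examples:

```python
# Highly simplified LEM2-style rule induction sketch (illustrative only).
def lem2_like(examples, target_label):
    """examples: list of (attribute_dict, label). Returns a list of rules,
    each rule being a dict of attribute -> required value (the IF part)."""
    uncovered = [e for e in examples if e[1] == target_label]
    rules = []
    while uncovered:
        conditions = {}
        while True:
            block = [e for e in examples
                     if all(e[0].get(a) == v for a, v in conditions.items())]
            if block and all(lbl == target_label for _, lbl in block):
                break                               # block lies inside the concept: rule done
            # Count attribute-value pairs among still-uncovered positive examples.
            counts = {}
            for attrs, _ in uncovered:
                for a, v in attrs.items():
                    if a not in conditions:
                        counts[(a, v)] = counts.get((a, v), 0) + 1
            if not counts:
                break                               # cannot refine the rule any further
            (a, v), _ = max(counts.items(), key=lambda kv: kv[1])
            conditions[a] = v                       # greedily add the best condition
        rules.append(dict(conditions))
        covered = [e for e in uncovered
                   if all(e[0].get(a) == v for a, v in conditions.items())]
        uncovered = [e for e in uncovered if e not in covered]
        if not covered:
            break                                   # safety stop on inconsistent data
    return rules
```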

Modeling the Properties of the PECVD Silicon Dioxide Films Using Polynomial Neural Networks

  • Han, Seung-Soo;Song, Kyung-Bin
    • 제어로봇시스템학회:학술대회논문집
    • /
    • 제어로봇시스템학회 1998년도 제13차 학술회의논문집
    • /
    • pp.195-200
    • /
    • 1998
  • Since the neural network was introduced, significant progress has been made in data handling and learning algorithms. Currently, the most popular learning algorithm in neural network training is the feed-forward error back-propagation (FFEBP) algorithm. Alongside the success of the FFEBP algorithm, polynomial neural network (PNN) learning has been proposed as a new learning method. PNN learning is a self-organizing process designed to determine an appropriate set of Ivakhnenko polynomials that allow the activation of many neurons to achieve a desired state of activation that mimics a given set of sampled patterns. These neurons are interconnected in such a way that the knowledge is stored in the Ivakhnenko coefficients. In this paper, a PNN model has been developed using plasma enhanced chemical vapor deposition (PECVD) experimental data. To characterize the PECVD process using PNN, SiO2 films deposited under varying conditions were analyzed using a fractional factorial experimental design with three center points. Parameters varied in these experiments included substrate temperature, pressure, RF power, silane flow rate, and nitrous oxide flow rate. Approximately five microns of SiO2 were deposited on (100) silicon wafers in a Plasma-Therm 700 series PECVD system at 13.56 MHz.
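The building block of such a PNN is a single GMDH-style neuron fitting the quadratic Ivakhnenko polynomial of two inputs by least squares; the sketch below is illustrative and uses placeholder variables rather than the paper's PECVD measurements.

```python
# Illustrative sketch of one GMDH/PNN neuron: fit the quadratic Ivakhnenko
# polynomial z = a0 + a1*x1 + a2*x2 + a3*x1*x2 + a4*x1^2 + a5*x2^2 by least squares.
import numpy as np

def fit_ivakhnenko_pair(x1, x2, y):
    """Return the 6 polynomial coefficients for one pair of input variables."""
    A = np.column_stack([np.ones_like(x1), x1, x2, x1 * x2, x1 ** 2, x2 ** 2])
    coeffs, *_ = np.linalg.lstsq(A, y, rcond=None)
    return coeffs

def eval_ivakhnenko(coeffs, x1, x2):
    a0, a1, a2, a3, a4, a5 = coeffs
    return a0 + a1 * x1 + a2 * x2 + a3 * x1 * x2 + a4 * x1 ** 2 + a5 * x2 ** 2

# A PNN layer fits one such neuron for every pair of inputs (e.g. temperature,
# pressure, RF power, gas flow rates), keeps the best-scoring neurons on a
# validation split, and feeds their outputs to the next layer.
```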

Design of Ballistic Calculation Model for Improving Accuracy of Naval Gun Firing based on Deep Learning

  • Oh, Moon-Tak
    • 한국컴퓨터정보학회논문지
    • /
    • Vol. 26 No. 12
    • /
    • pp.11-18
    • /
    • 2021
  • This paper studies the applicability of deep learning algorithms to target position prediction and firing error derivation in order to improve the accuracy of naval gun firing. For target position prediction, we confirmed the possibility that a more precise target position can be predicted when an LSTM model, one of the deep learning algorithms, and an RN structure are applied, and designed a model accordingly. For firing error derivation, the factors that affect the calculation of firing data are managed as a data set, a GAN is used to generate additional data sets, and reinforcement learning is then performed to design a model that can reduce the firing error. By combining the two models, we designed a deep-learning-based ballistic calculation model for improving firing accuracy.
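A minimal sketch of the target-prediction part, assuming an LSTM that maps a short history of observed positions to the next position (the sequence length, feature count, and hyperparameters below are illustrative assumptions, not values from the paper):

```python
# Hedged sketch of an LSTM-based target-position predictor.
import tensorflow as tf

SEQ_LEN, N_FEATURES = 10, 3        # e.g. 10 past observations of (x, y, z)

target_predictor = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(SEQ_LEN, N_FEATURES)),
    tf.keras.layers.LSTM(64),
    tf.keras.layers.Dense(N_FEATURES),   # predicted next position
])
target_predictor.compile(optimizer="adam", loss="mse")
# target_predictor.fit(track_histories, next_positions, ...) on recorded track data;
# the paper additionally uses GAN-generated firing-data sets and reinforcement
# learning to reduce the firing error, which is not sketched here.
```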

머신러닝을 이용한 국내 수입 자동차 구매 해약 예측 모델 연구: H 수입차 딜러사 대상으로 (A Study on the Prediction Model for Imported Vehicle Purchase Cancellation Using Machine Learning: Case of H Imported Vehicle Dealers)

  • 정동균;이종화;이현규
    • 한국정보시스템학회지:정보시스템연구
    • /
    • Vol. 30 No. 2
    • /
    • pp.105-126
    • /
    • 2021
  • Purpose: The purpose of this study is to build an optimal machine learning model for predicting purchase cancellations in the car sales business. The data set of contract, cancellation, and sales information accumulated in the sales force automation (SFA) system, which imported-car dealers commonly use for sales, customer, and inventory management, is applied to several machine learning models to compare their cancellation-prediction performance. Design/methodology/approach: This study extracts 29,073 contract, cancellation, and sales records from 2015 to 2020 accumulated in the SFA system for imported-car dealers and uses Python in a Jupyter notebook to perform data pre-processing, validation, and modeling, fitting the machine learning models and then predicting on new data. Findings: This study confirms that cancellation prediction is possible by applying car purchase contract information to machine learning models. It demonstrates the feasibility of developing and using a generalized predictive model from imported-car sales system data with machine learning technology. By predicting the cancellation probability of individual customers, dealers can focus on potentially lost customers, reduce failed sales, and increase sales revenue.
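A hedged sketch of this kind of contract-based cancellation classifier is given below; the file name, column names, and model choice are hypothetical placeholders, not the study's actual SFA fields or final model.

```python
# Illustrative cancellation-prediction pipeline (hypothetical columns).
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.compose import ColumnTransformer
from sklearn.preprocessing import OneHotEncoder
from sklearn.pipeline import Pipeline
from sklearn.ensemble import GradientBoostingClassifier

df = pd.read_csv("contracts.csv")                  # assumed export of contract records
X = df.drop(columns=["cancelled"])                 # hypothetical target column
y = df["cancelled"]

categorical = ["model", "dealer", "payment_type"]  # hypothetical categorical fields
pipeline = Pipeline([
    ("prep", ColumnTransformer([
        ("cat", OneHotEncoder(handle_unknown="ignore"), categorical),
    ], remainder="passthrough")),
    ("clf", GradientBoostingClassifier(random_state=0)),
])

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2,
                                                    stratify=y, random_state=0)
pipeline.fit(X_train, y_train)
print("cancellation prediction accuracy:", pipeline.score(X_test, y_test))
```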

Developing a National Data Metrics Framework for Learning Analytics in Korea

  • RHA, Ilju;LIM, Cheolil;CHO, Young Hoan;CHOI, Hyoseon;YUN, Haeseon;YOO, Mina;JEONG, Eui-Suk
    • Educational Technology International
    • /
    • Vol. 18 No. 1
    • /
    • pp.1-25
    • /
    • 2017
  • Educational applications of big data analysis have been of interest as a way to improve learning effectiveness and efficiency. As a basic challenge for such applications, the purpose of this study is to develop a comprehensive data set scheme for learning analytics in the context of digital textbook usage within the K-12 school environments of Korea. On the basis of the literature review, the Start-up Mega Planning model of needs assessment methodology was used as this study sought to negotiate solutions among different stakeholders for a national-level learning metrics framework. The Ministry of Education (MOE), Seoul Metropolitan Office of Education (SMOE), and Korean Education and Research Information Service (KERIS) were involved in the discussion of the learning metrics framework scope. Finally, we propose a national learning metrics framework that reflects considerations such as the dynamic educational context of K-12 Korean schools and the feasibility of the metrics. The possibilities and limitations of the suggested framework for learning metrics are discussed and future areas of study are suggested.

Weighted Fast Adaptation Prior on Meta-Learning

  • Widhianingsih, Tintrim Dwi Ary;Kang, Dae-Ki
    • International journal of advanced smart convergence
    • /
    • Vol. 8 No. 4
    • /
    • pp.68-74
    • /
    • 2019
  • As deep learning architectures become deeper, the need for data grows very large. In real problems, obtaining huge amounts of data in some disciplines is very costly, so learning from limited data has become a very appealing area in recent years. Meta-learning offers a new perspective on learning a model under this limitation. Meta-SGD, a state-of-the-art model built on a meta-learning framework, is based on the key idea of learning a hyperparameter, the learning rate of the fast adaptation stage, in the outer update. However, this learning rate is usually set to be very small; consequently, the SGD objective yields only a small improvement to the weight parameters, so the prior becomes the key to good adaptation. Because meta-learning aims to adapt using a single gradient step in the inner update, performance may suffer, especially if the prior is far from the expected one or even works against adapting the model effectively. For this reason, we propose to add a weight term that decreases, or in some conditions increases, the effect of this prior. Experiments on few-shot learning show that emphasizing or weakening the prior can give better performance than using its original value.
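Our reading of the proposed modification, expressed as a single fast-adaptation step (a conceptual sketch, not the authors' implementation): the meta-learned initialization is scaled by a weight before the per-parameter Meta-SGD update is applied.

```python
# Conceptual sketch of the weighted fast-adaptation idea.
import torch

def weighted_inner_update(theta, grad, alpha, w):
    """One fast-adaptation step: theta' = w * theta - alpha * grad.

    theta : meta-learned initialization (the 'prior')
    grad  : gradient of the support-set loss w.r.t. theta
    alpha : per-parameter learning rate, as in Meta-SGD
    w     : weight emphasizing (w > 1) or weakening (w < 1) the prior
    """
    return w * theta - alpha * grad

# Plain Meta-SGD corresponds to w = 1.
theta = torch.randn(5, requires_grad=True)
grad = torch.randn(5)
alpha = torch.full((5,), 0.01)
adapted = weighted_inner_update(theta, grad, alpha, w=0.9)
```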

신경망 학습앙상블에 관한 연구 - 주가예측을 중심으로 - (A Study on Training Ensembles of Neural Networks - A Case of Stock Price Prediction)

  • 이영찬;곽수환
    • 지능정보연구
    • /
    • Vol. 5 No. 1
    • /
    • pp.95-101
    • /
    • 1999
  • In this paper, a comparison of different methods for combining predictions from neural networks is given: bagging, bumping, and balancing. These are based on decomposing the ensemble generalization error into an ambiguity term and a term incorporating the generalization performance of the individual networks. Neural networks and other machine learning models are prone to overfitting. One strategy to prevent a neural network from overfitting is to stop training at an early stage of the learning process: the complete data set is split into a training set and a validation set, and training is stopped when the error on the validation set starts increasing. The stability of the networks is highly dependent on the division into training and validation sets, and also on the random initial weights and the chosen minimization procedure. This makes early-stopped networks rather unstable: a small change in the data or different initial conditions can produce large changes in the prediction. Therefore, it is advisable to repeat the same procedure several times starting from different initial weights, a technique often referred to as training ensembles of neural networks. In this paper, we present a comparison of these three statistical methods for preventing overfitting of neural networks.
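A hedged sketch of the basic training-ensemble recipe discussed here: several identically structured networks are trained from different random initial weights with early stopping on a shared validation split, and their predictions are averaged. Bagging, bumping, and balancing differ mainly in how the training sets and combination weights are chosen; the architecture and data shapes below are assumptions.

```python
# Illustrative training-ensemble sketch with early stopping.
import numpy as np
import tensorflow as tf

def make_net():
    net = tf.keras.Sequential([
        tf.keras.layers.Dense(32, activation="tanh", input_shape=(10,)),
        tf.keras.layers.Dense(1),
    ])
    net.compile(optimizer="adam", loss="mse")
    return net

def train_ensemble(X_train, y_train, X_val, y_val, n_members=5):
    stop = tf.keras.callbacks.EarlyStopping(patience=10, restore_best_weights=True)
    members = []
    for _ in range(n_members):                      # different random initial weights
        net = make_net()
        net.fit(X_train, y_train, validation_data=(X_val, y_val),
                epochs=500, callbacks=[stop], verbose=0)
        members.append(net)
    return members

def ensemble_predict(members, X):
    # Average the member predictions to reduce the instability of early stopping.
    return np.mean([m.predict(X, verbose=0) for m in members], axis=0)
```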
