• Title/Summary/Keyword: Learning Performance Comparison

Comparison of Different Deep Learning Optimizers for Modeling Photovoltaic Power

  • Poudel, Prasis; Bae, Sang Hyun; Jang, Bongseog
    • Journal of Integrative Natural Science / v.11 no.4 / pp.204-208 / 2018
  • This paper compares the performance of different optimizers for photovoltaic power modeling using deep learning techniques. Six deep learning optimizers are tested on Long Short-Term Memory (LSTM) networks: Adam, Stochastic Gradient Descent, Root Mean Square Propagation, Adaptive Gradient, and the variants Adamax and Nadam. To compare the optimization techniques, both highly and weakly fluctuating photovoltaic power outputs are examined, using real data obtained from a site at Mokpo University. A prediction program was developed in Python with Keras to evaluate the optimizers. The prediction errors of each optimizer in both the high- and low-power cases show that Adam performs better than the other optimizers.
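
The abstract names the six Keras optimizer identifiers, so the comparison loop can be sketched directly; the LSTM size, input window, and data below are illustrative assumptions rather than the paper's actual setup.

```python
# Hypothetical sketch of the optimizer comparison: shapes and hyperparameters
# are assumptions; the Mokpo University PV data is replaced by random values.
import numpy as np
from tensorflow import keras

def build_lstm(optimizer):
    model = keras.Sequential([
        keras.layers.Input(shape=(24, 1)),   # assumed: 24 past readings per sample
        keras.layers.LSTM(32),
        keras.layers.Dense(1),               # next-step PV power output
    ])
    model.compile(optimizer=optimizer, loss="mse")
    return model

X = np.random.rand(1000, 24, 1).astype("float32")  # stand-in for PV measurements
y = np.random.rand(1000, 1).astype("float32")

for name in ["adam", "sgd", "rmsprop", "adagrad", "adamax", "nadam"]:
    model = build_lstm(name)
    hist = model.fit(X, y, epochs=5, batch_size=32, verbose=0)
    print(f"{name}: final training MSE = {hist.history['loss'][-1]:.4f}")
```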

Comparison of Image Classification Performance in Convolutional Neural Network according to Transfer Learning (전이학습 방법에 따른 컨벌루션 신경망의 영상 분류 성능 비교)

  • Park, Sung-Wook; Kim, Do-Yeon
    • Journal of Korea Multimedia Society / v.21 no.12 / pp.1387-1395 / 2018
  • The Convolutional Neural Network (CNN), a core deep learning algorithm, shows better performance than other machine learning algorithms. However, without sufficient data, a CNN cannot achieve satisfactory performance even if the classifier itself is excellent. In this situation, transfer learning has proven highly effective. In this paper, we apply two transfer learning methods (freezing and retraining) to three CNN models (ResNet-50, Inception-V3, and DenseNet-121) and analyze how the classification performance of each CNN changes with the method. In statistical significance tests using various evaluation metrics, the performance of ResNet-50, Inception-V3, and DenseNet-121 differed between the two methods by factors of 1.18, 1.09, and 1.17, respectively. Based on this, we conclude that the retraining method may be more effective than the freezing method for transfer learning in image classification problems.
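
A minimal sketch of the two compared settings, freezing versus retraining, using ResNet-50 from Keras Applications; the classification head, input size, and class count are assumptions, since the paper's datasets are not reproduced here.

```python
# Freezing keeps the ImageNet backbone weights fixed; retraining fine-tunes
# them along with the new head. Head size and class count are assumptions.
from tensorflow import keras

def build_transfer_model(retrain_backbone: bool, num_classes: int = 10):
    backbone = keras.applications.ResNet50(
        weights="imagenet", include_top=False, input_shape=(224, 224, 3))
    backbone.trainable = retrain_backbone  # False = freezing, True = retraining
    model = keras.Sequential([
        backbone,
        keras.layers.GlobalAveragePooling2D(),
        keras.layers.Dense(num_classes, activation="softmax"),
    ])
    model.compile(optimizer=keras.optimizers.Adam(1e-4),
                  loss="categorical_crossentropy", metrics=["accuracy"])
    return model

frozen = build_transfer_model(retrain_backbone=False)
retrained = build_transfer_model(retrain_backbone=True)
# Each model would then be fit on the target image dataset and compared.
```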

Deep Learning-based Deraining: Performance Comparison and Trends (딥러닝 기반 Deraining 기법 비교 및 연구 동향)

  • Cho, Minji; Park, Ye-In; Cho, Yubin; Kang, Suk-Ju
    • IEMEK Journal of Embedded Systems and Applications / v.16 no.5 / pp.225-232 / 2021
  • Deraining is an image restoration task that must trade off local detail against broad contextual information while recovering images. Current studies adopt attention mechanisms, actively researched in natural language processing, to handle both global and local features. This paper classifies existing deraining methods and provides a comparative analysis and performance comparison, using several datasets to assess generalization.
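
For readers unfamiliar with the mechanism the surveyed models borrow, here is a minimal NumPy sketch of scaled dot-product self-attention over a flattened feature map; it is purely illustrative and not any particular deraining network.

```python
# Self-attention lets each spatial position mix in context from every other
# position, which is how these models combine global and local features.
import numpy as np

def scaled_dot_product_attention(q, k, v):
    # q, k, v: (num_positions, dim) projections of image features
    scores = q @ k.T / np.sqrt(q.shape[-1])          # pairwise similarity
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)   # softmax over positions
    return weights @ v

feat = np.random.rand(64, 32)  # e.g. an 8x8 feature map flattened to 64 tokens
out = scaled_dot_product_attention(feat, feat, feat)  # self-attention
print(out.shape)  # (64, 32)
```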

Comparative Study of Tokenizer Based on Learning for Sentiment Analysis (고객 감성 분석을 위한 학습 기반 토크나이저 비교 연구)

  • Kim, Wonjoon
    • Journal of Korean Society for Quality Management / v.48 no.3 / pp.421-431 / 2020
  • Purpose: The purpose of this study is to compare and analyze tokenizers used in natural language processing for customer sentiment analysis. Methods: A supervised learning-based tokenizer, Mecab-Ko, and an unsupervised learning-based tokenizer, SentencePiece, were compared. Three algorithms, Naïve Bayes, k-Nearest Neighbor, and Decision Tree, were selected to compare the performance of each tokenizer, using three metrics: accuracy, precision, and recall. Results: Through performance evaluation and verification, it was confirmed that SentencePiece shows better classification performance than Mecab-Ko. To confirm the robustness of these results, independent t-tests were conducted on the evaluation results for the two tokenizers. The tests confirmed that the classification performance of the SentencePiece tokenizer was higher with the k-Nearest Neighbor and Decision Tree algorithms, and the Decision Tree showed slightly higher accuracy among the three classification algorithms. Conclusion: The SentencePiece tokenizer can classify and interpret customer sentiment in Korean online reviews more accurately. It also appears able to assign specific meaning to short words and jargon that users often employ when evaluating products but that are not defined in advance.
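
A rough sketch of the classifier comparison, assuming the reviews have already been tokenized (the Mecab-Ko/SentencePiece preprocessing and the paper's corpus are not reproduced; the toy reviews and labels below are invented stand-ins):

```python
# Compare the paper's three classifiers on bag-of-words features built from
# pre-tokenized review text, scoring accuracy, precision, and recall.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.neighbors import KNeighborsClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import accuracy_score, precision_score, recall_score

docs = ["배송 빠르 고 좋아요", "품질 별로 예요", "가격 대비 만족", "다시 는 안 사요"]
labels = [1, 0, 1, 0]  # 1 = positive sentiment, 0 = negative

X = CountVectorizer().fit_transform(docs)
for name, clf in [("Naive Bayes", MultinomialNB()),
                  ("k-NN", KNeighborsClassifier(n_neighbors=1)),
                  ("Decision Tree", DecisionTreeClassifier())]:
    pred = clf.fit(X, labels).predict(X)  # toy in-sample evaluation only
    print(name, accuracy_score(labels, pred),
          precision_score(labels, pred), recall_score(labels, pred))
```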

Performance Comparison of Deep Learning Model Loss Function for Scaffold Defect Detection (인공지지체 불량 검출을 위한 딥러닝 모델 손실 함수의 성능 비교)

  • Lee, Song Yeon; Huh, Yong Jeong
    • Journal of the Semiconductor & Display Technology / v.22 no.2 / pp.40-44 / 2023
  • Deep learning-based defect detection requires minimal loss and high accuracy to pinpoint product defects. In this paper, we measure the training loss of deep learning models on images of disc-shaped artificial scaffolds in order to compare the performance of the cross-entropy loss functions used in object detection algorithms. Models were built from normal and defective artificial scaffold images using categorical cross-entropy and sparse categorical cross-entropy. Training was repeated five times with each loss function, and the average loss, average accuracy, final loss, and final accuracy for each loss function were recorded.
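
The practical difference between the two losses is the label format. A brief Keras sketch, with an assumed small CNN and random stand-in images in place of the scaffold data:

```python
# Categorical cross-entropy expects one-hot labels; sparse categorical
# cross-entropy expects integer labels. Otherwise they compute the same loss.
import numpy as np
from tensorflow import keras

def build_classifier(loss):
    model = keras.Sequential([
        keras.layers.Input(shape=(64, 64, 1)),  # assumed grayscale scaffold image
        keras.layers.Conv2D(16, 3, activation="relu"),
        keras.layers.GlobalAveragePooling2D(),
        keras.layers.Dense(2, activation="softmax"),  # normal vs. defective
    ])
    model.compile(optimizer="adam", loss=loss, metrics=["accuracy"])
    return model

X = np.random.rand(200, 64, 64, 1).astype("float32")
y_int = np.random.randint(0, 2, size=200)          # integer labels
y_onehot = keras.utils.to_categorical(y_int, 2)    # one-hot labels

build_classifier("sparse_categorical_crossentropy").fit(X, y_int, epochs=2, verbose=0)
build_classifier("categorical_crossentropy").fit(X, y_onehot, epochs=2, verbose=0)
```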

Comparison of Traditional Workloads and Deep Learning Workloads in Memory Read and Write Operations

  • Lee, Jeongha; Bahn, Hyokyung
    • International Journal of Advanced Smart Convergence / v.12 no.4 / pp.164-170 / 2023
  • With recent advances in AI (artificial intelligence) and HPC (high-performance computing) technologies, deep learning has proliferated across various domains of the 4th industrial revolution. As the workload volume of deep learning grows, analyzing its memory reference characteristics becomes important. In this article, we analyze the memory reference traces of deep learning workloads in comparison with traditional workloads, focusing especially on read and write operations. Based on our analysis, we observe some unique characteristics of deep learning memory references that are quite different from traditional workloads. First, instruction references account for only a small portion of deep learning workloads. Second, write references account for the majority of memory references, which also differs from traditional workloads. Third, although write references are dominant, they exhibit low reference skewness; specifically, the skew factor of write references is small compared to traditional workloads. We expect that this analysis will be helpful in efficiently designing memory management systems for deep learning workloads.
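
A hypothetical sketch of the kind of trace analysis described: counting the read/write mix and a simple hot-address skew measure. The (operation, address) trace format is an assumption; real traces depend on the collection tool.

```python
# Count read/write shares in a memory reference trace and estimate write
# skewness via the share of writes hitting the single hottest address.
from collections import Counter

trace = [("R", 0x1000), ("W", 0x2000), ("W", 0x2000), ("W", 0x3000), ("R", 0x1000)]

writes = [addr for op, addr in trace if op == "W"]
print(f"write share: {len(writes) / len(trace):.2%}")

hot_addr, hot_count = Counter(writes).most_common(1)[0]
print(f"hottest write address {hot_addr:#x} takes {hot_count / len(writes):.2%} of writes")
```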

A Survey on Vision Transformers for Object Detection Task (객체 탐지 과업에서의 트랜스포머 기반 모델의 특장점 분석 연구)

  • Ha, Jungmin; Lee, Hyunjong; Eom, Jungmin; Lee, Jaekoo
    • IEMEK Journal of Embedded Systems and Applications / v.17 no.6 / pp.319-327 / 2022
  • Transformers are among the most prominent deep learning models, having achieved great success in natural language processing and also shown good performance in computer vision. In this survey, we categorize transformer-based models for computer vision, particularly for object detection tasks, and perform comprehensive comparative experiments to understand the characteristics of each model. We then evaluate the models, subdivided into standard transformers, transformers with key-point attention, and transformers that add attention over coordinates, comparing them in terms of object detection accuracy and real-time performance. For the performance comparison, we use two metrics: frames per second (FPS) and mean average precision (mAP). Finally, through various experiments, we identify the trends and relationships between detection accuracy and real-time performance across the transformer models.
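
A minimal sketch of how the FPS metric can be measured; the detector here is a timed placeholder, not one of the surveyed transformer models.

```python
# Measure throughput (frames per second) by timing repeated forward passes
# of a detector over a batch of images.
import time
import numpy as np

def fake_detector(image):
    time.sleep(0.01)  # stand-in for one forward pass of a detection model
    return []

images = [np.zeros((640, 640, 3), dtype=np.uint8) for _ in range(50)]

start = time.perf_counter()
for img in images:
    fake_detector(img)
elapsed = time.perf_counter() - start
print(f"FPS = {len(images) / elapsed:.1f}")
```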

A Deep Learning Performance Comparison of R and Tensorflow (R과 텐서플로우 딥러닝 성능 비교)

  • Jang, Sung-Bong
    • The Journal of the Convergence on Culture Technology / v.9 no.4 / pp.487-494 / 2023
  • In this study, we compared the performance of R and TensorFlow, two free deep learning tools. In the experiment, six deep neural networks were built with each tool and trained on a 10-year Korean temperature dataset. The input layer of the constructed networks had 10 nodes and the output layer had 5, and experiments were run with hidden layers of 5, 10, and 20 nodes. The dataset comprises 3,600 temperature readings collected in Gangnam-gu, Seoul from March 1, 2013 to March 29, 2023. For the performance comparison, the trained networks predicted the temperature 5 days ahead, and the root mean square error (RMSE) between predicted and actual values was measured. The results show that with one hidden layer, the training error of R was 0.04731176 versus 0.06677193 for TensorFlow; with two hidden layers, R measured 0.04782134 and TensorFlow 0.05799060. Overall, R showed better performance. By providing quantitative performance information on the two tools, we aim to ease tool selection for users new to machine learning.
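
A hedged sketch of the TensorFlow side of such an experiment: a dense network with 10 input nodes and 5 output nodes, scored by RMSE. The hidden-layer size and the random stand-in data are assumptions; the paper's actual temperature series is not reproduced.

```python
# Train a small dense network to map 10 past readings to 5 future values,
# then compute RMSE between predictions and targets.
import numpy as np
from tensorflow import keras

model = keras.Sequential([
    keras.layers.Input(shape=(10,)),           # 10 past temperature readings
    keras.layers.Dense(10, activation="relu"), # one hidden layer (sizes varied in the paper)
    keras.layers.Dense(5),                     # predict the next 5 days
])
model.compile(optimizer="adam", loss="mse")

X = np.random.rand(500, 10).astype("float32")  # stand-in for the Gangnam-gu data
y = np.random.rand(500, 5).astype("float32")
model.fit(X, y, epochs=10, verbose=0)

pred = model.predict(X, verbose=0)
rmse = np.sqrt(np.mean((pred - y) ** 2))
print(f"RMSE = {rmse:.6f}")
```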

Comparison of long-term forecasting performance of export growth rate using time series analysis models and machine learning analysis (시계열 분석 모형 및 머신 러닝 분석을 이용한 수출 증가율 장기예측 성능 비교)

  • Nam, Seong-Hwi
    • Korea Trade Review / v.46 no.6 / pp.191-209 / 2021
  • In this paper, various time series analysis models and machine learning models are presented for long-term prediction of the export growth rate, and their prediction performance is compared using RMSE and MAE. The export growth rate is a major indicator of economic conditions and is also used in economic forecasting. Since the export growth rate can be negative as well as positive, the PReLU function, whose output can take negative values, was used as the activation function of the deep learning models instead of the ReLU function commonly used in time series prediction. The time series prediction performance of each model was compared on three types of data. Long-term forecasts of the export growth rate were produced with three forecasting schemes: fixed, recursive, and rolling. The traditional time series model ARDL showed excellent performance, but as the training period lengthened, the relative performance of machine learning models, including LSTM, improved.
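
A short Keras sketch of the ReLU-versus-PReLU point: ReLU zeroes negative values, while PReLU passes a learnable fraction of them through, which matters when the target (the export growth rate) can be negative. The 0.25 initial slope is an illustrative assumption.

```python
# Compare ReLU and PReLU outputs on negative inputs.
import numpy as np
from tensorflow import keras

x = np.array([[-2.0], [-0.5], [0.5], [2.0]], dtype="float32")

relu = keras.layers.ReLU()
prelu = keras.layers.PReLU(alpha_initializer=keras.initializers.Constant(0.25))

print(relu(x).numpy().ravel())   # [ 0.     0.     0.5   2.  ] negatives clipped to zero
print(prelu(x).numpy().ravel())  # [-0.5   -0.125  0.5   2.  ] negatives scaled by 0.25
```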

Performance Comparison Analysis of AI Supervised Learning Methods of Tensorflow and Scikit-Learn in the Writing Digit Data (필기숫자 데이터에 대한 텐서플로우와 사이킷런의 인공지능 지도학습 방식의 성능비교 분석)

  • Jo, Jun-Mo
    • The Journal of the Korea Institute of Electronic Communication Sciences / v.14 no.4 / pp.701-706 / 2019
  • The advent of AI (Artificial Intelligence) has led to its application in many industrial and general domains, affecting our daily lives. Various types of machine learning methods are used in this field. Supervised learning methods take features and targets as input during training. There are many supervised learning methods, and their performance varies depending on the characteristics of the input data. Therefore, in this paper, to compare the performance of various supervised learning methods on a specific dataset, the supervised learning methods supported in TensorFlow and Scikit-Learn are simulated and analyzed in a Jupyter Notebook environment with Python.
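
A hedged sketch of the kind of side-by-side comparison described, with scikit-learn's bundled 8x8 digits dataset standing in for the paper's handwritten digit data:

```python
# Train one scikit-learn classifier and one small TensorFlow network on the
# same digit data and compare test accuracy.
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC
from tensorflow import keras

X, y = load_digits(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# scikit-learn side
svm = SVC().fit(X_train, y_train)
print("SVC accuracy:", svm.score(X_test, y_test))

# TensorFlow side
model = keras.Sequential([
    keras.layers.Input(shape=(64,)),
    keras.layers.Dense(32, activation="relu"),
    keras.layers.Dense(10, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.fit(X_train / 16.0, y_train, epochs=20, verbose=0)  # pixel values scaled to [0, 1]
print("Keras accuracy:", model.evaluate(X_test / 16.0, y_test, verbose=0)[1])
```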