• Title/Summary/Keyword: Deep Learning System

Search Results: 1,738

Aspect-based Sentiment Analysis of Product Reviews using Multi-agent Deep Reinforcement Learning

  • M. Sivakumar;Srinivasulu Reddy Uyyala
    • Asia pacific journal of information systems / v.32 no.2 / pp.226-248 / 2022
  • Existing models for sentiment analysis of product reviews learn from past data, and new data is labeled based on that training, but the new data itself is never used by the existing system to make decisions. The proposed Aspect-based multi-agent Deep Reinforcement learning Sentiment Analysis (ADRSA) model learns from its very first data without the help of any training dataset and labels each sentence with an aspect category and a sentiment polarity. It keeps learning from new data and updates its knowledge to improve its intelligence, so its decisions change over time as new data arrives. As a result, the accuracy of sentiment analysis using deep reinforcement learning is improved over supervised and unsupervised learning methods, and the sentiments of premium customers on a particular site can be surfaced to other customers effectively. A dynamic environment with a strong knowledge base helps the system remember sentences, and using the State Action Reward State Action (SARSA) algorithm with the Bidirectional Encoder Representations from Transformers (BERT) model improved the accuracy of the proposed system compared with state-of-the-art methods.
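
The paper pairs SARSA with BERT sentence encodings; its implementation is not published, so the following is only a minimal sketch of a tabular SARSA update applied to (aspect, polarity) labelling decisions. The action set, reward, and state encoding here are assumptions for illustration, and the BERT encoder is omitted.

    import random
    from collections import defaultdict

    # Minimal tabular SARSA sketch (illustrative only; ADRSA itself couples
    # SARSA with BERT sentence representations, which are omitted here).
    ACTIONS = [("price", "positive"), ("price", "negative"),
               ("quality", "positive"), ("quality", "negative")]  # assumed label set

    Q = defaultdict(float)              # Q[(state, action)] -> estimated value
    alpha, gamma, epsilon = 0.1, 0.9, 0.1

    def choose_action(state):
        """Epsilon-greedy choice of an (aspect, polarity) label for a sentence."""
        if random.random() < epsilon:
            return random.choice(ACTIONS)
        return max(ACTIONS, key=lambda a: Q[(state, a)])

    def sarsa_update(state, action, reward, next_state, next_action):
        """On-policy update: Q(s,a) += alpha * (r + gamma*Q(s',a') - Q(s,a))."""
        td_target = reward + gamma * Q[(next_state, next_action)]
        Q[(state, action)] += alpha * (td_target - Q[(state, action)])

    # One step: label a sentence, observe a reward, then update the estimate.
    s, a = "battery drains fast", ("quality", "negative")
    s2 = "screen looks great"
    sarsa_update(s, a, reward=1.0, next_state=s2, next_action=choose_action(s2))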

Implementation of Deep Learning-based Label Inspection System Applicable to Edge Computing Environments (엣지 컴퓨팅 환경에서 적용 가능한 딥러닝 기반 라벨 검사 시스템 구현)

  • Bae, Ju-Won;Han, Byung-Gil
    • IEMEK Journal of Embedded Systems and Applications / v.17 no.2 / pp.77-83 / 2022
  • In this paper, a two-stage object detection approach is proposed to implement a deep learning-based label inspection system in edge computing environments. Since the label printed on a product during the production process contains important information about the product, it is important to check that the label information is correct. The proposed system uses a lightweight deep learning model that can run on low-performance edge computing devices, and the two-stage object detection approach is applied to compensate for its relatively low accuracy. The two-stage approach consists of two object detection networks: a Label Area Detection Network and a Character Detection Network. The Label Area Detection Network finds the label area in the product image, and the Character Detection Network detects the words within that area. With this approach, characters can be detected precisely even with lightweight deep learning models. The SF-YOLO model applied in the proposed system is a YOLO-based lightweight object detection network designed for edge computing devices. It showed up to 2 times faster processing time and a considerable improvement in accuracy compared to other YOLO-based lightweight models such as YOLOv3-tiny and YOLOv4-tiny, and because its amount of computation is low, it can easily be applied in edge computing environments.
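
The two-stage pipeline described above (a Label Area Detection Network followed by a Character Detection Network) can be summarised as below. The detector interfaces and the inspection rule are hypothetical placeholders standing in for the paper's SF-YOLO-based networks, not the authors' code.

    # Illustrative two-stage inspection pipeline; `label_detector` and
    # `char_detector` stand in for the SF-YOLO-based networks in the paper.
    def inspect_label(image, label_detector, char_detector, expected_words):
        # Stage 1: find label regions in the full product image.
        label_boxes = label_detector.detect(image)       # [(x1, y1, x2, y2), ...]
        found_words = []
        for (x1, y1, x2, y2) in label_boxes:
            crop = image[y1:y2, x1:x2]                   # restrict stage 2 to the label area
            # Stage 2: detect words only inside the cropped label region.
            found_words.extend(word.text for word in char_detector.detect(crop))
        # The inspection passes only if every expected word was found.
        return all(word in found_words for word in expected_words)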

Study on Automatic Bug Triage using Deep Learning (딥 러닝을 이용한 버그 담당자 자동 배정 연구)

  • Lee, Sun-Ro;Kim, Hye-Min;Lee, Chan-Gun;Lee, Ki-Seong
    • Journal of KIISE / v.44 no.11 / pp.1156-1164 / 2017
  • Existing studies on automatic bug triage have mostly designed the prediction system around machine learning algorithms, so applying a high-performance machine learning model is at the core of an automatic bug triage system's performance. Related research has mainly used models with high performance, such as SVM and Naïve Bayes. In this paper, we apply deep learning, which has recently shown good performance in the field of machine learning, to automatic bug triage and evaluate its performance. Experimental results show that the deep learning-based bug triage system achieves 48% accuracy in the active-developer experiments, an improvement of up to 69% over conventional machine learning techniques.
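
The abstract does not specify the network architecture, so the sketch below only illustrates the general setup of learning-based bug triage: bug-report text is vectorised and a small feed-forward classifier predicts the assignee. The data, features, and hyperparameters are assumptions, not the paper's configuration.

    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.neural_network import MLPClassifier
    from sklearn.pipeline import make_pipeline

    # Toy data: bug-report summaries and the developers who fixed them.
    reports = ["NullPointerException in login handler",
               "UI button misaligned on settings page",
               "Crash when parsing malformed config file",
               "Dark theme colors wrong in sidebar"]
    assignees = ["alice", "bob", "alice", "bob"]

    # TF-IDF features feeding a small multi-layer perceptron classifier.
    triage_model = make_pipeline(
        TfidfVectorizer(),
        MLPClassifier(hidden_layer_sizes=(128, 64), max_iter=500, random_state=0),
    )
    triage_model.fit(reports, assignees)

    print(triage_model.predict(["Exception thrown during login"]))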

Path selection algorithm for multi-path system based on deep Q learning (Deep Q 학습 기반의 다중경로 시스템 경로 선택 알고리즘)

  • Chung, Byung Chang;Park, Heasook
    • Journal of the Korea Institute of Information and Communication Engineering / v.25 no.1 / pp.50-55 / 2021
  • A multi-path system is a system that utilizes various networks simultaneously, and it is expected to enhance the communication speed, reliability, and security of the network. In this paper, we focus on path selection in a multi-path system. To select the optimal path, we propose a deep reinforcement learning algorithm that is rewarded by the round-trip time (RTT) of each network. Unlike a multi-armed bandit model, deep Q learning is applied to cope with rapidly changing situations. Because the RTT data arrive with a delay, we also propose a compensation algorithm for the delayed reward. Moreover, we implement a testbed learning server, containing a distributed database and a TensorFlow module, to run the deep learning algorithm efficiently and evaluate the performance of the proposed algorithm. Simulations show that the proposed algorithm performs about 20% better than always selecting the lowest-RTT path.
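
The paper's learning server is not publicly specified, so the snippet below is only a schematic of the reward handling it describes: each path choice earns a negative-RTT reward, and because RTT measurements arrive late, each (state, action) decision is buffered until its reward can be attached. The state layout and units are assumptions.

    from collections import deque

    class DelayedRewardBuffer:
        """Holds (state, action) decisions until the matching RTT measurement arrives."""
        def __init__(self):
            self.pending = deque()   # decisions still waiting for RTT feedback
            self.replay = []         # completed transitions for deep Q-network training

        def record_decision(self, state, action):
            self.pending.append((state, action))

        def record_rtt(self, rtt_ms, next_state):
            # Reward is the negative RTT: a lower round-trip time means a higher reward.
            state, action = self.pending.popleft()
            self.replay.append((state, action, -rtt_ms, next_state))

    buf = DelayedRewardBuffer()
    buf.record_decision(state=(30.0, 45.0), action=0)     # per-path RTT estimates (assumed state)
    buf.record_rtt(rtt_ms=28.5, next_state=(28.5, 45.0))
    print(buf.replay)                                      # transitions ready for a DQN update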

A DASH System Using the A3C-based Deep Reinforcement Learning (A3C 기반의 강화학습을 사용한 DASH 시스템)

  • Choi, Minje;Lim, Kyungshik
    • IEMEK Journal of Embedded Systems and Applications / v.17 no.5 / pp.297-307 / 2022
  • The simple procedural segment selection algorithm commonly used in Dynamic Adaptive Streaming over HTTP (DASH) reveals severe weaknesses in providing high-quality streaming services over integrated mobile networks of various wired and wireless links. A major issue is how to properly cope with dynamically changing underlying network conditions, and the key is to make the segment selection algorithm much more adaptive to fluctuations in network traffic. This paper presents a system architecture that replaces the existing procedural segment selection algorithm with a deep reinforcement learning algorithm based on the Asynchronous Advantage Actor-Critic (A3C). The distributed A3C-based deep learning server is designed and implemented to allow multiple clients in different network conditions to stream videos simultaneously, collect learning data quickly, and learn asynchronously, so the learning speed improves greatly as the number of video clients increases. The performance analysis shows that the proposed algorithm outperforms both the conventional DASH algorithm and the Deep Q-Network algorithm in terms of the user's quality of experience and the speed of deep learning.
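
The paper does not print its reward function, so the sketch below uses a common QoE-style reward for adaptive streaming (bitrate utility minus rebuffering and quality-switch penalties) to illustrate what an A3C agent selecting DASH segments would optimise. The coefficients are assumptions, not values from the paper.

    def qoe_reward(bitrate_kbps, prev_bitrate_kbps, rebuffer_sec,
                   rebuffer_penalty=4.3, smoothness_penalty=1.0):
        """Illustrative per-segment QoE reward for an RL-based DASH client.

        Rewards higher bitrate and penalises stalling and abrupt quality
        switches; the weights here are assumed.
        """
        utility = bitrate_kbps / 1000.0                        # favour higher quality
        stall = rebuffer_penalty * rebuffer_sec                # penalise rebuffering time
        switch = smoothness_penalty * abs(bitrate_kbps - prev_bitrate_kbps) / 1000.0
        return utility - stall - switch

    # Example: a 2400 kbps segment after a 1200 kbps one, with 0.5 s of rebuffering.
    print(qoe_reward(2400, 1200, 0.5))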

Road Image Recognition Technology based on Deep Learning Using TIDL NPU in SoC Environment (SoC 환경에서 TIDL NPU를 활용한 딥러닝 기반 도로 영상 인식 기술)

  • Yunseon Shin;Juhyun Seo;Minyoung Lee;Injung Kim
    • Smart Media Journal / v.11 no.11 / pp.25-31 / 2022
  • Deep learning-based image processing is essential for autonomous vehicles. To process road images in real time in a System-on-Chip (SoC) environment, deep learning models must be executed on an NPU (Neural Processing Unit) specialized for deep learning operations. In this study, we ported seven open-source image processing deep learning models, originally developed on GPU servers, to the Texas Instruments Deep Learning (TIDL) NPU environment. Through performance evaluation and visualization, we confirmed that the ported models operate normally in the SoC virtual environment. This paper describes the problems that arose during the migration due to the limitations of the NPU environment and how we solved them, and thereby presents a reference case for developers and researchers who want to port deep learning models to SoC environments.
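
TIDL imports models through TI's own toolchain, and the paper's porting scripts are not public. As a hedged illustration of the first step of such a migration, the snippet below exports a GPU-trained PyTorch model to ONNX with a fixed input shape, the kind of constrained format NPU import tools typically consume; the model, resolution, and file name are placeholders.

    import torch
    import torchvision

    # Placeholder model standing in for one of the seven road-image models.
    model = torchvision.models.resnet18(weights=None)
    model.eval()

    # NPU toolchains generally require a fixed input resolution; 1x3x512x512 is assumed here.
    dummy_input = torch.randn(1, 3, 512, 512)

    torch.onnx.export(
        model, dummy_input, "road_model.onnx",
        input_names=["image"], output_names=["logits"],
        opset_version=11,          # older opsets are often safer for embedded importers
    )
    # The resulting .onnx file would then be fed to the TIDL import tools.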

Lifetime Extension Method for Non-Volatile Memory based Deep Learning System by analyzing Data Write Pattern (데이터 쓰기 패턴 분석을 통한 비휘발성 메모리 기반 딥러닝 시스템의 수명 연장 기법)

  • Choi, Juhee
    • Journal of the Semiconductor & Display Technology / v.21 no.3 / pp.1-6 / 2022
  • Modern computer systems usually have special hardware for the operations used in deep learning workloads, even in edge computing environments. Non-volatile memories (NVMs) have been considered as alternative memory storage because they consume little static energy and occupy a small area. However, there is an obstacle to adopting NVMs directly: an NVM cell has limited write endurance, so the lifetime of an NVM-based memory system is much shorter than that of a conventional memory system. To overcome this problem for deep learning systems, this paper proposes a novel method to extend the lifetime based on an analysis of deep learning workloads. If an incoming block contains more than a predefined number of frequently used values, the cacheline is defined as a write-friendly block, and during victim selection it has a lower probability of being chosen as the victim. The experimental results show that the lifetime is increased by about 50% and energy consumption is decreased by 3%, with little performance loss.
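
The proposed policy is described only at a high level; the sketch below illustrates the idea as stated: count how many words of an incoming cacheline hold frequently used values, mark the line as write-friendly when the count crosses a threshold, and prefer non-friendly lines as eviction victims. The frequent-value set and threshold are assumptions.

    # Assumed set of frequently used values in deep learning write traffic
    # (e.g. zero-like constants); the real set would come from workload profiling.
    FREQUENT_VALUES = {0x0000, 0xFFFF}
    THRESHOLD = 4          # assumed: frequent words needed to call a line write-friendly

    def is_write_friendly(cacheline_words):
        """A line holding many frequent values is marked write-friendly, per the abstract."""
        hits = sum(1 for w in cacheline_words if w in FREQUENT_VALUES)
        return hits >= THRESHOLD

    def select_victim(lines):
        """Prefer evicting lines that are NOT write-friendly; fall back to LRU order."""
        for idx, line in enumerate(lines):        # lines assumed ordered LRU-first
            if not is_write_friendly(line["words"]):
                return idx
        return 0                                  # all lines friendly: evict the LRU line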

Deep-learning based In-situ Monitoring and Prediction System for the Organic Light Emitting Diode

  • Park, Il-Hoo;Cho, Hyeran;Kim, Gyu-Tae
    • Journal of the Semiconductor & Display Technology / v.19 no.4 / pp.126-129 / 2020
  • We introduce a lifetime assessment technique that uses a deep learning algorithm with complex electrical parameters, such as resistivity, permittivity, and impedance parameters, as integrated indicators for predicting the degradation of the organic molecules. The evaluation system consists of a fully automated in-situ measurement system and a multilayer perceptron learning system with five hidden layers and 1011 perceptrons in each layer. Prediction accuracies are calculated and compared with respect to the physical features and learning hyperparameters. 62.5% of the full time-series data are used for training, with a prediction accuracy estimated as an r-squared value of 0.99; the remaining 37.5% are used for testing, with a prediction accuracy of 0.95. With k-fold cross-validation, the stability against instantaneous changes in the measured data is also improved.
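
As a hedged illustration of the described regressor (five hidden layers of 1011 units each), the Keras sketch below builds such an MLP for predicting a degradation indicator from electrical parameters. The input dimension, output, and training settings are assumptions for illustration only.

    import tensorflow as tf

    # MLP with five hidden layers of 1011 units, as stated in the abstract;
    # the feature count and training setup are assumed.
    N_FEATURES = 8    # e.g. resistivity, permittivity, impedance parameters (assumed count)

    model = tf.keras.Sequential(
        [tf.keras.Input(shape=(N_FEATURES,))]
        + [tf.keras.layers.Dense(1011, activation="relu") for _ in range(5)]
        + [tf.keras.layers.Dense(1)]    # predicted degradation / lifetime indicator
    )
    model.compile(optimizer="adam", loss="mse")
    # model.fit(x_train, y_train, epochs=100)   # 62.5% of the series for training, as in the paper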

Deep Learning System based on Morphological Neural Network (몰포러지 신경망 기반 딥러닝 시스템)

  • Choi, Jong-Ho
    • The Journal of Korea Institute of Information, Electronics, and Communication Technology / v.12 no.1 / pp.92-98 / 2019
  • In this paper, we propose a deep learning system based on a morphological neural network (MNN). The deep learning layers are a morphological operation layer, a pooling layer, a ReLU layer, and a fully connected layer. The operations used in the morphological layer include erosion, dilation, and edge detection. Unlike a CNN, the number of hidden layers and the number of kernels applied to each layer are limited in an MNN, which reduces processing time and suits VLSI chip design, making it possible to apply the MNN to various mobile embedded systems. The MNN performs edge and shape detection with a limited number of kernels. Through experiments on database images, we confirm that the MNN can be used as a deep learning system and evaluate its performance.
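
The MNN layers are built from basic morphological operations; the snippet below shows grayscale erosion, dilation, and a morphological-gradient edge map with scipy.ndimage, purely to illustrate the primitives named in the abstract. It is not the authors' layer implementation, and the image and structuring element are placeholders.

    import numpy as np
    from scipy import ndimage

    image = np.random.rand(64, 64)     # placeholder grayscale image
    kernel = np.ones((3, 3))           # flat 3x3 structuring element

    # Grayscale morphological primitives of the kind used inside an MNN layer.
    eroded = ndimage.grey_erosion(image, footprint=kernel)
    dilated = ndimage.grey_dilation(image, footprint=kernel)

    # Morphological gradient: a simple edge-detection operation (dilation - erosion).
    edges = dilated - eroded
    print(edges.min(), edges.max())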

Design and Implementation of a Face Authentication System (딥러닝 기반의 얼굴인증 시스템 설계 및 구현)

  • Lee, Seungik
    • Journal of Software Assessment and Valuation / v.16 no.2 / pp.63-68 / 2020
  • This paper proposes a face authentication system based on a deep learning framework. The proposed system consists of face region detection and feature extraction using a deep learning algorithm, and it performs face authentication using a joint Bayesian matrix learning algorithm. The performance of the proposed system is evaluated on various face databases, where each person is represented by two face images. The face authentication algorithm measures similarity by applying the joint Bayesian algorithm to 2048-dimensional features extracted by a deep neural network and computes the equal error rate at which face authentication fails. The results show that the proposed system, using deep learning and the joint Bayesian algorithm, achieves an equal error rate of 1.2%, a good performance compared to previous approaches.
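
The reported metric is the equal error rate (EER); the joint Bayesian scoring itself is more involved, so the sketch below only shows how an EER would be computed from genuine/impostor similarity scores, using cosine similarity on 2048-dimensional embeddings as a stand-in for the joint Bayesian score. The data here are random and purely illustrative.

    import numpy as np
    from sklearn.metrics import roc_curve

    def cosine_score(a, b):
        """Stand-in similarity between two 2048-d face embeddings."""
        return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

    def equal_error_rate(labels, scores):
        """EER: the operating point where false accept rate equals false reject rate."""
        fpr, tpr, _ = roc_curve(labels, scores)
        fnr = 1 - tpr
        idx = np.nanargmin(np.abs(fnr - fpr))
        return (fpr[idx] + fnr[idx]) / 2

    # Toy example with random embeddings (illustrative only).
    rng = np.random.default_rng(0)
    pairs = [(rng.standard_normal(2048), rng.standard_normal(2048)) for _ in range(200)]
    labels = rng.integers(0, 2, size=200)     # 1 = same person, 0 = different persons
    scores = [cosine_score(a, b) for a, b in pairs]
    print(equal_error_rate(labels, scores))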