• Title/Summary/Keyword: Deep Learning System

Development of Artificial Intelligence Janggi Game based on Machine Learning Algorithm (기계학습 알고리즘 기반의 인공지능 장기 게임 개발)

  • Jang, Myeonggyu;Kim, Youngho;Min, Dongyeop;Park, Kihyeon;Lee, Seungsoo;Woo, Chongwoo
    • Journal of Information Technology Services / v.16 no.4 / pp.137-148 / 2017
  • Research on artificial intelligence has expanded explosively across many fields since the advent of AlphaGo. In particular, the application of multi-layer neural networks such as deep learning, together with various machine learning algorithms, is receiving active attention. In this paper, we describe the development of an artificial-intelligence Janggi game based on a reinforcement learning algorithm and the MCTS (Monte Carlo Tree Search) algorithm, using accumulated game data. Previous artificial-intelligence games were mostly developed with the mini-max algorithm, which depends only on the results of tree search; they can neither use real data from expert games nor improve their performance through learning. In this paper, we suggest the following approach to overcome those limitations. First, we collect Janggi experts' game records, which reflect abundant real game results. Second, we build a graph structure from the game data, which removes redundant moves. Third, we apply the reinforcement learning algorithm and the MCTS algorithm to select the best next move. In addition, the learned graph is stored by object serialization to provide continuity across games. The experiments in this study were of two types. First, our system was matched against another AI-based system currently being served on the internet. Second, our system was matched against Janggi experts with winning records of more than 50%. Experimental results show that the winning rate of our system is significantly higher.
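
The move-selection step described here (searching a graph of candidate moves with MCTS) can be illustrated with a minimal UCT-style sketch. The `Node` class and the `legal_moves`, `apply`, and `rollout_result` methods below are hypothetical placeholders, not the authors' implementation, and the reward handling is simplified.

```python
import math

# Minimal UCT-style move selection over a game graph. GameState methods
# (legal_moves, apply, rollout_result) are assumed placeholders.
class Node:
    def __init__(self, state, parent=None):
        self.state = state
        self.parent = parent
        self.children = {}      # move -> Node
        self.visits = 0
        self.value = 0.0        # accumulated reward from simulations

def uct_score(child, parent_visits, c=1.4):
    if child.visits == 0:
        return float("inf")
    exploit = child.value / child.visits
    explore = c * math.sqrt(math.log(parent_visits) / child.visits)
    return exploit + explore

def select_best_move(root, n_simulations=1000):
    for _ in range(n_simulations):
        node = root
        # Selection: descend using UCT until a leaf is reached.
        while node.children:
            node = max(node.children.values(),
                       key=lambda ch: uct_score(ch, node.visits))
        # Expansion: add children for untried moves.
        for move in node.state.legal_moves():
            if move not in node.children:
                node.children[move] = Node(node.state.apply(move), parent=node)
        # Simulation: playout result from the expanded leaf.
        reward = node.state.rollout_result()
        # Backpropagation: update statistics along the path.
        while node is not None:
            node.visits += 1
            node.value += reward
            node = node.parent
    # Final choice: the most-visited child of the root.
    return max(root.children, key=lambda m: root.children[m].visits)
```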

An Automatic Access Registration System using Beacon and Deep Learning Technology (비콘과 딥러닝 기술을 활용한 전자출입명부 자동등록시스템)

  • Huh, Ji-Won;Ohm, Seong-Yong
    • The Journal of the Convergence on Culture Technology / v.6 no.4 / pp.807-812 / 2020
  • To prevent the nationwide spread of the COVID-19 virus, the government requires public facilities to use an electronic access registration system so that the spread can be effectively traced and managed. Initially, handwritten visitor logs caused considerable hassle; more recently a system that creates an electronic access list using QR codes, known as KI-Pass, has been widely used, but the procedure for generating a QR code is somewhat cumbersome. In this paper, we propose a new electronic access registration system that does not require QR codes. The system screens suspicious visitors with a mask-wearing discriminator implemented using deep learning technology and a non-contact thermometer package. In addition, by linking a beacon, a short-range wireless communication technology, with the visitor's smartphone application, basic information about the facility visitor is automatically registered with the KDCA through the server. The user access information registered on the server is encrypted, stored, and automatically destroyed after up to four weeks. This system is expected to be very effective not only in responding to the coronavirus, which is spreading rapidly worldwide, but also in preventing the spread of other new infectious diseases.
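
As a rough illustration of the mask-wearing discriminator, the sketch below fine-tunes a pretrained backbone for a two-class decision (mask / no mask) in PyTorch. The paper does not specify its network, so the backbone, preprocessing, and class ordering are assumptions.

```python
import torch
import torch.nn as nn
from torchvision import models, transforms

# Two-class mask classifier built on a pretrained backbone (assumed choice).
model = models.mobilenet_v2(weights="DEFAULT")
model.classifier[1] = nn.Linear(model.last_channel, 2)   # [no-mask, mask]

preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

def is_wearing_mask(pil_image):
    """Return True if the face image is classified as 'mask' (class 1, assumed)."""
    model.eval()
    with torch.no_grad():
        x = preprocess(pil_image).unsqueeze(0)            # add batch dimension
        probs = torch.softmax(model(x), dim=1)[0]
        return bool(probs[1] > probs[0])
```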

Anomaly Detections Model of Aviation System by CNN (합성곱 신경망(CNN)을 활용한 항공 시스템의 이상 탐지 모델 연구)

  • Hyun-Jae Im;Tae-Rim Kim;Jong-Gyu Song;Bum-Su Kim
    • Journal of Aerospace System Engineering / v.17 no.4 / pp.67-74 / 2023
  • Recently, Urban Air Mobility (UAM) has attracted attention as a transportation system of the future, and small drones also play a role in various industries. Failures in aviation systems can lead to crashes, resulting in significant property damage or loss of life; in the defense industry, where aviation systems are widely used, such failures can also lead to mission failure. This study therefore proposes an anomaly detection model that uses deep learning technology to detect anomalies in aviation systems, improving the reliability of development and production and preventing accidents during operation. Electrical current data collected from aviation systems in an extremely low-temperature environment was used as the training and evaluation data sets, and the network was implemented with a convolutional neural network, a deep learning technique commonly used for image recognition. In the extremely low-temperature environment, various failures occurred in the system's internal sensors and components, and singular points were observed in the current data. Training and evaluating the model on current data from both failure and normal cases confirmed that anomalies were detected with a recall of 98% or higher.
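
A convolutional network over windows of current measurements could look roughly like the sketch below; the window length, channel counts, and layer sizes are illustrative assumptions rather than the authors' architecture.

```python
import torch
import torch.nn as nn

# Small 1D CNN classifying current-signal windows as normal vs. anomalous.
class CurrentCNN(nn.Module):
    def __init__(self, window=256):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(1, 16, kernel_size=5, padding=2), nn.ReLU(), nn.MaxPool1d(2),
            nn.Conv1d(16, 32, kernel_size=5, padding=2), nn.ReLU(), nn.MaxPool1d(2),
        )
        self.head = nn.Sequential(
            nn.Flatten(),
            nn.Linear(32 * (window // 4), 2),   # logits: [normal, anomaly]
        )

    def forward(self, x):                        # x: (batch, 1, window)
        return self.head(self.features(x))

model = CurrentCNN()
scores = torch.softmax(model(torch.randn(8, 1, 256)), dim=1)  # dummy batch
```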

Design of a designated lane enforcement system based on deep learning (딥러닝 기반 지정차로제 단속 시스템 설계)

  • Bae, Ga-hyeong;Jang, Jong-wook;Jang, Sung-jin
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference / 2022.10a / pp.236-238 / 2022
  • Under the current Road Traffic Act, the 2020 amendment is in effect as a system that designates vehicle types for each lane in order to secure road-use efficiency and traffic safety. Comparing traffic-accident fatalities per 10,000 vehicles, Germany records significantly fewer deaths than Korea. The German autobahn, which imposes no general speed limit, suggests that speeding laws alone are not the answer to reducing Korea's accident rate; rather, the designated lane system, observed in accordance with the autobahn's keep-right principle, plays a major role in reducing traffic accidents. Based on this, we propose a traffic enforcement system to detect vehicles violating the designated lane system and improve the compliance rate. We develop a designated lane enforcement system that recognizes vehicle types with YOLOv5, a deep learning object recognition model, recognizes license plates and lanes with OpenCV, and stores the extracted data on a server to determine whether laws have been violated. Accordingly, we expect this to reduce the traffic accident rate by improving drivers' awareness and compliance.
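
The detection step (vehicle-type recognition with YOLOv5 on video frames read with OpenCV) might be prototyped as in the sketch below; lane and license-plate recognition and the server upload are omitted, and the input file name and class filter are placeholders.

```python
import cv2
import torch

# Load a pretrained YOLOv5 model via torch.hub and scan video frames for vehicles.
model = torch.hub.load("ultralytics/yolov5", "yolov5s", pretrained=True)

cap = cv2.VideoCapture("video.mp4")              # placeholder input
while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
    results = model(rgb)
    detections = results.pandas().xyxy[0]        # boxes, confidence, class name
    vehicles = detections[detections["name"].isin(["car", "truck", "bus"])]
    # ... map each vehicle's box to a lane and check the designated-lane rule
cap.release()
```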

Network Anomaly Detection Technologies Using Unsupervised Learning AutoEncoders (비지도학습 오토 엔코더를 활용한 네트워크 이상 검출 기술)

  • Kang, Koohong
    • Journal of the Korea Institute of Information Security & Cryptology / v.30 no.4 / pp.617-629 / 2020
  • To overcome the limitations of rule-based intrusion detection systems caused by changing Internet computing environments, the emergence of new services, and the creativity of attackers, network anomaly detection (NAD) using machine learning and deep learning technologies has received much attention. Most existing machine learning and deep learning approaches to NAD use supervised methods that learn from training data labeled 'normal' and 'attack'. This paper demonstrates the feasibility of applying an unsupervised AutoEncoder (AE) to NAD using data sets of network traffic collected without labeled responses. To verify the performance of the proposed AE model, we present experimental results in terms of accuracy, precision, recall, F1-score, and ROC AUC on the NSL-KDD training and test data sets. In particular, we derive a reference AE through a deep analysis of diverse AEs, varying hyper-parameters such as the number of layers and considering regularization and denoising effects. The reference model achieves binary-classification F1-scores of 90.4% and 89% on the KDDTest+ and KDDTest-21 test data sets, using a threshold set at the 82nd percentile of the AE reconstruction error on the training data set.
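
A minimal sketch of the reconstruction-error approach is shown below: an autoencoder is trained on traffic features, and records whose reconstruction error exceeds the 82nd-percentile threshold from the training set are flagged. The feature dimension and layer sizes are illustrative assumptions, not the reference model from the paper.

```python
import numpy as np
import torch
import torch.nn as nn

# Autoencoder over traffic feature vectors (dimension is an assumption).
class AE(nn.Module):
    def __init__(self, n_features=122):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(n_features, 32), nn.ReLU(),
                                     nn.Linear(32, 8), nn.ReLU())
        self.decoder = nn.Sequential(nn.Linear(8, 32), nn.ReLU(),
                                     nn.Linear(32, n_features))

    def forward(self, x):
        return self.decoder(self.encoder(x))

def fit_threshold(model, train_x, percentile=82):
    """Reconstruction-error threshold taken from the training set."""
    with torch.no_grad():
        err = ((model(train_x) - train_x) ** 2).mean(dim=1).numpy()
    return np.percentile(err, percentile)

def detect(model, x, threshold):
    """Return a boolean anomaly flag per record."""
    with torch.no_grad():
        err = ((model(x) - x) ** 2).mean(dim=1).numpy()
    return err > threshold
```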

A Real-Time Sound Recognition System with a Decision Logic of Random Forest for Robots (Random Forest를 결정로직으로 활용한 로봇의 실시간 음향인식 시스템 개발)

  • Song, Ju-man;Kim, Changmin;Kim, Minook;Park, Yongjin;Lee, Seoyoung;Son, Jungkwan
    • The Journal of Korea Robotics Society / v.17 no.3 / pp.273-281 / 2022
  • In this paper, we propose a robot sound recognition system that detects various sound events. The proposed system is designed to detect sound events in real time using a microphone mounted on a robot. To achieve real-time performance, we use a VGG11 model, composed of several convolutional layers, with a real-time normalization scheme. The VGG11 model is trained on a database augmented across 24 environment conditions (12 reverberation times and 2 signal-to-noise ratios). In addition, a decision logic based on the random forest algorithm is designed to generate event signals for robot applications; for specific classes of acoustic events, this logic performs better than using the network outputs alone. Experimental results show the performance of the proposed sound recognition system on a real-time device for robots.
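
The decision-logic idea (a random forest trained on the network's class scores to emit the final event signal) could be sketched as below; the feature layout and the placeholder training arrays are assumptions for illustration only.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Random forest on top of per-class scores produced by the acoustic network.
rf = RandomForestClassifier(n_estimators=100, random_state=0)

# Placeholders: per-clip network probabilities (10 assumed classes) and true labels.
X_train = np.random.rand(500, 10)
y_train = np.random.randint(0, 10, 500)
rf.fit(X_train, y_train)

def decide_event(network_probs):
    """Map one frame of network probabilities to a final event decision."""
    return rf.predict(network_probs.reshape(1, -1))[0]
```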

Anomaly Data Detection Using Machine Learning in Crowdsensing System (크라우드센싱 시스템에서 머신러닝을 이용한 이상데이터 탐지)

  • Kim, Mihui;Lee, Gihun
    • Journal of IKEEE / v.24 no.2 / pp.475-485 / 2020
  • Recently, crowdsensing systems have attracted attention: they provide new sensing services using real-time sensing data from users' devices, which already include sensors, without installing separate sensors. In a crowdsensing system, meaningless data may be submitted because of a user's operation error or a communication problem, and false data may be submitted to obtain compensation. The detection and removal of such abnormal data therefore determines the quality of the crowdsensing service. Previously proposed anomaly detection methods are not efficient in the fast-changing crowdsensing environment. This paper proposes an anomaly data detection method that uses machine learning to extract the characteristics of the continuously and rapidly changing sensing-data environment and models them with an appropriate algorithm. We show the performance and feasibility of the proposed system using a supervised deep learning binary classification model and an unsupervised autoencoder model.
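
As a rough illustration of the supervised branch, the sketch below trains a small binary classifier that labels each sensing record as normal or anomalous; the feature dimension, hidden sizes, and optimizer settings are assumptions, not the authors' model.

```python
import torch
import torch.nn as nn

# Small binary classifier over sensing-record features (sizes are assumptions).
classifier = nn.Sequential(
    nn.Linear(8, 16), nn.ReLU(),
    nn.Linear(16, 1),                      # logit for "anomalous"
)
loss_fn = nn.BCEWithLogitsLoss()
optimizer = torch.optim.Adam(classifier.parameters(), lr=1e-3)

def train_step(x, y):
    """One gradient step on a batch of sensing features x and 0/1 labels y."""
    optimizer.zero_grad()
    loss = loss_fn(classifier(x).squeeze(1), y.float())
    loss.backward()
    optimizer.step()
    return loss.item()
```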

Deep Learning-based Professional Image Interpretation Using Expertise Transplant (전문성 이식을 통한 딥러닝 기반 전문 이미지 해석 방법론)

  • Kim, Taejin;Kim, Namgyu
    • Journal of Intelligence and Information Systems / v.26 no.2 / pp.79-104 / 2020
  • Recently, as deep learning has attracted attention, its use is being considered as a way to solve problems in various fields. Deep learning is known to perform particularly well on unstructured data such as text, sound, and images, and many studies have proven its effectiveness. Owing to the remarkable development of text and image deep learning technology, interest in image captioning technology and its applications is rapidly increasing. Image captioning automatically generates relevant captions for a given image by handling image comprehension and text generation simultaneously. Despite its high entry barrier, since analysts must be able to process both image and text data, image captioning has established itself as one of the key fields of AI research owing to its wide applicability, and much work has been done to improve its performance in various respects. Recent research attempts to create advanced captions that not only describe an image accurately but also convey the information it contains more precisely. Despite these efforts, it is difficult to find research that interprets images from the perspective of domain experts rather than that of the general public. Even for the same image, the parts of interest may differ according to the professional field of the viewer, and the way the image is interpreted and expressed differs according to the level of expertise. The public tends to recognize an image from a holistic, general perspective, identifying the image's constituent objects and their relationships; domain experts, by contrast, focus on the specific elements needed to interpret the image based on their expertise. This implies that the meaningful parts of an image differ depending on the viewer's perspective, and image captioning needs to reflect this phenomenon. In this study, we therefore propose a method that generates domain-specialized captions for an image by utilizing the expertise of experts in the corresponding domain. Specifically, after pre-training on a large amount of general data, the domain expertise is transplanted through transfer learning with a small amount of expert data. However, simple transfer learning with expert data can cause another problem: simultaneous learning with captions of various characteristics may cause so-called 'inter-observation interference', which makes it difficult to learn each characteristic point of view purely. When learning from vast amounts of data, most of this interference is self-purified and has little impact on the results; in fine-tuning on a small amount of data, however, its impact can be relatively large. To solve this problem, we propose a novel 'Character-Independent Transfer-learning' that performs transfer learning independently for each characteristic.
To confirm the feasibility of the proposed methodology, we performed experiments using the results of pre-training on the MSCOCO dataset, which comprises 120,000 images and about 600,000 general captions. In addition, following the advice of an art therapist, about 300 pairs of images and expert captions were created and used for the expertise-transplantation experiments. The experiments confirmed that captions generated with the proposed methodology reflect the implanted expertise, whereas captions generated by learning on general data alone contain much content irrelevant to expert interpretation. In this paper, we propose a novel approach to specialized image interpretation and present a method that uses transfer learning to generate captions specialized to a specific domain. By applying the proposed methodology to expertise transplantation in various fields, we expect further research to address the lack of expert data and to improve the performance of image captioning.
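
The transplant step (pre-train on general captions, then fine-tune on a small expert set while keeping the general visual features) might be set up as in the sketch below, which pairs a frozen pretrained CNN encoder with a small LSTM decoder. The vocabulary size, hidden sizes, and overall architecture are assumptions, not the authors' model.

```python
import torch
import torch.nn as nn
from torchvision import models

# Assumed captioning model: frozen CNN encoder + trainable LSTM caption decoder.
class CaptionModel(nn.Module):
    def __init__(self, vocab_size=10000, hidden=512):
        super().__init__()
        backbone = models.resnet18(weights="DEFAULT")
        self.encoder = nn.Sequential(*list(backbone.children())[:-1])  # -> (B, 512, 1, 1)
        self.embed = nn.Embedding(vocab_size, hidden)
        self.decoder = nn.LSTM(hidden, 512, batch_first=True)
        self.out = nn.Linear(512, vocab_size)

    def forward(self, images, tokens):
        feats = self.encoder(images).flatten(1)     # (B, 512) image features
        h0 = feats.unsqueeze(0).contiguous()        # (1, B, 512) initial hidden state
        c0 = torch.zeros_like(h0)
        emb = self.embed(tokens)                    # (B, T, 512) token embeddings
        output, _ = self.decoder(emb, (h0, c0))
        return self.out(output)                     # (B, T, vocab) next-token logits

model = CaptionModel()
for p in model.encoder.parameters():                # keep general visual features
    p.requires_grad = False
optimizer = torch.optim.Adam(
    [p for p in model.parameters() if p.requires_grad], lr=1e-4)
# Fine-tuning would then loop over the small set of expert image/caption pairs.
```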

An Implementation of Feeding Time Detection System for Smart Fish Farm Using Deep Neural Network (심층신경망을 이용한 스마트 양식장용 사료 공급 시점 감지 시스템 구현)

  • Joo-Hyeon Jeon;Yoon-Ho Lee;Moon G. Joo
    • IEMEK Journal of Embedded Systems and Applications / v.18 no.1 / pp.19-24 / 2023
  • In traditional fish farming, workers have to observe all of the pools every day to feed at the right time, which causes tremendous stress on workers and wastes time. To solve this problem, we implemented an automatic feeding-time detection system using a deep neural network. The detection system consists of two steps: classifying whether feed remains in the pool and checking the pool's DO (dissolved oxygen). For the classification, a pretrained ResNet18 model is fine-tuned by transfer learning on a custom dataset. DO is obtained in real time from the DO sensor in the pool over HTTP. For better accuracy, the DO check proceeds only when the classifier has reported the absence of feed several times in a row; DO is then compared against a reference value set by the workers. These actions are performed automatically in UI programs developed with LabVIEW.
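
The classification step (a pretrained ResNet18 fine-tuned to decide whether feed remains in the pool) could be set up roughly as below; the preprocessing, class indices, and decision helper are assumptions for illustration.

```python
import torch
import torch.nn as nn
from torchvision import models, transforms

# Pretrained ResNet18 with its final layer replaced for two classes.
model = models.resnet18(weights="DEFAULT")
model.fc = nn.Linear(model.fc.in_features, 2)    # assumed: 0 = feed absent, 1 = present

preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])

def feed_absent(pil_image):
    """Return True when the classifier reports no remaining feed in the pool image."""
    model.eval()
    with torch.no_grad():
        logits = model(preprocess(pil_image).unsqueeze(0))
        return int(logits.argmax(dim=1)) == 0

# The feeding decision additionally requires several consecutive 'absent' results
# and a DO reading above the worker-set reference value.
```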

Autonomous-Driving Vehicle Learning Environments using Unity Real-time Engine and End-to-End CNN Approach (유니티 실시간 엔진과 End-to-End CNN 접근법을 이용한 자율주행차 학습환경)

  • Hossain, Sabir;Lee, Deok-Jin
    • The Journal of Korea Robotics Society / v.14 no.2 / pp.122-130 / 2019
  • Collecting rich but meaningful training data plays a key role in machine learning and deep learning research for self-driving vehicles. This paper gives a detailed overview of existing open-source simulators that could be used for training self-driving vehicles. After reviewing these simulators, we propose a new, effective approach to building a synthetic autonomous-vehicle simulation platform suitable for learning and training artificial-intelligence algorithms. Specifically, we develop a synthetic simulator with various realistic situations and weather conditions, which allows the autonomous shuttle to learn more realistic situations and handle unexpected events. The virtual environment mimics the behavior of a real shuttle vehicle in the physical world. Instead of conducting the whole training experiment in the real world, scenarios in 3D virtual worlds are created to compute the parameters and train the model. From the simulator, users can obtain data for various situations and use them for training. Flexible options are available to choose sensors, monitor the output, and implement any autonomous driving algorithm. Finally, we verify the effectiveness of the developed simulator by implementing an end-to-end CNN algorithm for training a self-driving shuttle.
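
An end-to-end CNN that maps a camera frame directly to a steering command, in the spirit of the approach trained on the simulator data, might look like the sketch below; the input resolution and layer sizes follow a common PilotNet-style layout and are illustrative assumptions, not the authors' network.

```python
import torch
import torch.nn as nn

# End-to-end regression network: camera frame in, steering angle out.
class EndToEndDriver(nn.Module):
    def __init__(self):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(3, 24, 5, stride=2), nn.ReLU(),
            nn.Conv2d(24, 36, 5, stride=2), nn.ReLU(),
            nn.Conv2d(36, 48, 5, stride=2), nn.ReLU(),
            nn.Conv2d(48, 64, 3), nn.ReLU(),
            nn.Conv2d(64, 64, 3), nn.ReLU(),
        )
        self.fc = nn.Sequential(
            nn.Flatten(),
            nn.Linear(64 * 1 * 18, 100), nn.ReLU(),
            nn.Linear(100, 50), nn.ReLU(),
            nn.Linear(50, 1),                 # predicted steering angle
        )

    def forward(self, x):                     # x: (batch, 3, 66, 200) camera frame
        return self.fc(self.conv(x))

model = EndToEndDriver()
steering = model(torch.randn(1, 3, 66, 200))  # dummy frame from the simulator
```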