• Title/Summary/Keyword: Image Deep Learning

Search Results: 1,797

Trends of Plant Image Processing Technology (이미지 기반의 식물 인식 기술 동향)

  • Yoon, Y.C.;Sang, J.H.;Park, S.M.
    • Electronics and Telecommunications Trends / v.33 no.4 / pp.54-60 / 2018
  • In this paper, we analyze the trends of deep-learning based plant data processing technologies. In recent years, deep-learning technology has been widely applied to various AI tasks, such as vision (image classification, image segmentation, and so on) and natural language processing, because it shows high performance on such tasks. The deep-learning method has also been applied to plant data processing tasks and shows significant performance. We analyze and show how the deep-learning method is applied to plant data processing tasks and related industries.

Introduction to convolutional neural network using Keras; an understanding from a statistician

  • Lee, Hagyeong;Song, Jongwoo
    • Communications for Statistical Applications and Methods / v.26 no.6 / pp.591-610 / 2019
  • Deep learning is one of the machine learning methods for finding features in huge data sets using non-linear transformations. It is now commonly used for supervised learning in many fields. In particular, the Convolutional Neural Network (CNN) has been the leading technique for image classification since 2012. For users who consider deep learning models for real-world applications, Keras is a popular API for neural networks written in Python that can also be used in R. We examine the parameter estimation procedures of deep neural networks and the structure of CNN models from basics to advanced techniques. We also try to identify the crucial steps in a CNN that can improve image classification performance on the CIFAR10 dataset using Keras. We found that several stacks of convolutional layers and batch normalization could improve prediction performance. We also compared image classification performance with other machine learning methods, including K-Nearest Neighbors (K-NN), Random Forest, and XGBoost, on both the MNIST and CIFAR10 datasets.
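The abstract's finding, that stacks of convolutional layers with batch normalization help on CIFAR10, can be illustrated with a minimal Keras sketch. The layer widths, depth, and training settings below are illustrative assumptions, not the paper's exact configuration.

```python
# Minimal Keras sketch: CIFAR-10 classifier with stacked convolutions
# and batch normalization (illustrative settings only).
import tensorflow as tf
from tensorflow.keras import layers, models

(x_train, y_train), (x_test, y_test) = tf.keras.datasets.cifar10.load_data()
x_train, x_test = x_train / 255.0, x_test / 255.0

def conv_block(x, filters):
    # Two convolutions followed by batch normalization and pooling.
    x = layers.Conv2D(filters, 3, padding="same", activation="relu")(x)
    x = layers.Conv2D(filters, 3, padding="same", activation="relu")(x)
    x = layers.BatchNormalization()(x)
    return layers.MaxPooling2D()(x)

inputs = layers.Input(shape=(32, 32, 3))
x = conv_block(inputs, 32)
x = conv_block(x, 64)
x = layers.Flatten()(x)
x = layers.Dense(128, activation="relu")(x)
outputs = layers.Dense(10, activation="softmax")(x)

model = models.Model(inputs, outputs)
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.fit(x_train, y_train, epochs=10, validation_data=(x_test, y_test))
```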

Garbage Dumping Detection System using Articular Point Deep Learning (관절점 딥러닝을 이용한 쓰레기 무단 투기 적발 시스템)

  • MIN, Hye Won;LEE, Hyoung Gu
    • Journal of Korea Multimedia Society / v.24 no.11 / pp.1508-1517 / 2021
  • In CCTV environments, a large amount of training image data is required to monitor illegal dumping of garbage with typical image-based object detection using deep-learning methods. In this paper, we propose a system to monitor unauthorized dumping of garbage by learning the articular (joint) points of a person from only a small number of images, rather than using the raw images directly for deep learning. In experiments, the proposed system achieved 74.97% garbage-dumping detection performance with only a relatively small amount of image data in CCTV environments.
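As a rough illustration of the keypoint-based idea (joint coordinates rather than raw images as classifier input), the sketch below assumes pose keypoints have already been extracted by an external pose estimator; the skeleton layout, sequence length, and classifier architecture are hypothetical and not taken from the paper.

```python
# Hypothetical sketch: classify dumping behaviour from pose-keypoint
# sequences instead of raw frames. Assumes an external pose estimator
# already produced (x, y) coordinates for 17 joints per frame.
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers, models

NUM_JOINTS = 17   # COCO-style skeleton (assumption)
SEQ_LEN = 30      # frames per sample (assumption)

model = models.Sequential([
    layers.Input(shape=(SEQ_LEN, NUM_JOINTS * 2)),   # flattened (x, y) per frame
    layers.LSTM(64),
    layers.Dense(32, activation="relu"),
    layers.Dense(1, activation="sigmoid"),            # dumping / not dumping
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])

# Dummy arrays standing in for extracted keypoint sequences and labels.
keypoints = np.random.rand(8, SEQ_LEN, NUM_JOINTS * 2).astype("float32")
labels = np.random.randint(0, 2, size=(8, 1))
model.fit(keypoints, labels, epochs=1)
```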

Development of deep learning-based rock classifier for elementary, middle and high school education (초중고 교육을 위한 딥러닝 기반 암석 분류기 개발)

  • Park, Jina;Yong, Hwan-Seung
    • Journal of Software Assessment and Valuation / v.15 no.1 / pp.63-70 / 2019
  • As interest in image recognition with deep learning increases, a great deal of research on it has been conducted. In this study, we propose a system for classifying rocks from images of the 18 rock types (6 igneous, 6 metamorphic, and 6 sedimentary) addressed in the high school curriculum, using a CNN model based on TensorFlow, an open-source deep-learning framework. We developed a classifier that distinguishes rocks by learning from rock images and confirmed its classification performance. Finally, through the implemented mobile application, students can use the classifier as a learning tool in the classroom or during field experiences.
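A minimal TensorFlow/Keras sketch of an 18-class image classifier of the kind described above is shown below; the directory layout, input resolution, and the use of a pretrained MobileNetV2 backbone are assumptions for illustration, not the study's actual setup.

```python
# Minimal sketch of an 18-class rock image classifier in TensorFlow/Keras.
# Directory layout ("rocks/<class_name>/*.jpg"), image size, and the
# pretrained MobileNetV2 backbone are illustrative assumptions.
import tensorflow as tf

train_ds = tf.keras.utils.image_dataset_from_directory(
    "rocks/", image_size=(224, 224), batch_size=32)

base = tf.keras.applications.MobileNetV2(
    input_shape=(224, 224, 3), include_top=False, weights="imagenet")
base.trainable = False  # use the pretrained backbone as a fixed feature extractor

model = tf.keras.Sequential([
    tf.keras.layers.Rescaling(1.0 / 127.5, offset=-1),  # MobileNetV2 expects [-1, 1]
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(18, activation="softmax"),      # 18 rock classes
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.fit(train_ds, epochs=5)
```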

Implementation of Image Semantic Segmentation on Android Device using Deep Learning (딥-러닝을 활용한 안드로이드 플랫폼에서의 이미지 시맨틱 분할 구현)

  • Lee, Yong-Hwan;Kim, Youngseop
    • Journal of the Semiconductor & Display Technology / v.19 no.2 / pp.88-91 / 2020
  • Image segmentation is the task of partitioning an image into multiple sets of pixels based on some characteristics. The objective is to simplify the image into a representation that is more meaningful and easier to analyze. In this paper, we apply deep learning to pre-train the model and implement an algorithm that performs image segmentation in real time by extracting frames from the stream input of an Android device. Based on the open-source DeepLab-v3+ implementation in TensorFlow, some convolution filters are modified to improve real-time operation on the Android platform.
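The per-frame inference loop described above can be sketched in Python with a TensorFlow Lite interpreter, analogous to what runs on the device; the model file name, float input format, and output layout are assumptions, and the actual Android integration is not shown.

```python
# Sketch of per-frame semantic segmentation with a TFLite-converted
# DeepLab-style model. Model file, float32 input, and (h, w, classes)
# output layout are assumptions.
import numpy as np
import tensorflow as tf

interpreter = tf.lite.Interpreter(model_path="deeplabv3plus.tflite")
interpreter.allocate_tensors()
inp = interpreter.get_input_details()[0]
out = interpreter.get_output_details()[0]

def segment_frame(frame_rgb):
    """frame_rgb: uint8 array of shape (H, W, 3) taken from the camera stream."""
    h, w = inp["shape"][1], inp["shape"][2]
    resized = tf.image.resize(frame_rgb, (h, w)) / 255.0
    interpreter.set_tensor(inp["index"],
                           resized[np.newaxis].numpy().astype(np.float32))
    interpreter.invoke()
    logits = interpreter.get_tensor(out["index"])[0]   # (h, w, num_classes)
    return np.argmax(logits, axis=-1)                  # per-pixel class labels
```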

Digital Twin and Visual Object Tracking using Deep Reinforcement Learning (심층 강화학습을 이용한 디지털트윈 및 시각적 객체 추적)

  • Park, Jin Hyeok;Farkhodov, Khurshedjon;Choi, Piljoo;Lee, Suk-Hwan;Kwon, Ki-Ryong
    • Journal of Korea Multimedia Society / v.25 no.2 / pp.145-156 / 2022
  • Nowadays, object tracking models for hardware applications are increasingly required to handle various unpredictable tracking environments with multifunctional algorithms. In this paper, we propose a virtual city environment built with AirSim (Aerial Informatics and Robotics Simulation - AirSim, CityEnvironment) and apply a DQN (Deep Q-Network) deep reinforcement learning model in this virtual environment. The proposed object-tracking DQN observes the environment by receiving continuous images captured by the virtual environment simulation system as input and controls the operation of a virtual drone. The deep reinforcement learning model is pre-trained on various existing continuous image sets. Since these existing image sets are data from real environments and objects, the system is implemented in 3D to track virtual environments and the moving objects within them.
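A minimal sketch of a DQN that maps image observations to discrete drone-control actions is given below; the network sizes, the action set, and the hyperparameters are assumptions, and the AirSim environment integration and replay buffer are omitted.

```python
# Minimal DQN sketch: image observations in, Q-values over discrete
# drone-control actions out. Sizes and hyperparameters are illustrative.
import tensorflow as tf
from tensorflow.keras import layers, models

NUM_ACTIONS = 7   # e.g. forward/back/left/right/up/down/hover (assumption)

def build_q_network():
    return models.Sequential([
        layers.Input(shape=(84, 84, 3)),
        layers.Conv2D(32, 8, strides=4, activation="relu"),
        layers.Conv2D(64, 4, strides=2, activation="relu"),
        layers.Flatten(),
        layers.Dense(256, activation="relu"),
        layers.Dense(NUM_ACTIONS),      # one Q-value per action
    ])

q_net = build_q_network()
target_net = build_q_network()
target_net.set_weights(q_net.get_weights())
optimizer = tf.keras.optimizers.Adam(1e-4)
gamma = 0.99

def train_step(states, actions, rewards, next_states, dones):
    # Standard DQN target: r + gamma * max_a' Q_target(s', a') for non-terminal s'.
    next_q = tf.reduce_max(target_net(next_states), axis=1)
    targets = rewards + gamma * next_q * (1.0 - dones)
    with tf.GradientTape() as tape:
        q_values = q_net(states)
        chosen = tf.reduce_sum(q_values * tf.one_hot(actions, NUM_ACTIONS), axis=1)
        loss = tf.reduce_mean(tf.square(targets - chosen))
    grads = tape.gradient(loss, q_net.trainable_variables)
    optimizer.apply_gradients(zip(grads, q_net.trainable_variables))
    return loss
```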

Image analysis technology with deep learning for monitoring the tidal flat ecosystem -Focused on monitoring the Ocypode stimpsoni Ortmann, 1897 in the Sindu-ri tidal flat - (갯벌 생태계 모니터링을 위한 딥러닝 기반의 영상 분석 기술 연구 - 신두리 갯벌 달랑게 모니터링을 중심으로 -)

  • Kim, Dong-Woo;Lee, Sang-Hyuk;Yu, Jae-Jin;Son, Seung-Woo
    • Journal of the Korean Society of Environmental Restoration Technology / v.24 no.6 / pp.89-96 / 2021
  • In this study, a deep-learning image analysis model was established and validated for AI-based monitoring of the tidal flat ecosystem, focusing on the marine protected species Ocypode stimpsoni and its habitat. The data were collected using an unmanned aerial vehicle, and the U-Net model was applied as the deep-learning model. The trained model achieved an accuracy of about 0.76 for Ocypode stimpsoni and about 0.8 for their burrows, the latter being higher. By feeding orthomosaic images of the entire study area into the trained model to analyze the distribution of crabs and burrows, it was confirmed that 1,943 Ocypode stimpsoni and 2,807 burrows were distributed in the study area. Through this study, the feasibility of using deep-learning image analysis for monitoring the tidal flat ecosystem was confirmed, and the approach is expected to be applicable to tidal flat ecosystem monitoring by expanding the monitoring sites and target species in the future.
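A small U-Net-style architecture of the kind used above can be sketched as follows; the depth, filter counts, input size, and three-class output (background, crab, burrow) are illustrative assumptions rather than the study's exact configuration.

```python
# Minimal U-Net-style sketch for segmenting UAV imagery into background,
# crab, and burrow classes (illustrative sizes only).
import tensorflow as tf
from tensorflow.keras import layers, models

def small_unet(input_shape=(256, 256, 3), num_classes=3):
    inputs = layers.Input(shape=input_shape)
    # Encoder
    c1 = layers.Conv2D(32, 3, padding="same", activation="relu")(inputs)
    p1 = layers.MaxPooling2D()(c1)
    c2 = layers.Conv2D(64, 3, padding="same", activation="relu")(p1)
    p2 = layers.MaxPooling2D()(c2)
    # Bottleneck
    b = layers.Conv2D(128, 3, padding="same", activation="relu")(p2)
    # Decoder with skip connections
    u2 = layers.Concatenate()([layers.UpSampling2D()(b), c2])
    c3 = layers.Conv2D(64, 3, padding="same", activation="relu")(u2)
    u1 = layers.Concatenate()([layers.UpSampling2D()(c3), c1])
    c4 = layers.Conv2D(32, 3, padding="same", activation="relu")(u1)
    outputs = layers.Conv2D(num_classes, 1, activation="softmax")(c4)
    return models.Model(inputs, outputs)

model = small_unet()
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
```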

Road Image Recognition Technology based on Deep Learning Using TIDL NPU in SoC Environment (SoC 환경에서 TIDL NPU를 활용한 딥러닝 기반 도로 영상 인식 기술)

  • Yunseon Shin;Juhyun Seo;Minyoung Lee;Injung Kim
    • Smart Media Journal / v.11 no.11 / pp.25-31 / 2022
  • Deep learning-based image processing is essential for autonomous vehicles. To process road images in real time in a System-on-Chip (SoC) environment, we need to execute deep learning models on an NPU (Neural Processing Unit) specialized for deep learning operations. In this study, we ported seven open-source image-processing deep learning models, originally developed on GPU servers, to the Texas Instruments Deep Learning (TIDL) NPU environment. We confirmed through performance evaluation and visualization that the ported models operate normally in the SoC virtual environment. This paper introduces the problems that occurred during migration due to the limitations of the NPU environment and how we solved them, thereby presenting a reference case for developers and researchers who want to port deep learning models to SoC environments.
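The TIDL import flow itself is tool-specific, but a common first step when porting a GPU-trained model toward an embedded NPU toolchain is exporting it to a fixed-shape ONNX graph. The sketch below shows only that generic step for a stand-in torchvision model; the model choice, input resolution, and opset are assumptions and the actual TIDL compilation is not shown.

```python
# Hypothetical first step of porting a GPU-trained model toward an NPU
# toolchain such as TIDL: export a fixed-shape ONNX graph. The model,
# resolution, and opset are illustrative; TIDL import itself is not shown.
import torch
import torchvision

model = torchvision.models.mobilenet_v3_small(weights=None)  # stand-in network
model.eval()

dummy = torch.randn(1, 3, 224, 224)   # fixed input shape expected downstream
torch.onnx.export(
    model, dummy, "road_model.onnx",
    input_names=["input"], output_names=["output"],
    opset_version=13,
)
```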

Image-based rainfall prediction from a novel deep learning method

  • Byun, Jongyun;Kim, Jinwon;Jun, Changhyun
    • Proceedings of the Korea Water Resources Association Conference / 2021.06a / pp.183-183 / 2021
  • Deep learning methods and their applications have become an essential part of prediction and modeling in water-related research areas, including hydrological processes, climate change, etc. It is known that applying deep learning increases the availability of data sources in hydrology, which shows its usefulness in the analysis of precipitation, runoff, groundwater level, evapotranspiration, and so on. However, microclimate analysis and prediction with deep learning methods remain limited because of the deficiency of gauge-based data and the shortcomings of existing technologies. In this study, a real-time rainfall prediction model was developed from a sky image data set with convolutional neural networks (CNNs). The daily image data were collected at Chung-Ang University and Korea University. For high accuracy, the proposed model considers data classification, image processing, and ratio adjustment of no-rain data. Rainfall predictions were compared with minutely rainfall data at rain gauge stations close to the image sensors. The results indicate that the proposed model could provide interpolation for the current rainfall observation system and has large potential to fill observation gaps. Information from small-scale areas leads to advances in accurate weather forecasting and hydrological modeling at the micro scale.
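A minimal sketch of the image-to-rainfall mapping described above is given below as a CNN regression model; the input size, layer sizes, and the choice of a single scalar rainfall target are assumptions for illustration, not the study's configuration.

```python
# Minimal sketch: CNN regression from a single sky image to a rainfall
# amount. Input size, layers, and target definition are illustrative.
import tensorflow as tf
from tensorflow.keras import layers, models

model = models.Sequential([
    layers.Input(shape=(128, 128, 3)),        # sky image from a fixed camera
    layers.Conv2D(32, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Conv2D(64, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Flatten(),
    layers.Dense(64, activation="relu"),
    layers.Dense(1),                           # predicted rainfall amount
])
model.compile(optimizer="adam", loss="mse", metrics=["mae"])
```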

A Study on Deep Learning Structure of Multi-Block Method for Improving Face Recognition (얼굴 인식률 향상을 위한 멀티 블록 방식의 딥러닝 구조에 관한 연구)

  • Ra, Seung-Tak;Kim, Hong-Jik;Lee, Seung-Ho
    • Journal of IKEEE / v.22 no.4 / pp.933-940 / 2018
  • In this paper, we propose a multi-block deep learning structure for improving face recognition rate. The recognition structure of the proposed deep learning consists of three steps: multi-blocking of the input image, multi-block selection by facial feature numerical analysis, and perform deep learning of the selected multi-block. First, the input image is divided into 4 blocks by multi-block. Secondly, in the multi-block selection by feature analysis, the feature values of the quadruple multi-blocks are checked, and only the blocks with many features are selected. The third step is to perform deep learning with the selected multi-block, and the result is obtained as an efficient block with high feature value by performing recognition on the deep learning model in which the selected multi-block part is learned. To evaluate the performance of the proposed deep learning structure, we used CAS-PEAL face database. Experimental results show that the proposed multi-block deep learning structure shows 2.3% higher face recognition rate than the existing deep learning structure.