• Title/Summary/Keyword: Image deep learning


Applicability Evaluation for Discharge Model Using Curve Number and Convolution Neural Network (Curve Number 및 Convolution Neural Network를 이용한 유출모형의 적용성 평가)

  • Song, Chul Min; Lee, Kwang Hyun
    • Ecology and Resilient Infrastructure, v.7 no.2, pp.114-125, 2020
  • Although various artificial neural networks have been developed, most discharge models in previous studies have been built on deep neural networks. This study aimed to develop a discharge model using a convolutional neural network (CNN), an architecture typically used for classification problems, and to evaluate its applicability. Photographs used as CNN input could not clearly capture the characteristics of the study area or of precipitation, so the model had to rely on numerical images instead. To solve this problem, the NRCS curve number (CN) was used to generate the images fed to the model. The generated images showed good applicability as input data, and a new application was proposed for the CN, which had previously been used only for discharge prediction. As a result of training, the CNN trained and generalized stably. Comparison between actual and predicted values yielded an R² of 0.79, which is relatively high. The model also performed well in terms of the Pearson correlation coefficient (0.84), the Nash-Sutcliffe efficiency (NSE, 0.63), and the root mean square error (24.54 m³/s).
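Where the abstract lists its goodness-of-fit statistics, a short sketch may help. The following is a minimal illustration, not the paper's code, of how R², the Pearson coefficient, NSE, and RMSE are computed from observed and simulated discharge series; the example values are synthetic.

```python
# Minimal sketch of the goodness-of-fit metrics the abstract reports.
import numpy as np

def evaluate_discharge(obs: np.ndarray, sim: np.ndarray) -> dict:
    """Compare observed and simulated discharge series (m^3/s)."""
    residual = obs - sim
    rmse = np.sqrt(np.mean(residual ** 2))           # root mean square error
    nse = 1.0 - np.sum(residual ** 2) / np.sum((obs - obs.mean()) ** 2)
    r = np.corrcoef(obs, sim)[0, 1]                  # Pearson correlation
    return {"RMSE": rmse, "NSE": nse, "Pearson_r": r, "R2": r ** 2}

# Illustrative synthetic series, not data from the study:
obs = np.array([10.0, 35.0, 80.0, 120.0, 60.0])
sim = np.array([12.0, 30.0, 85.0, 110.0, 65.0])
print(evaluate_discharge(obs, sim))
```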

Semantic Segmentation of the Habitats of Ecklonia Cava and Sargassum in Undersea Images Using HRNet-OCR and Swin-L Models (HRNet-OCR과 Swin-L 모델을 이용한 조식동물 서식지 수중영상의 의미론적 분할)

  • Kim, Hyungwoo; Jang, Seonwoong; Bak, Suho; Gong, Shinwoo; Kwak, Jiwoo; Kim, Jinsoo; Lee, Yangwon
    • Korean Journal of Remote Sensing, v.38 no.5_3, pp.913-924, 2022
  • In this paper, we present the construction of an undersea image database for the habitats of Ecklonia cava and Sargassum, and an experiment in semantic segmentation using state-of-the-art (SOTA) models such as High Resolution Network-Object Contextual Representation (HRNet-OCR) and Shifted Windows-L (Swin-L). The results showed that our segmentation models outperformed previous experiments, with a 29% increase in mean intersection over union (mIoU). The Swin-L model produced better performance for every class; in particular, the Ecklonia cava class, which had few training samples, was also extracted appropriately. Owing to the Transformer backbone, target objects and backgrounds were distinguished better than with legacy models. A larger database, currently under construction, will enable further accuracy improvements and can serve as a deep learning database for undersea imagery.
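As a companion to the reported mIoU gain, here is a minimal sketch of how mean IoU is computed from predicted and ground-truth segmentation masks; the class layout and example masks are hypothetical, not from the paper's database.

```python
# Minimal sketch of mean intersection over union for semantic segmentation.
import numpy as np

def mean_iou(pred: np.ndarray, gt: np.ndarray, num_classes: int) -> float:
    ious = []
    for c in range(num_classes):
        inter = np.logical_and(pred == c, gt == c).sum()
        union = np.logical_or(pred == c, gt == c).sum()
        if union > 0:                  # skip classes absent from both masks
            ious.append(inter / union)
    return float(np.mean(ious))

# Hypothetical 3-class example (0: background, 1: Ecklonia cava, 2: Sargassum)
gt   = np.array([[0, 1, 1], [2, 2, 0], [0, 1, 2]])
pred = np.array([[0, 1, 0], [2, 2, 0], [0, 1, 1]])
print(f"mIoU = {mean_iou(pred, gt, num_classes=3):.3f}")
```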

Video Analysis System for Action and Emotion Detection by Object with Hierarchical Clustering based Re-ID (계층적 군집화 기반 Re-ID를 활용한 객체별 행동 및 표정 검출용 영상 분석 시스템)

  • Lee, Sang-Hyun; Yang, Seong-Hun; Oh, Seung-Jin; Kang, Jinbeom
    • Journal of Intelligence and Information Systems, v.28 no.1, pp.89-106, 2022
  • Recently, the amount of video data collected from smartphones, CCTVs, black boxes, and high-definition cameras has increased rapidly, and with it the requirements for analysis and utilization. Because many industries lack the skilled manpower to analyze videos, machine learning and artificial intelligence are actively used to assist. Demand has therefore also risen rapidly for computer vision technologies such as object detection and tracking, action detection, emotion detection, and re-identification (Re-ID). However, object detection and tracking faces many conditions that degrade performance, such as an object re-appearing after leaving the recording location, and occlusion. Accordingly, action and emotion detection models built on top of detection and tracking also have difficulty extracting data for each object. In addition, deep learning architectures composed of multiple models suffer performance degradation from bottlenecks and a lack of optimization. In this study, we propose a video analysis system consisting of a YOLOv5-based DeepSORT object tracking model, a SlowFast-based action recognition model, a Torchreid-based Re-ID model, and AWS Rekognition, an emotion recognition service. The proposed system uses single-linkage hierarchical clustering for Re-ID, together with processing methods that maximize hardware throughput. It is more accurate than a re-identification model using simple metrics, achieves near real-time processing, and prevents tracking failures caused by object departure and re-appearance, occlusion, and similar conditions. By continuously linking the action and facial emotion detection results of each object to the same identity, videos can be analyzed efficiently. The re-identification model extracts a feature vector from the bounding box of each object image detected by the tracking model in every frame, and applies single-linkage hierarchical clustering over the feature vectors from past frames to identify objects whose tracks were lost. Through this process, an object that re-appears after leaving the scene, or after occlusion, can be re-tracked, so the action and facial emotion detection results of a newly recognized object can be linked to those of the object that appeared in the past. To improve processing performance, we introduce a per-object bounding box queue and a feature queue, which reduce RAM requirements while maximizing GPU throughput. We also introduce the IoF (Intersection over Face) algorithm, which links facial emotions recognized through AWS Rekognition with object tracking information. The academic significance of this study is that a two-stage re-identification model can run in real time, even in the high-cost setting of simultaneous action and facial emotion detection, without the accuracy loss incurred when simple metrics are used to reach real-time performance. The practical implication is that industrial fields that require action and facial emotion detection, but struggle with object tracking failures, can analyze videos effectively with the proposed system.
The proposed system, with its high re-identification accuracy and processing performance, can be used in fields such as intelligent monitoring, observation services, and behavioral or psychological analysis services, where integrating tracking information with extracted metadata creates great industrial and business value. In the future, to measure object tracking performance more precisely, experiments should be conducted on the MOT Challenge dataset used by many international conferences. We will also investigate the cases the IoF algorithm cannot handle in order to develop a complementary algorithm, and we plan further research applying this system to datasets from various fields related to intelligent video analysis.
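The two mechanisms named in the abstract, single-linkage hierarchical clustering over Re-ID features and the IoF overlap ratio, can be sketched compactly. The following is our reading of the abstract, not the authors' code; the distance threshold, feature dimensionality, and (x1, y1, x2, y2) box format are assumptions.

```python
# Sketch: cluster per-frame appearance features so that detections in the same
# cluster are treated as the same identity, plus an IoF-style overlap ratio
# for attaching a face box to an object box.
import numpy as np
from scipy.cluster.hierarchy import fcluster, linkage

def group_identities(features: np.ndarray, threshold: float) -> np.ndarray:
    """Single-linkage clustering over cosine distances between feature vectors."""
    z = linkage(features, method="single", metric="cosine")
    return fcluster(z, t=threshold, criterion="distance")

def iof(face_box, obj_box) -> float:
    """Intersection area divided by the face-box area (assumed definition)."""
    x1 = max(face_box[0], obj_box[0]); y1 = max(face_box[1], obj_box[1])
    x2 = min(face_box[2], obj_box[2]); y2 = min(face_box[3], obj_box[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    face_area = (face_box[2] - face_box[0]) * (face_box[3] - face_box[1])
    return inter / face_area if face_area > 0 else 0.0

feats = np.random.rand(6, 512)                   # 6 detections, 512-d features
print(group_identities(feats, threshold=0.3))    # cluster id per detection
print(iof((10, 10, 30, 30), (0, 0, 100, 100)))   # face inside object -> 1.0
```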

Dual CNN Structured Sound Event Detection Algorithm Based on Real Life Acoustic Dataset (실생활 음향 데이터 기반 이중 CNN 구조를 특징으로 하는 음향 이벤트 인식 알고리즘)

  • Suh, Sangwon; Lim, Wootaek; Jeong, Youngho; Lee, Taejin; Kim, Hui Yong
    • Journal of Broadcast Engineering, v.23 no.6, pp.855-865, 2018
  • Sound event detection is a research area that models human auditory cognition by recognizing events in an environment containing multiple acoustic events and determining the onset and offset time of each. DCASE, a research community for acoustic scene classification and sound event detection, runs challenges to encourage researcher participation and stimulate sound event detection research. However, the dataset provided by the DCASE Challenge is small compared to ImageNet, the representative dataset for visual object recognition, and few open acoustic datasets exist. In this study, sound events that can occur indoors and outdoors were collected on a larger scale and annotated to construct a dataset. Furthermore, to improve performance on the sound event detection task, we developed a dual-CNN sound event detection system by adding a supplementary neural network, which determines the presence of sound events, to a convolutional neural network. Finally, we conducted a comparative experiment against the baseline systems of DCASE 2016 and 2017.
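A dual-CNN arrangement of the kind the abstract describes might look like the sketch below: a main branch predicts per-class event activity over time while a supplementary branch predicts overall event presence. This is a speculative reading of the abstract; the input shape, layer sizes, and output layout are all assumptions.

```python
# Sketch of a two-headed sound event detector on log-mel spectrogram input.
import tensorflow as tf
from tensorflow.keras import layers

def build_dual_cnn(frames=128, mel_bins=64, num_events=10):
    inp = layers.Input(shape=(frames, mel_bins, 1))      # log-mel spectrogram
    x = layers.Conv2D(32, 3, padding="same", activation="relu")(inp)
    x = layers.MaxPooling2D((1, 2))(x)                   # pool frequency only
    x = layers.Conv2D(64, 3, padding="same", activation="relu")(x)
    x = layers.MaxPooling2D((1, 2))(x)
    x = layers.Reshape((frames, -1))(x)
    events = layers.TimeDistributed(                     # per-class activity
        layers.Dense(num_events, activation="sigmoid"), name="events")(x)
    presence = layers.TimeDistributed(                   # any-event presence
        layers.Dense(1, activation="sigmoid"), name="presence")(x)
    return tf.keras.Model(inp, [events, presence])

model = build_dual_cnn()
model.compile(optimizer="adam",
              loss={"events": "binary_crossentropy",
                    "presence": "binary_crossentropy"})
model.summary()
```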

Evaluating the Effectiveness of an Artificial Intelligence Model for Classification of Basic Volcanic Rocks Based on Polarized Microscope Image (편광현미경 이미지 기반 염기성 화산암 분류를 위한 인공지능 모델의 효용성 평가)

  • Sim, Ho; Jung, Wonwoo; Hong, Seongsik; Seo, Jaewon; Park, Changyun; Song, Yungoo
    • Economic and Environmental Geology, v.55 no.3, pp.309-316, 2022
  • To minimize the human effort and time required for rock classification, research on rock classification using artificial intelligence (AI) has advanced in recent years. In this study, basic volcanic rocks were subdivided using polarizing microscope thin-section images. A convolutional neural network (CNN) model based on the TensorFlow and Keras libraries was built in-house for rock classification. A total of 720 images of olivine basalt, basaltic andesite, olivine tholeiite, and trachytic olivine basalt reference specimens, captured under open nicol, crossed nicols, and with a gypsum plate inserted, were used for training at a training:test ratio of 7:3. As a result of machine learning, the classification accuracy exceeded 80-90%. Given the classification accuracy of each AI model, the rock classification process of this model is expected to differ little from that of a geologist. Furthermore, if models that subdivide more diverse rock types are produced and integrated with this one, an AI model satisfying both classification speed and accessibility for non-experts can be developed, providing a new framework for basic petrology research.
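A minimal sketch of such a TensorFlow/Keras classifier with a 7:3 train/test split is shown below; the image size, layer sizes, and placeholder data are assumptions, not the authors' configuration.

```python
# Sketch: small Keras CNN classifying thin-section images into four classes.
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers
from sklearn.model_selection import train_test_split

classes = ["olivine basalt", "basaltic andesite",
           "olivine tholeiite", "trachytic olivine basalt"]
x = np.random.rand(720, 128, 128, 3).astype("float32")  # placeholder images
y = np.random.randint(0, 4, size=720)                   # placeholder labels
x_tr, x_te, y_tr, y_te = train_test_split(x, y, test_size=0.3, stratify=y)

model = tf.keras.Sequential([
    layers.Input(shape=(128, 128, 3)),
    layers.Conv2D(32, 3, activation="relu"), layers.MaxPooling2D(),
    layers.Conv2D(64, 3, activation="relu"), layers.MaxPooling2D(),
    layers.Flatten(),
    layers.Dense(64, activation="relu"),
    layers.Dense(len(classes), activation="softmax"),
])
model.compile("adam", "sparse_categorical_crossentropy", metrics=["accuracy"])
model.fit(x_tr, y_tr, epochs=5, validation_data=(x_te, y_te))
```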

Detecting Vehicles That Are Illegally Driving on Road Shoulders Using Faster R-CNN (Faster R-CNN을 이용한 갓길 차로 위반 차량 검출)

  • Go, MyungJin; Park, Minju; Yeo, Jiho
    • The Journal of The Korea Institute of Intelligent Transport Systems, v.21 no.1, pp.105-122, 2022
  • According to statistics on fatal crashes on expressways over the last five years, fatalities on road shoulders have been three times as high as elsewhere on expressways. This suggests that shoulder crashes tend to be fatal, and that cracking down on vehicles intruding onto the shoulder is important for crash prevention. Therefore, this study proposes a method to detect vehicles violating the shoulder lane using Faster R-CNN. Vehicles were detected with Faster R-CNN, and an additional reading module was configured to determine whether a shoulder violation occurred. For experiments and evaluation, GTAV, a simulation game that can reproduce situations similar to the real world, was used. 1,800 training images and 800 evaluation images were processed and generated, and performance as a function of the threshold value was measured for ZFNet and VGG16. As a result, the detection rate of ZFNet was 99.2% at a threshold of 0.8, and that of VGG16 was 93.9% at a threshold of 0.7; the average detection speeds were 0.0468 seconds for ZFNet and 0.16 seconds for VGG16. The detection rate of ZFNet was thus about 7% higher, and its speed about 3.4 times faster. These results show that even a relatively simple network can detect shoulder-lane violations at high speed without pre-processing the input images, suggesting that this algorithm could be used to detect designated-lane violations if sufficient training datasets based on actual video data are obtained.
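As an illustration of the detection stage, the sketch below uses torchvision's off-the-shelf Faster R-CNN with a score threshold; the ZFNet and VGG16 backbones the paper evaluated are not bundled with torchvision, so a ResNet-50 FPN backbone stands in, and the shoulder check is described only in a comment.

```python
# Sketch: Faster R-CNN inference with a confidence threshold, as in the paper's
# threshold experiments (backbone swapped for an available torchvision model).
import torch
import torchvision
from torchvision.transforms.functional import to_tensor
from PIL import Image

model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")
model.eval()

def detect_vehicles(image_path: str, score_threshold: float = 0.8):
    img = to_tensor(Image.open(image_path).convert("RGB"))
    with torch.no_grad():
        out = model([img])[0]                  # dict of boxes, labels, scores
    keep = out["scores"] >= score_threshold    # confidence cut
    return out["boxes"][keep], out["labels"][keep], out["scores"][keep]

# A shoulder-violation check would then test whether each kept box overlaps a
# predefined shoulder-lane region (the paper's "additional reading module").
```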

A Study on the Construction Equipment Object Extraction Model Based on Computer Vision Technology (컴퓨터 비전 기술 기반 건설장비 객체 추출 모델 적용 분석 연구)

  • Sungwon Kang; Wisung Yoo; Yoonseok Shin
    • Journal of the Society of Disaster Information, v.19 no.4, pp.916-923, 2023
  • Purpose: According to the 2022 Industrial Accident Status Supplementary Statistics, 27.8% of all fatal accidents in the construction industry were caused by construction equipment. To overcome the limitations of on-site tours and inspections caused by larger sites and high-rise buildings, we build a model that can extract construction equipment using computer vision technology and analyze the model's accuracy and field applicability. Method: In this study, deep learning is used to learn image data of excavators, dump trucks, and mobile cranes; the learning results are then evaluated, analyzed, and applied to construction sites. Result: At site 'A', excavator and dump truck objects were extracted with average accuracies of 81.42% and 78.23%, respectively. The mobile crane at site 'B' was extracted with an average accuracy of 78.14%. Conclusion: The model is expected to increase the efficiency of on-site safety management and minimize risk factors for disaster occurrence. This study can also serve as baseline data for the introduction of smart construction technology at construction sites.

Estimation of Manhattan Coordinate System using Convolutional Neural Network (합성곱 신경망 기반 맨하탄 좌표계 추정)

  • Lee, Jinwoo; Lee, Hyunjoon; Kim, Junho
    • Journal of the Korea Computer Graphics Society, v.23 no.3, pp.31-38, 2017
  • In this paper, we propose a system that estimates Manhattan coordinate systems for urban scene images using a convolutional neural network (CNN). Estimating the Manhattan coordinate system of an image under the Manhattan world assumption is the basis for computer graphics and vision problems such as image adjustment and 3D scene reconstruction. We construct a CNN that estimates Manhattan coordinate systems based on GoogLeNet [1]. To train the CNN, we collected about 155,000 images satisfying the Manhattan world assumption using the Google Street View APIs and computed their Manhattan coordinate systems with existing calibration methods to generate the dataset. In contrast to PoseNet [2], which trains a separate CNN per scene, our method learns from images under the Manhattan world assumption and can therefore estimate Manhattan coordinate systems for images it has never seen. Experimental results show that our method estimates Manhattan coordinate systems with a median error of 3.157° on a test set of Google Street View images of non-trained scenes. In addition, compared to an existing calibration method [3], the proposed method shows lower median errors on the test set.
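The median angular error the abstract reports can be computed as sketched below, assuming each estimated and ground-truth Manhattan coordinate system is represented as a 3x3 rotation matrix; this is our formulation, not necessarily the paper's.

```python
# Sketch: median angular error between estimated and ground-truth rotations.
import numpy as np

def rotation_angle_deg(r_est: np.ndarray, r_gt: np.ndarray) -> float:
    """Angle of the relative rotation R_est^T @ R_gt, in degrees."""
    cos_theta = (np.trace(r_est.T @ r_gt) - 1.0) / 2.0
    return np.degrees(np.arccos(np.clip(cos_theta, -1.0, 1.0)))

def median_error(estimates, ground_truths) -> float:
    errs = [rotation_angle_deg(e, g) for e, g in zip(estimates, ground_truths)]
    return float(np.median(errs))

# Illustrative check: identity vs. a 5-degree rotation about the z-axis.
t = np.radians(5.0)
rz = np.array([[np.cos(t), -np.sin(t), 0],
               [np.sin(t),  np.cos(t), 0],
               [0,          0,         1]])
print(rotation_angle_deg(np.eye(3), rz))   # ~5.0
```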

Implementation of Interactive Media Content Production Framework based on Gesture Recognition (제스처 인식 기반의 인터랙티브 미디어 콘텐츠 제작 프레임워크 구현)

  • Koh, You-jin; Kim, Tae-Won; Kim, Yong-Goo; Choi, Yoo-Joo
    • Journal of Broadcast Engineering, v.25 no.4, pp.545-559, 2020
  • In this paper, we propose a content creation framework that enables users without programming experience to easily create interactive media content that responds to user gestures. In the proposed framework, users assign numbers to the gestures they use and to the media effects that respond to them, and link the two in a text-based configuration file. The interactive media content is linked with a dynamic projection mapping module that tracks the user's location and projects the media effects onto the user. To reduce the processing and memory burden of gesture recognition, the user's movement is represented as a grayscale motion history image. We designed a convolutional neural network model for gesture recognition that takes motion history images as input. The number of layers and the hyperparameters of the CNN were determined through experiments recognizing five gestures, and the model was then applied in the proposed framework. In the gesture recognition experiment, we obtained a recognition accuracy of 97.96% at a processing speed of 12.04 FPS. In an experiment linking three media effects, we confirmed that the intended media effect was displayed appropriately in real time according to the user's gesture.
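A grayscale motion history image of the kind described can be built with a simple decay-and-stamp update, as in the sketch below; the decay and difference-threshold values are assumptions, not the framework's actual parameters.

```python
# Sketch: motion history image (MHI) update over successive grayscale frames.
import numpy as np

def update_mhi(mhi: np.ndarray, prev_frame: np.ndarray, frame: np.ndarray,
               diff_threshold: int = 30, decay: int = 15) -> np.ndarray:
    """Stamp moving pixels to 255 and fade the rest toward 0."""
    motion = np.abs(frame.astype(np.int16) - prev_frame.astype(np.int16))
    mhi = np.maximum(mhi.astype(np.int16) - decay, 0)   # fade old motion
    mhi[motion > diff_threshold] = 255                  # stamp new motion
    return mhi.astype(np.uint8)

# Feeding successive frames yields one image whose intensity encodes how
# recently each pixel moved; that image is the input to the gesture CNN.
mhi = np.zeros((240, 320), dtype=np.uint8)
f0 = np.random.randint(0, 256, (240, 320), dtype=np.uint8)  # placeholder frames
f1 = np.random.randint(0, 256, (240, 320), dtype=np.uint8)
mhi = update_mhi(mhi, f0, f1)
```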

Performance Analysis of Object Detection Neural Network According to Compression Ratio of RGB and IR Images (RGB와 IR 영상의 압축률에 따른 객체 탐지 신경망 성능 분석)

  • Lee, Yegi; Kim, Shin; Lim, Hanshin; Lee, Hee Kyung; Choo, Hyon-Gon; Seo, Jeongil; Yoon, Kyoungro
    • Journal of Broadcast Engineering, v.26 no.2, pp.155-166, 2021
  • Most object detection algorithms are studied on RGB images. However, because RGB cameras capture images from visible light, object detection performance degrades when lighting is poor, e.g., at night or on foggy days. In contrast, high-quality infrared (IR) images can be acquired regardless of weather and lighting conditions, because IR sensors form images from heat information. In this paper, we evaluated object detection performance on RGB and IR images as a function of compression ratio. We selected RGB and IR images taken at night from the Free FLIR Thermal dataset for ADAS (Advanced Driver Assistance Systems) research. We used a pre-trained object detection network for RGB images and a network fine-tuned on nighttime RGB and IR images. Experimental results show that, in both networks, higher object detection performance is obtained with IR images than with RGB images.
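The compression-ratio sweep can be sketched as below: re-encode each frame at several quality levels, decode, and feed each version to the detector for comparison. The codec choice, quality levels, and file name are assumptions; the abstract does not specify the codec used.

```python
# Sketch: generate detector inputs at varying JPEG compression ratios.
import cv2

def compress(image, quality: int):
    """Re-encode an image as JPEG at the given quality (0-100) and decode it."""
    ok, buf = cv2.imencode(".jpg", image, [cv2.IMWRITE_JPEG_QUALITY, quality])
    assert ok, "encoding failed"
    ratio = image.nbytes / buf.nbytes            # rough compression ratio
    return cv2.imdecode(buf, cv2.IMREAD_COLOR), ratio

img = cv2.imread("night_ir_frame.png")           # hypothetical file name
for q in (90, 70, 50, 30, 10):
    decoded, ratio = compress(img, q)
    # run_detector(decoded) would go here; compare detection scores per ratio
    print(f"quality={q}, compression ratio ~ {ratio:.1f}:1")
```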