• Title/Summary/Keyword: Video Data Augmentation


Human Instance Segmentation using Video Data Augmentation (비디오 데이터 보강을 이용한 인물 개체 분할)

  • Chun, Hyun-Jin;Kim, Incheol
    • Proceedings of the Korea Information Processing Society Conference / 2022.11a / pp.532-534 / 2022
  • In this paper, we introduce MHIS, a video dataset for human instance segmentation built from episodes of the drama Misaeng, and propose CDVA, a new video data augmentation technique that effectively addresses the severe data imbalance among the character classes. Unlike existing video data augmentation techniques, CDVA takes full account of the spatio-temporal context of a video when generating additional training videos for under-represented character classes, and thereby effectively improves the performance of video instance segmentation networks. Quantitative and qualitative experiments demonstrate the superiority of the proposed video data augmentation technique.
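
The abstract does not spell out how CDVA is implemented, but its core idea, generating extra training clips for a rare character class while keeping the clip temporally coherent, can be sketched as copy-paste augmentation applied consistently across frames. The following is a minimal Python/NumPy illustration under that assumption; the function name and the fixed-offset placement are illustrative, not the authors' method.

```python
import numpy as np

def paste_instance_sequence(target_frames, source_frames, source_masks,
                            offset=(0, 0)):
    """Paste one rare-class instance into every frame of a target clip.

    Applying the same offset to all frames keeps the pasted instance's
    trajectory temporally consistent, a crude stand-in for the
    spatio-temporal context the paper emphasizes.
    target_frames: list of HxWx3 uint8 arrays (the clip to augment)
    source_frames: list of HxWx3 uint8 arrays containing the instance
    source_masks:  list of HxW bool arrays marking the instance pixels
    """
    dy, dx = offset
    out_frames, out_masks = [], []
    for tgt, src, msk in zip(target_frames, source_frames, source_masks):
        frame = tgt.copy()
        h, w = msk.shape
        ys, xs = np.nonzero(msk)
        # Shift the instance pixels by the offset, clipping to bounds.
        ys2 = np.clip(ys + dy, 0, h - 1)
        xs2 = np.clip(xs + dx, 0, w - 1)
        frame[ys2, xs2] = src[ys, xs]
        new_mask = np.zeros_like(msk)
        new_mask[ys2, xs2] = True
        out_frames.append(frame)
        out_masks.append(new_mask)
    return out_frames, out_masks
```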

Separation of Occluding Pigs using Deep Learning-based Image Processing Techniques (딥 러닝 기반의 영상처리 기법을 이용한 겹침 돼지 분리)

  • Lee, Hanhaesol;Sa, Jaewon;Shin, Hyunjun;Chung, Youngwha;Park, Daihee;Kim, Hakjae
    • Journal of Korea Multimedia Society / v.22 no.2 / pp.136-145 / 2019
  • The crowded environment of a domestic pig farm is highly vulnerable to the spread of infectious diseases such as foot-and-mouth disease, and studies have been conducted to automatically analyze the behavior of pigs in a crowded pig farm through a camera-based video surveillance system. Although occluding pigs must be correctly separated in order to track each individual pig, extracting the boundaries of occluding pigs quickly and accurately is a challenging issue due to complicated occlusion patterns such as X shapes and T shapes. In this study, we propose a fast and accurate method to separate occluding pigs by exploiting the speed of You Only Look Once (YOLO), one of the fast deep learning-based object detectors, while overcoming its limitation as a bounding-box-based detector through test-time data augmentation with rotation. Experimental results on two-pig occlusion patterns show that the proposed method provides better accuracy and processing speed than Mask R-CNN, one of the most widely used state-of-the-art deep learning-based segmentation techniques (an improvement of about 11 times in terms of the combined accuracy/processing-speed metric).
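
The rotation-based test-time augmentation can be illustrated with a small sketch: run the detector on the four quarter-turn rotations of a frame and map every box back into the original coordinates, so an occlusion pattern that separates better in a rotated view still yields usable boxes. The `detect` callable below is a placeholder for any YOLO-style detector, and the final merging of overlapping boxes is omitted since the paper's exact procedure is not given in the abstract.

```python
import numpy as np

def _undo_quarter_turns(x, y, k, h, w):
    """Map a point from a k-times-CCW-rotated image back to the original."""
    for j in range(k, 0, -1):
        # Dimensions of the image just before the j-th rotation.
        pre_h, pre_w = (h, w) if (j - 1) % 2 == 0 else (w, h)
        x, y = pre_w - 1 - y, x
    return x, y

def detect_with_rotation_tta(image, detect):
    """Test-time augmentation with rotation for a box detector.

    `detect(img)` is assumed to return (x1, y1, x2, y2, score) tuples
    in the coordinates of the image it is given. Detections from all
    four 90-degree rotations are mapped back to the original frame.
    """
    h, w = image.shape[:2]
    boxes = []
    for k in range(4):
        for x1, y1, x2, y2, score in detect(np.rot90(image, k)):
            xa, ya = _undo_quarter_turns(x1, y1, k, h, w)
            xb, yb = _undo_quarter_turns(x2, y2, k, h, w)
            boxes.append((min(xa, xb), min(ya, yb),
                          max(xa, xb), max(ya, yb), score))
    return boxes
```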

Deep Learning Based Pine Nut Detection in UAV Aerial Video (UAV 항공 영상에서의 딥러닝 기반 잣송이 검출)

  • Kim, Gyu-Min;Park, Sung-Jun;Hwang, Seung-Jun;Kim, Hee Yeong;Baek, Joong-Hwan
    • Journal of Advanced Navigation Technology / v.25 no.1 / pp.115-123 / 2021
  • Pine nuts are Korea's representative nut forest product and a profitable crop. However, pine nuts are harvested by climbing the trees themselves, so the risk is high. To solve this problem, it is necessary to harvest pine nuts using a robot or an unmanned aerial vehicle (UAV). In this paper, we propose a deep learning-based detection method for harvesting pine nuts from UAV aerial images. For this, video was recorded in a real pine forest using a UAV, and a data augmentation technique was used to supplement the small amount of data. For the 3D detection data, Unity3D was used to model virtual pine nuts and a virtual environment, and labels were acquired using a 3D coordinate-system transformation. DeepLabV3+, YOLOv4, and CenterNet were used as the deep learning algorithms for detecting the pine nut distribution area and for 2D and 3D detection of pine nut objects, respectively. In the experiments, the detection rate for the pine nut distribution area was 82.15%, the 2D detection rate was 86.93%, and the 3D detection rate was 59.45%.
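
The abstract mentions augmenting the small set of real UAV frames but does not name the transforms used, so the sketch below applies common generic choices (quarter-turn rotations, flips, brightness jitter) purely as an assumed illustration; it is not the authors' pipeline, and box/mask label remapping is left out.

```python
import numpy as np

def augment_aerial_image(image, rng):
    """Generate simple geometric/photometric variants of one aerial frame.

    A generic stand-in for the paper's unspecified augmentation step.
    image: HxWx3 uint8 array; rng: a numpy Generator.
    """
    variants = []
    for k in range(4):                 # quarter-turn rotations
        rot = np.rot90(image, k)
        variants.append(rot)
        variants.append(rot[:, ::-1])  # horizontal flip of each rotation
    out = []
    for v in variants:
        # Brightness jitter applied to each geometric variant.
        gain = rng.uniform(0.8, 1.2)
        out.append(np.clip(v.astype(np.float32) * gain, 0, 255).astype(np.uint8))
    return out

# Example: rng = np.random.default_rng(0); crops = augment_aerial_image(frame, rng)
```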

Deep Learning based Fish Object Detection and Tracking for Smart Aqua Farm (스마트 양식을 위한 딥러닝 기반 어류 검출 및 이동경로 추적)

  • Shin, Younghak;Choi, Jeong Hyeon;Choi, Han Suk
    • The Journal of the Korea Contents Association / v.21 no.1 / pp.552-560 / 2021
  • The domestic aquaculture industry is currently pursuing smartization, but many processes in the farming stage still rely on subjective human judgment. A prerequisite for smart aquaculture is effectively grasping the condition of the fish in the farm. If real-time monitoring can identify the fish population count, size, movement paths, and movement speed, various forms of automation such as automatic feed supply and disease determination become possible. In this study, we propose an algorithm to identify the state of fish in real time using underwater video data. Fish detection performance was compared and evaluated by applying the latest deep learning-based object detection models, and an algorithm was proposed that uses the detection results to identify fish objects and to measure their paths and moving speed across consecutive video frames. The proposed algorithm achieved 92% object detection performance (F1-score), and it was confirmed to track a large number of fish objects effectively in real time on the actual test video. The proposed algorithm is expected to be useful in various smart-farming technologies such as automatic feed supply and fish disease prediction.
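
The abstract describes path tracking and speed measurement built on per-frame detections, but not the tracking method itself. A minimal greedy nearest-centroid tracker, an assumption standing in for the unspecified step, might look like this (speed is reported in pixels per second, since calibration to real-world units is not described):

```python
import numpy as np

def update_tracks(tracks, detections, frame_idx, fps, max_dist=50.0):
    """Greedy nearest-centroid tracking over per-frame detections.

    tracks: dict of track_id -> list of (frame_idx, cx, cy)
    detections: list of (x1, y1, x2, y2) boxes in the current frame
    Returns {track_id: speed_in_pixels_per_second} for updated tracks.
    """
    centroids = [((x1 + x2) / 2, (y1 + y2) / 2)
                 for x1, y1, x2, y2 in detections]
    speeds = {}
    unmatched = set(range(len(centroids)))
    for tid, hist in tracks.items():
        last_f, lx, ly = hist[-1]
        # Match each existing track to its nearest unmatched detection.
        best, best_d = None, max_dist
        for i in unmatched:
            d = np.hypot(centroids[i][0] - lx, centroids[i][1] - ly)
            if d < best_d:
                best, best_d = i, d
        if best is not None:
            cx, cy = centroids[best]
            unmatched.discard(best)
            hist.append((frame_idx, cx, cy))
            dt = (frame_idx - last_f) / fps
            speeds[tid] = best_d / dt if dt > 0 else 0.0
    next_id = max(tracks, default=0) + 1
    for i in unmatched:                 # unmatched detections start new tracks
        tracks[next_id] = [(frame_idx, *centroids[i])]
        next_id += 1
    return speeds
```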

Animal Face Classification using Dual Deep Convolutional Neural Network

  • Khan, Rafiul Hasan;Kang, Kyung-Won;Lim, Seon-Ja;Youn, Sung-Dae;Kwon, Oh-Jun;Lee, Suk-Hwan;Kwon, Ki-Ryong
    • Journal of Korea Multimedia Society / v.23 no.4 / pp.525-538 / 2020
  • A practical animal face classification system that classifies animals in image and video data is considered a pivotal topic in machine learning. In this research, we propose a novel fully connected dual Deep Convolutional Neural Network (DCNN) that extracts and analyzes image features on a large scale. With the inclusion of state-of-the-art Batch Normalization and Exponential Linear Unit (ELU) layers, the proposed DCNN can analyze large datasets and extract more features than before. For this research, we built a dataset containing ten thousand animal faces across ten animal classes, along with a dual DCNN. The significance of our network is that it has four sets of convolutional functions that work laterally with each other. We used a relatively small batch size and a large number of iterations to mitigate overfitting during training, and used image augmentation to vary the shapes of the training images for a better learning process. The results demonstrate that, with an accuracy of 92.0%, the proposed DCNN outperforms its counterparts while incurring lower computing costs.
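
The described idea of parallel convolutional stacks with Batch Normalization and ELU, fused for classification, can be sketched in PyTorch. The layer widths, depth, and two-branch fusion below are illustrative placeholders, not the paper's exact four-branch configuration:

```python
import torch
import torch.nn as nn

class DualBranchCNN(nn.Module):
    """A minimal dual-branch CNN with BatchNorm and ELU layers.

    Two convolutional stacks process the input side by side and their
    pooled features are concatenated for the final classifier.
    """
    def __init__(self, num_classes=10):
        super().__init__()
        def branch():
            return nn.Sequential(
                nn.Conv2d(3, 32, 3, padding=1), nn.BatchNorm2d(32), nn.ELU(),
                nn.MaxPool2d(2),
                nn.Conv2d(32, 64, 3, padding=1), nn.BatchNorm2d(64), nn.ELU(),
                nn.AdaptiveAvgPool2d(1),       # global average pooling
            )
        self.branch_a = branch()
        self.branch_b = branch()
        self.fc = nn.Linear(128, num_classes)  # 64 features per branch

    def forward(self, x):
        fa = self.branch_a(x).flatten(1)
        fb = self.branch_b(x).flatten(1)
        return self.fc(torch.cat([fa, fb], dim=1))

# Example: logits = DualBranchCNN()(torch.randn(4, 3, 64, 64))
```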

AI Fire Detection & Notification System

  • Na, You-min;Hyun, Dong-hwan;Park, Do-hyun;Hwang, Se-hyun;Lee, Soo-hong
    • Journal of the Korea Society of Computer and Information / v.25 no.12 / pp.63-71 / 2020
  • In this paper, we propose a fire detection technology using YOLOv3 and EfficientDet, two of the most reliable recent artificial intelligence detection algorithms; an alert service that simultaneously transmits four kinds of notifications (text, web, app, and e-mail); and an AWS system that links the fire detection and notification services. Our highly accurate fire detection comes in two forms: a YOLOv3-based model that operates locally, trained on more than 2,000 fire images with data augmentation, and an EfficientDet model that operates in the cloud, produced by transfer learning from a pretrained model. Four types of notification services were built using AWS and FCM services; for web, app, and mail, notifications were received immediately after transmission, and for text messaging through the base station, the delay was under one second. We verified the accuracy of our fire detection technology through experiments on fire videos, and measured the fire detection and notification times. The AI fire detection and notification service system in this paper is expected to be more accurate and faster than past fire detection systems, which will greatly help secure golden time in the event of a fire.
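
The four-channel alert amounts to fanning one message out to independent channels. The sketch below shows that pattern with stub channel callables, since the actual FCM and AWS calls are not given in the abstract; running the channels in parallel keeps one slow channel from delaying the others.

```python
import concurrent.futures

def notify_all(message, channels):
    """Fan out one fire alert to several notification channels at once.

    channels: dict of channel name -> callable taking the message.
    Real implementations would wrap FCM, AWS SNS/SES, and so on; here
    each callable is a stand-in. Returns per-channel delivery status.
    """
    results = {}
    with concurrent.futures.ThreadPoolExecutor() as pool:
        futures = {pool.submit(fn, message): name
                   for name, fn in channels.items()}
        for fut in concurrent.futures.as_completed(futures):
            name = futures[fut]
            try:
                fut.result()
                results[name] = "sent"
            except Exception as exc:       # one failure doesn't block the rest
                results[name] = f"failed: {exc}"
    return results

if __name__ == "__main__":
    # Stub channels; replace each lambda with a real FCM/SNS/SES call.
    stubs = {c: (lambda m, c=c: print(f"[{c}] {m}"))
             for c in ("sms", "web", "app", "email")}
    print(notify_all("Fire detected in zone 3", stubs))
```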

Deep Learning-based Object Detection of Panels Door Open in Underground Utility Tunnel (딥러닝 기반 지하공동구 제어반 문열림 인식)

  • Gyunghwan Kim;Jieun Kim;Woosug Jung
    • Journal of the Society of Disaster Information / v.19 no.3 / pp.665-672 / 2023
  • Purpose: An underground utility tunnel is a facility that jointly houses urban infrastructure such as electricity, water, and gas lines, and it suffers condensation problems due to lack of airflow. This paper aims to prevent electrical leakage fires caused by condensation by detecting whether the control panel doors in an underground utility tunnel are open, using a deep learning model. Method: YOLO, a deep learning object recognition model, was trained to recognize the opening and closing of control panel doors using video data taken by a robot patrolling the underground utility tunnel. Image augmentation was used to improve the recognition rate. Result: Among the image augmentation techniques, we compared the performance of a YOLO model trained with mosaic augmentation against one trained without it, and found that the mosaic technique performed better. The mAP across all classes was 0.994, a high result. Conclusion: The model detected the control panels even with the lights off or with other objects present in the tunnel, which enables effective management of the underground utility tunnel and helps prevent disasters.
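
Mosaic augmentation, the technique the paper found to perform better, stitches four training images into one canvas around a random center point. A minimal NumPy sketch follows (box-label remapping is omitted for brevity, and this is not the authors' exact implementation):

```python
import numpy as np

def mosaic(images, out_size=640, rng=None):
    """Combine four images into one mosaic training sample.

    images: list of four HxWx3 uint8 arrays.
    A random center splits the canvas into four quadrants, and each
    image fills one quadrant (cropped, or zero-padded if too small).
    """
    rng = rng or np.random.default_rng()
    canvas = np.zeros((out_size, out_size, 3), dtype=np.uint8)
    cx = int(rng.uniform(0.25, 0.75) * out_size)
    cy = int(rng.uniform(0.25, 0.75) * out_size)
    regions = [(0, 0, cx, cy), (cx, 0, out_size, cy),
               (0, cy, cx, out_size), (cx, cy, out_size, out_size)]
    for img, (x1, y1, x2, y2) in zip(images, regions):
        h, w = y2 - y1, x2 - x1
        patch = img[:h, :w]                 # crop to the quadrant size
        ph, pw = patch.shape[:2]
        canvas[y1:y1 + ph, x1:x1 + pw] = patch
    return canvas
```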

Rainfall image DB construction for rainfall intensity estimation from CCTV videos: focusing on experimental data in a climatic environment chamber (CCTV 영상 기반 강우강도 산정을 위한 실환경 실험 자료 중심 적정 강우 이미지 DB 구축 방법론 개발)

  • Byun, Jongyun;Jun, Changhyun;Kim, Hyeon-Joon;Lee, Jae Joon;Park, Hunil;Lee, Jinwook
    • Journal of Korea Water Resources Association / v.56 no.6 / pp.403-417 / 2023
  • In this research, a methodology was developed for constructing an appropriate rainfall image database for estimating rainfall intensity from CCTV video. The database was constructed in the Large-Scale Climate Environment Chamber of the Korea Conformity Laboratories, which can control variables that show high irregularity and variability in real environments. 1,728 scenarios were designed under five different experimental conditions, from which 36 scenarios with a total of 97,200 frames were selected. Rain streaks were extracted with the k-nearest neighbor algorithm by calculating the difference between each image and the background. To prevent overfitting, only data whose pixel values exceeded a set threshold relative to the average pixel value of each image were selected. The area with maximum pixel variability was determined by shifting a window every 10 pixels and was set as the representative area (180×180) of the original image. After resizing to 120×120 as input for a convolutional neural network model, image augmentation was carried out under unified shooting conditions. 92% of the data fell within an absolute PBIAS range of 10%. The final results of this study have clear potential to enhance the accuracy and efficacy of existing real-world CCTV systems through transfer learning.
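
The representative-area step, shifting a window every 10 pixels and keeping the one with maximum pixel variability, translates directly into a small search. Standard deviation is used as the variability measure below; that choice is an assumption, since the abstract does not name the exact metric:

```python
import numpy as np

def representative_area(image, win=180, stride=10):
    """Locate the win x win window with maximum pixel variability.

    Scans the frame in `stride`-pixel steps, scoring each window by the
    standard deviation of its grayscale values, and returns the
    (top, left) corner of the best window.
    """
    gray = image.mean(axis=2) if image.ndim == 3 else image.astype(float)
    h, w = gray.shape
    best, best_pos = -1.0, (0, 0)
    for top in range(0, h - win + 1, stride):
        for left in range(0, w - win + 1, stride):
            v = gray[top:top + win, left:left + win].std()
            if v > best:
                best, best_pos = v, (top, left)
    return best_pos

# Example: crop the representative 180x180 region before resizing to 120x120.
# t, l = representative_area(frame); crop = frame[t:t+180, l:l+180]
```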

Analysis and Forecast of Venture Capital Investment on Generative AI Startups: Focusing on the U.S. and South Korea (생성 AI 스타트업에 대한 벤처투자 분석과 예측: 미국과 한국을 중심으로)

  • Lee, Seungah;Jung, Taehyun
    • Asia-Pacific Journal of Business Venturing and Entrepreneurship / v.18 no.4 / pp.21-35 / 2023
  • Expectations surrounding generative AI technology and its ramifications are sweeping across industries. Given the pivotal role the startup ecosystem is expected to play in the use and advancement of generative AI, a deeper understanding of the current state and characteristics of venture capital (VC) investment in this domain is needed. This study analyzes South Korea's VC investment deals and forecasts future VC investment by comparison with the United States, the front-runner in the generative AI industry and its ecosystem. For the analysis, new datasets were constructed from 286 investment deals in 117 U.S. generative AI startups (2008 to 2023) and 144 investment deals in 42 South Korean generative AI startups (2011 to 2023). The results show that the number of VC investment deals has risen in both countries in recent years, concentrated mainly in early-stage investment. Notable differences between the two countries also emerged. In the U.S., unlike in South Korea, the size of recent VC deals has grown, increasing by 285% to 488% at the corresponding development stage. The interval between investment stages was slightly longer in South Korea than in the U.S., but the difference was not statistically significant. The proportion of VC investments in generative AI firms relative to total deals was higher in South Korea than in the U.S. A sectoral breakdown of generative AI showed that 59.2% of U.S. deals were concentrated in the text and model sectors, whereas 61.9% of South Korean deals centered on the video, image, and chat sectors. Using four forecasting models, expected VC investment in South Korea from 2023 to 2029 was estimated at an average of 3.4 trillion Korean won (ranging from at least 2.408 trillion won to at most 5.919 trillion won). This research is practically significant in that it systematically analyzes generative AI VC investment in the U.S. and South Korea and presents an investment forecast for the latter. Its academic significance lies in laying the groundwork for future research on a sphere that has so far lacked rigorous empirical study, and in introducing two new methodologies for predicting VC investment amounts which, with broader application and refinement, can improve forecasts of VC investment costs.
