• Title/Summary/Keyword: Synthetic Dataset

Synthesizing Image and Automated Annotation Tool for CNN based Under Water Object Detection (강건한 CNN기반 수중 물체 인식을 위한 이미지 합성과 자동화된 Annotation Tool)

  • Jeon, MyungHwan;Lee, Yeongjun;Shin, Young-Sik;Jang, Hyesu;Yeu, Taekyeong;Kim, Ayoung
    • The Journal of Korea Robotics Society / v.14 no.2 / pp.139-149 / 2019
  • In this paper, we present an auto-annotation tool and a synthetic dataset generated from 3D CAD models for deep learning-based object detection. Training data for deep learning methods require class, segmentation, bounding-box, contour, and pose annotations of each object. We propose an automated annotation tool together with synthetic image generation. The resulting synthetic dataset reflects occlusion between objects and is applicable to both underwater and in-air environments. To verify the dataset, we use Mask R-CNN, a state-of-the-art deep learning-based object detection model. For the experiments, we built an environment that reflects actual underwater conditions. We show that an object detection model trained on our dataset produces accurate results and is robust in the underwater environment. Lastly, we verify that our synthetic dataset is suitable for training deep learning models for underwater environments.
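
The annotation types listed above (class, segmentation, bounding box, contour) can be derived mechanically once a rendered object mask is available. The following is a minimal sketch of that derivation, assuming OpenCV and NumPy; the function name and file path are hypothetical and not from the paper.

```python
# Minimal sketch: deriving bounding-box and contour annotations from a rendered
# binary object mask (assumes OpenCV and NumPy; paths and names are hypothetical).
import cv2
import numpy as np

def annotate_from_mask(mask_path, class_id):
    mask = cv2.imread(mask_path, cv2.IMREAD_GRAYSCALE)        # rendered object mask
    ys, xs = np.nonzero(mask)                                  # foreground pixels
    x0, y0, x1, y1 = xs.min(), ys.min(), xs.max(), ys.max()    # tight bounding box
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)    # object contour(s)
    return {
        "class_id": class_id,
        "bbox": [int(x0), int(y0), int(x1 - x0), int(y1 - y0)],   # x, y, w, h
        "segmentation": [c.reshape(-1).tolist() for c in contours],
    }

# Usage (hypothetical file): annotate_from_mask("render_0001_mask.png", class_id=3)
```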

Performance Analysis of Deep Learning-Based Detection/Classification for SAR Ground Targets with the Synthetic Dataset (합성 데이터를 이용한 SAR 지상표적의 딥러닝 탐지/분류 성능분석)

  • Ji-Hoon Park
    • Journal of the Korea Institute of Military Science and Technology / v.27 no.2 / pp.147-155 / 2024
  • Based on recently developed deep learning technology, many studies have examined networks that simultaneously detect and classify targets of interest in synthetic aperture radar (SAR) images. Although numerous results have been reported, mainly on open SAR ship datasets, little work has addressed deep learning networks that detect and classify SAR ground targets and are trained with synthetic datasets generated from electromagnetic scattering simulations. This paper therefore presents a deep learning network trained with such a synthetic dataset and applies it to detecting and classifying real SAR ground targets. With the experimental results, the paper also analyzes network performance according to the composition ratio between real measured data and synthetic data in the training set. Finally, the summary and limitations are discussed to inform future research directions.
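
The composition-ratio analysis amounts to rebuilding the training set at several real-to-synthetic mixing fractions and retraining the network each time. A minimal sketch of the mixing step, assuming lists of already-labelled samples; names and sizes are placeholders, not the paper's.

```python
# Illustrative sketch (not the paper's code): building training sets with varying
# real-to-synthetic composition ratios from two pools of labelled samples.
import random

def mix_training_set(real_samples, synthetic_samples, real_fraction, total_size, seed=0):
    rng = random.Random(seed)
    n_real = int(round(total_size * real_fraction))
    n_syn = total_size - n_real
    mixed = rng.sample(real_samples, n_real) + rng.sample(synthetic_samples, n_syn)
    rng.shuffle(mixed)
    return mixed

# e.g. sweep 0 %, 25 %, 50 %, 75 %, 100 % real data and retrain the detector each time:
# for frac in (0.0, 0.25, 0.5, 0.75, 1.0):
#     train_set = mix_training_set(real, synthetic, frac, total_size=2000)
```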

Synthetic Image Dataset Generation for Defense using Generative Adversarial Networks (국방용 합성이미지 데이터셋 생성을 위한 대립훈련신경망 기술 적용 연구)

  • Yang, Hunmin
    • Journal of the Korea Institute of Military Science and Technology / v.22 no.1 / pp.49-59 / 2019
  • Generative adversarial networks (GANs) have received great attention in the machine learning field for their capacity to model high-dimensional and complex data distributions implicitly and to generate new samples from the modeled distribution. This paper investigates the training methodology, architecture, and various applications of GANs. Experimental evaluation is also conducted on generating synthetic image datasets for defense using two types of GANs: deep convolutional GANs (DCGAN) for military image generation, and cycle-consistent GANs (CycleGAN) for visible-to-infrared image translation. Each model yields a great diversity of high-fidelity synthetic images compared with its training images. This result opens up the possibility of training neural networks with inexpensive synthetic images while avoiding the enormous expense of collecting large amounts of hand-annotated real data.
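
For reference, the adversarial training the abstract describes alternates discriminator and generator updates. A minimal DCGAN-style training step in PyTorch is sketched below, assuming a generator that maps (batch, z_dim, 1, 1) noise to images and a discriminator that outputs one logit per image; architectures and hyperparameters are not the paper's.

```python
# Sketch of one DCGAN-style training step (assumptions: generator takes
# (batch, z_dim, 1, 1) noise, discriminator returns a (batch, 1) logit).
import torch
import torch.nn.functional as F

def gan_step(generator, discriminator, real_images, opt_g, opt_d, z_dim=100):
    batch, device = real_images.size(0), real_images.device
    ones = torch.ones(batch, 1, device=device)
    zeros = torch.zeros(batch, 1, device=device)
    z = torch.randn(batch, z_dim, 1, 1, device=device)

    # Discriminator update: push real images toward 1 and generated images toward 0.
    fake = generator(z).detach()
    d_loss = (F.binary_cross_entropy_with_logits(discriminator(real_images), ones)
              + F.binary_cross_entropy_with_logits(discriminator(fake), zeros))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # Generator update: make the discriminator score generated images as real.
    g_loss = F.binary_cross_entropy_with_logits(discriminator(generator(z)), ones)
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()
    return d_loss.item(), g_loss.item()
```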

A Study on Synthetic Dataset Generation Method for Maritime Traffic Situation Awareness (해상교통 상황인지 향상을 위한 합성 데이터셋 구축방안 연구)

  • Youngchae Lee;Sekil Park
    • Journal of Information Technology Applications and Management / v.30 no.6 / pp.69-80 / 2023
  • Ship collision accidents not only cause loss of life and property damage but also lead to marine pollution and can escalate into national disasters, so prevention is very important. Most such collisions are caused by human factors such as the navigation officer's lack of vigilance and carelessness, and many could be prevented with a system that supports situation awareness. Recently, artificial intelligence has been used to develop systems that help navigators recognize the situation, but because the sea is vast and deep, it is difficult to secure maritime traffic datasets, which in turn makes it difficult to develop AI models. To address these difficulties, this paper proposes a method to build a dataset with characteristics similar to actual maritime traffic data. The proposed method uses segmentation and inpainting to build foreground and background datasets, and then applies compositing to create a synthetic dataset. Through a prototype implementation and analysis of its results, we confirmed that the proposed method is effective in overcoming the difficulty of dataset construction and in supplementing diverse scenes similar to reality.
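
Of the three stages (segmentation, inpainting, compositing), the compositing step is the simplest to illustrate: a segmented foreground such as a ship is alpha-blended onto a background scene. A minimal NumPy sketch, assuming images normalized to [0, 1]; this is not the paper's implementation.

```python
# Sketch of the compositing step only: pasting a segmented foreground onto a
# background scene using its alpha mask (NumPy arrays in [0, 1]).
import numpy as np

def composite(background, foreground, alpha, top, left):
    """background: HxWx3; foreground: hxwx3; alpha: hxwx1; placed at (top, left)."""
    out = background.copy()
    h, w = foreground.shape[:2]
    roi = out[top:top + h, left:left + w]
    # standard alpha blending: alpha * foreground + (1 - alpha) * background
    out[top:top + h, left:left + w] = alpha * foreground + (1.0 - alpha) * roi
    return out
```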

Human Detection using Real-virtual Augmented Dataset

  • Jongmin, Lee;Yongwan, Kim;Jinsung, Choi;Ki-Hong, Kim;Daehwan, Kim
    • Journal of information and communication convergence engineering / v.21 no.1 / pp.98-102 / 2023
  • This paper presents a study on how augmenting training data with semi-synthetic images improves the performance of human detection algorithms. In object detection, securing a high-quality dataset plays the most important role in training deep learning algorithms. Because acquiring real image data has become time consuming and expensive, research using synthesized data has been conducted. Synthetic data have the advantage that a vast amount of data can be generated and labeled accurately. However, the utility of synthetic data for human detection has not yet been demonstrated. Therefore, we use You Only Look Once (YOLO), one of the most commonly used object detection algorithms, to experimentally analyze the effect of synthetic data augmentation on human detection performance. Training YOLO on the Penn-Fudan dataset showed that the network trained on data augmented with synthetic images achieved better results in terms of the precision-recall curve and the F1-confidence curve.
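
A hedged sketch of the augmentation experiment is shown below, assuming the ultralytics YOLO package and a hypothetical dataset YAML that merges Penn-Fudan images with the synthetic ones; the paper does not specify its exact tooling or YOLO version.

```python
# Sketch only: training and validating a YOLO detector on a merged real+synthetic
# dataset, assuming the ultralytics package (checkpoint and YAML names are hypothetical).
from ultralytics import YOLO

model = YOLO("yolov8n.pt")                               # any pretrained YOLO checkpoint
# dataset YAML listing Penn-Fudan images merged with the synthetic ones
model.train(data="penn_fudan_plus_synthetic.yaml", epochs=100, imgsz=640)
metrics = model.val()                                    # precision/recall and F1 curves
```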

Synthetic Infra-Red Image Dataset Generation by CycleGAN based on SSIM Loss Function (SSIM 목적 함수와 CycleGAN을 이용한 적외선 이미지 데이터셋 생성 기법 연구)

  • Lee, Sky;Leeghim, Henzeh
    • Journal of the Korea Institute of Military Science and Technology / v.25 no.5 / pp.476-486 / 2022
  • Generating synthetic dynamic infrared images from a virtual environment is the primary goal when simulating the output of an infrared (IR) camera installed on a vehicle to evaluate control algorithms for various search and reconnaissance missions. Because actual IR data are difficult to obtain in complex environments, artificial intelligence (AI) has recently been used for image data generation. In this paper, the CycleGAN technique is applied to obtain more realistic synthetic IR images. We add a Structural Similarity Index Measure (SSIM) loss to the L1 loss so that CycleGAN generates more realistic synthetic IR images. The simulation shows that the synthetic infrared images generated by the proposed technique are applicable to guided-missile flight simulation tests.
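
The key modification is the loss: an SSIM term is added to the usual L1 reconstruction term, so minimizing the loss also rewards structural similarity. Below is a sketch of such a combined term, assuming PyTorch and the pytorch_msssim package; the weights are placeholders rather than the paper's values.

```python
# Combined L1 + SSIM reconstruction loss in the spirit of the paper (weights are
# placeholders; 1 - SSIM is minimized when images are structurally similar).
import torch.nn.functional as F
from pytorch_msssim import ssim

def cycle_loss(reconstructed, original, lambda_l1=10.0, lambda_ssim=5.0):
    l1 = F.l1_loss(reconstructed, original)
    ssim_term = 1.0 - ssim(reconstructed, original, data_range=1.0)
    return lambda_l1 * l1 + lambda_ssim * ssim_term
```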

Machine learning of LWR spent nuclear fuel assembly decay heat measurements

  • Ebiwonjumi, Bamidele;Cherezov, Alexey;Dzianisau, Siarhei;Lee, Deokjung
    • Nuclear Engineering and Technology / v.53 no.11 / pp.3563-3579 / 2021
  • Measured decay heat data of light water reactor (LWR) spent nuclear fuel (SNF) assemblies are used to train machine learning (ML) models. The measurements are available for fuel assemblies irradiated in commercial reactors operated in the United States and Sweden and come from calorimetric measurements of discharged pressurized water reactor (PWR) and boiling water reactor (BWR) fuel assemblies; 91 PWR and 171 BWR assembly decay heat measurements are used. Because the measurement dataset is small, we propose (i) the method of multiple runs and (ii) the generation and use of synthetic data, i.e., a large dataset with statistical characteristics similar to the original. Three ML models are developed based on Gaussian processes (GP), support vector machines (SVM), and neural networks (NN), with four inputs: assembly-averaged enrichment, assembly-averaged burnup, initial heavy metal mass, and cooling time after discharge. The outcomes of this work are (i) ML models that predict LWR fuel assembly decay heat from the four inputs, (ii) generation and application of synthetic data that improve the performance of the ML models, and (iii) uncertainty analysis of the ML models and their predictions.
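
As one illustration of the model family described, a Gaussian-process regressor mapping the four assembly features to decay heat can be set up in a few lines with scikit-learn; the kernel, data values, and units below are placeholders, not the paper's configuration.

```python
# Hedged sketch: Gaussian-process regression on the four assembly features
# (scikit-learn; the paper's own kernels and preprocessing may differ).
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, ConstantKernel

# X columns: enrichment, burnup, initial heavy metal mass, cooling time;
# y: measured decay heat. The numbers below are placeholders, not real measurements.
X = np.array([[3.4, 36.0, 460.0, 15.0],
              [4.0, 45.0, 455.0, 10.0]])
y = np.array([600.0, 900.0])

gp = GaussianProcessRegressor(kernel=ConstantKernel() * RBF(), normalize_y=True)
gp.fit(X, y)
mean, std = gp.predict(X, return_std=True)   # prediction with uncertainty estimate
```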

Synthetic Image Generation for Military Vehicle Detection (군용물체탐지 연구를 위한 가상 이미지 데이터 생성)

  • Se-Yoon Oh;Hunmin Yang
    • Journal of the Korea Institute of Military Science and Technology / v.26 no.5 / pp.392-399 / 2023
  • This paper investigates the effectiveness of computer graphics (CG)-based synthetic data for deep learning in military vehicle detection. In particular, we explore synthetic image generation techniques for training deep neural networks on object detection tasks. Our approach generates a large dataset of synthetic images of military vehicles, which is then used to train a deep learning model. The resulting model is evaluated on real-world images to measure its effectiveness. Our experimental results show that synthetic training data alone can achieve effective object detection. These findings demonstrate the potential of CG-based synthetic data for deep learning and its value for training models in a variety of applications, including military vehicle detection.

Eyeglass Remover Network based on a Synthetic Image Dataset

  • Kang, Shinjin;Hahn, Teasung
    • KSII Transactions on Internet and Information Systems (TIIS) / v.15 no.4 / pp.1486-1501 / 2021
  • Removing accessories from the face is one of the essential pre-processing stages in face recognition. Despite its importance, however, a robust solution has not yet been provided. This paper proposes a network and a dataset construction methodology to effectively remove only the glasses from facial images. Obtaining a glasses-free image from an image with glasses by supervised learning requires a conversion network and a set of paired training data. To this end, we created a large number of synthetic images of glasses being worn using facial attribute transformation networks, and we adopted the conditional GAN (cGAN) framework for training. The trained network converts in-the-wild face images with glasses into images without glasses and operates stably even for faces of diverse races and ages wearing different styles of glasses.
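
The paired-data setup described above fits the standard pix2pix-style conditional-GAN objective: an adversarial term plus an L1 reconstruction term against the paired glasses-free image. A minimal sketch of such a generator loss in PyTorch follows; the conditioning scheme and weights are assumptions, not the paper's.

```python
# Pix2pix-style generator objective on paired data: adversarial term + L1 term
# (assumes the discriminator is conditioned on the input image via channel concat).
import torch
import torch.nn.functional as F

def generator_loss(discriminator, with_glasses, without_glasses, generated, lambda_l1=100.0):
    pred_fake = discriminator(torch.cat([with_glasses, generated], dim=1))
    adv = F.binary_cross_entropy_with_logits(pred_fake, torch.ones_like(pred_fake))
    rec = F.l1_loss(generated, without_glasses)      # paired supervision
    return adv + lambda_l1 * rec
```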

Photorealistic Real-Time Dense 3D Mesh Mapping for AUV (자율 수중 로봇을 위한 사실적인 실시간 고밀도 3차원 Mesh 지도 작성)

  • Jungwoo Lee;Younggun Cho
    • The Journal of Korea Robotics Society / v.19 no.2 / pp.188-195 / 2024
  • This paper proposes a photorealistic real-time dense 3D mapping system that utilizes a neural-network-based image enhancement method and a mesh-based map representation. Because of the characteristics of the underwater environment, where problems such as haze and low contrast occur, conventional simultaneous localization and mapping (SLAM) methods are hard to apply; at the same time, the onboard computation of an Autonomous Underwater Vehicle (AUV) is constrained. We therefore use a neural-network-based image enhancement method to improve pose estimation and mapping quality, and apply a sliding-window-based mesh expansion method to enable lightweight, fast, and photorealistic mapping. To validate our results, we use a real-world dataset and an indoor synthetic dataset: qualitative validation on the real-world dataset and quantitative validation by rendering images from the indoor synthetic dataset as underwater scenes.