• Title/Summary/Keyword: Conditional GAN


Planetary Long-Range Deep 2D Global Localization Using Generative Adversarial Network

  • Ahmed M. Naguib;Tuan Anh Nguyen;Naeem Ul Islam;Kim, Jae-Woong;Lee, Suk-Han
    • The Journal of Korea Robotics Society
    • /
    • Vol. 13, No. 1
    • /
    • pp.26-30
    • /
    • 2018
  • Planetary global localization is necessary for long-range rover missions in which communication with the command center operator is throttled by the long distance. A number of studies have addressed this problem by matching the rover's surroundings against global digital elevation maps (DEM). Conventional matching methods, however, struggle with artifacts in DEM-rendered images and/or rover 2D images caused by low DEM resolution, illumination variations in rover images, and small terrain features. In this work, we train a CNN discriminator to match a rover 2D image with DEM-rendered images using a conditional Generative Adversarial Network (cGAN) architecture. We then use this discriminator to search the uncertainty bound given by the visual odometry (VO) error bound and estimate the rover's optimal location and orientation. We demonstrate the network's capability to learn to translate a rover image into a DEM-simulated image and match the two using the Devon Island dataset. The experimental results show that our proposed approach achieves ~74% mean average precision.
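
The localization step described above amounts to scoring every candidate pose inside the VO uncertainty bound with the trained discriminator and keeping the best match. Below is a minimal Python sketch of that search loop, assuming such a setup; `discriminator_score` and `render_dem_view` are hypothetical stand-ins for the paper's trained cGAN discriminator and DEM renderer, not its actual code.

```python
# Sketch: exhaustive pose search over the VO uncertainty bound,
# scored by a (stand-in) cGAN discriminator.
import numpy as np

def discriminator_score(rover_img, dem_render):
    # Stand-in: the real model returns P(match | rover image, DEM render).
    return -np.mean((rover_img - dem_render) ** 2)

def render_dem_view(dem, x, y, heading):
    # Stand-in: render the DEM as seen from pose (x, y, heading).
    # `dem` is unused here; a real renderer would project it.
    rng = np.random.default_rng(hash((x, y, heading)) % (2**32))
    return rng.random((64, 64))

def localize(rover_img, dem, vo_estimate, vo_bound, step=1.0, n_headings=36):
    """Search all poses within vo_bound of the VO estimate; return the best."""
    x0, y0 = vo_estimate
    best_pose, best_score = None, -np.inf
    for x in np.arange(x0 - vo_bound, x0 + vo_bound + step, step):
        for y in np.arange(y0 - vo_bound, y0 + vo_bound + step, step):
            for heading in np.linspace(0, 360, n_headings, endpoint=False):
                s = discriminator_score(rover_img, render_dem_view(dem, x, y, heading))
                if s > best_score:
                    best_pose, best_score = (x, y, heading), s
    return best_pose, best_score

pose, score = localize(np.random.rand(64, 64), dem=None,
                       vo_estimate=(100.0, 200.0), vo_bound=5.0)
print(pose, score)
```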

Generation of contrast enhanced computed tomography image using deep learning network

  • Woo, Sang-Keun
    • Journal of the Korea Society of Computer and Information
    • /
    • Vol. 24, No. 3
    • /
    • pp.41-47
    • /
    • 2019
  • In this paper, we propose an application of a conditional generative adversarial network (cGAN) for the generation of contrast-enhanced computed tomography (CT) images. Two types of CT data, enhanced and non-enhanced, were used, and histogram equalization was applied to adjust image intensities. To validate the generated contrast-enhanced CT data, the structural similarity index measure (SSIM) was computed, and the generated data were analyzed statistically using a paired-sample t-test. To apply the optimized algorithm to lymph node cancer, tumors were measured by the short-to-long axis ratio (S/L) method. The SSIM of the model trained with the original CT data was 0.905 ± 0.048, and that of the model trained with the histogram-equalized data was 0.908 ± 0.047. The tumor S/L of the generated contrast-enhanced CT data was validated as similar to the ground truth when compared to scanned contrast-enhanced CT data. The expected advantages of deep-learning-generated contrast-enhanced CT data are cost effectiveness and less radiation exposure, as well as additional anatomical information beyond non-enhanced CT data.
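
The preprocessing and validation steps named above (histogram equalization, then SSIM between generated and reference slices) can be illustrated with a short scikit-image sketch. The random arrays below are placeholders for actual CT slices; the pairing shown is an assumption for illustration only.

```python
# Sketch: histogram equalization + SSIM validation, as described above.
import numpy as np
from skimage.exposure import equalize_hist
from skimage.metrics import structural_similarity

rng = np.random.default_rng(0)
ct_generated = rng.random((512, 512))  # placeholder for a cGAN output slice
ct_reference = rng.random((512, 512))  # placeholder for the scanned enhanced slice

# Histogram equalization, used in the paper to adjust image intensities.
ct_generated_eq = equalize_hist(ct_generated)
ct_reference_eq = equalize_hist(ct_reference)

# SSIM between generated and reference slices (data_range needed for floats).
ssim = structural_similarity(ct_generated_eq, ct_reference_eq, data_range=1.0)
print(f"SSIM: {ssim:.3f}")
```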

Solar farside magnetograms from deep learning analysis of STEREO/EUVI data

  • Kim, Taeyoung;Park, Eunsu;Lee, Harim;Moon, Yong-Jae;Bae, Sung-Ho;Lim, Daye;Jang, Soojeong;Kim, Lokwon;Cho, Il-Hyun;Choi, Myungjin;Cho, Kyung-Suk
    • The Bulletin of The Korean Astronomical Society
    • /
    • Vol. 44, No. 1
    • /
    • pp.51.3-51.3
    • /
    • 2019
  • Solar magnetograms are important for studying solar activity and predicting space weather disturbances [1]. Farside magnetograms can be constructed from local helioseismology without any farside data [2-4], but their quality is lower than that of typical frontside magnetograms. Here we generate farside solar magnetograms from STEREO/Extreme UltraViolet Imager (EUVI) 304 Å images using a deep learning model based on conditional generative adversarial networks (cGANs). We train the model using pairs of Solar Dynamics Observatory (SDO)/Atmospheric Imaging Assembly (AIA) 304 Å images and SDO/Helioseismic and Magnetic Imager (HMI) magnetograms taken from 2011 to 2017, excluding September and October of each year. We evaluate the model by comparing pairs of SDO/HMI magnetograms and cGAN-generated magnetograms in September and October. Our method successfully generates frontside solar magnetograms from SDO/AIA 304 Å images that are similar to those of SDO/HMI, with Hale-patterned active regions well replicated. Thus we can monitor the temporal evolution of magnetic fields from the farside to the frontside of the Sun using SDO/HMI magnetograms together with farside magnetograms generated by our model whenever farside extreme-ultraviolet data are available. This study presents an application of image-to-image translation based on cGANs to scientific data.
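
The image-to-image cGAN described above follows the usual pix2pix pattern: a conditional discriminator judges (input, output) pairs while the generator minimizes an adversarial-plus-L1 objective. Below is a condensed PyTorch sketch of one training step with toy networks; the architectures and the L1 weight of 100 are illustrative assumptions, not the paper's configuration.

```python
# Sketch: one pix2pix-style cGAN training step (EUV image -> magnetogram).
import torch
import torch.nn as nn

G = nn.Sequential(  # toy generator: 1-channel EUV image -> 1-channel magnetogram
    nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(),
    nn.Conv2d(32, 1, 3, padding=1), nn.Tanh())
D = nn.Sequential(  # toy PatchGAN-style discriminator on (input, output) pairs
    nn.Conv2d(2, 32, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
    nn.Conv2d(32, 1, 4, stride=2, padding=1))

opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
bce, l1 = nn.BCEWithLogitsLoss(), nn.L1Loss()

x = torch.randn(4, 1, 64, 64)  # batch of AIA 304 A images (placeholder)
y = torch.randn(4, 1, 64, 64)  # matching HMI magnetograms (placeholder)

# Discriminator step: real pairs vs. generated pairs.
fake = G(x)
d_real = D(torch.cat([x, y], dim=1))
d_fake = D(torch.cat([x, fake.detach()], dim=1))
loss_d = bce(d_real, torch.ones_like(d_real)) + bce(d_fake, torch.zeros_like(d_fake))
opt_d.zero_grad(); loss_d.backward(); opt_d.step()

# Generator step: fool D, plus L1 reconstruction toward the real magnetogram.
d_fake = D(torch.cat([x, fake], dim=1))
loss_g = bce(d_fake, torch.ones_like(d_fake)) + 100.0 * l1(fake, y)
opt_g.zero_grad(); loss_g.backward(); opt_g.step()
print(f"D loss {loss_d.item():.3f}, G loss {loss_g.item():.3f}")
```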


Deep-learning based SAR Ship Detection with Generative Data Augmentation

  • Kwon, Hyeongjun;Jeong, Somi;Kim, Sungtae;Lee, Jaeseok;Sohn, Kwanghoon
    • Journal of Korea Multimedia Society
    • /
    • Vol. 25, No. 1
    • /
    • pp.1-9
    • /
    • 2022
  • Ship detection in synthetic aperture radar (SAR) images is an important application of marine monitoring in both the military and civilian domains. Over the past decade, object detection has achieved significant progress with the development of convolutional neural networks (CNNs) and large labeled databases. However, because SAR images are difficult to collect and label, training CNNs for SAR ship detection remains a challenging task. To overcome this problem, some methods have employed conventional data augmentation techniques such as flipping, cropping, and affine transformation, but these are insufficient for robust performance across the wide variety of ship types. In this paper, we present a novel and effective approach to deep SAR ship detection that exploits label-rich electro-optical (EO) images. The proposed method consists of two components: a data augmentation network and a ship detection network. First, we train the data augmentation network, based on a conditional generative adversarial network (cGAN), to generate additional SAR images from EO images. Since it is trained on unpaired EO and SAR images, we impose a cycle-consistency loss to preserve structural information while translating the characteristics of the images. After training the data augmentation network, we use the augmented dataset, consisting of real and translated SAR images, to train the ship detection network. The experimental results include a qualitative evaluation of the translated SAR images and a comparison of the detection performance of networks trained with the non-augmented and augmented datasets, which demonstrates the effectiveness of the proposed framework.
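
Because the augmentation network is trained on unpaired EO and SAR images, the cycle-consistency loss translates an image to the other domain and back, penalizing the round-trip error. A minimal PyTorch sketch of that constraint follows; `G_es`, `G_se`, and the toy layers are hypothetical stand-ins for the paper's translators, not its architecture.

```python
# Sketch: cycle-consistency loss for unpaired EO <-> SAR translation.
import torch
import torch.nn as nn

def toy_translator():
    return nn.Sequential(nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
                         nn.Conv2d(16, 1, 3, padding=1))

G_es = toy_translator()  # EO  -> SAR
G_se = toy_translator()  # SAR -> EO
l1 = nn.L1Loss()

eo = torch.randn(4, 1, 64, 64)   # unpaired EO batch (placeholder)
sar = torch.randn(4, 1, 64, 64)  # unpaired SAR batch (placeholder)

# Translate to the other domain and back; the round-trip error forces
# the translators to preserve structural information.
loss_cycle = l1(G_se(G_es(eo)), eo) + l1(G_es(G_se(sar)), sar)
# In full training this term is added to the adversarial losses, e.g.:
# loss_g = loss_adv_es + loss_adv_se + lambda_cyc * loss_cycle
print(f"cycle-consistency loss: {loss_cycle.item():.3f}")
```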

Generation of He I 1083 nm Images from SDO/AIA 19.3 and 30.4 nm Images by Deep Learning

  • Son, Jihyeon;Cha, Junghun;Moon, Yong-Jae;Lee, Harim;Park, Eunsu;Shin, Gyungin;Jeong, Hyun-Jin
    • The Bulletin of The Korean Astronomical Society
    • /
    • Vol. 46, No. 1
    • /
    • pp.41.2-41.2
    • /
    • 2021
  • In this study, we generate He I 1083 nm images from Solar Dynamics Observatory (SDO)/Atmospheric Imaging Assembly (AIA) images using a novel deep learning method (pix2pixHD) based on conditional generative adversarial networks (cGANs). He I 1083 nm images from the National Solar Observatory (NSO)/Synoptic Optical Long-term Investigations of the Sun (SOLIS) are used as target data. We build three models: Model I takes a single input SDO/AIA 19.3 nm image, Model II a single input 30.4 nm image, and Model III double input (19.3 and 30.4 nm) images. We use data from October 2010 to July 2015, excluding June and December, for training, and the remaining months for testing. The major results of our study are as follows. First, the models successfully generate He I 1083 nm images with high correlations. Second, the model with two input images shows better results than those with one input image in terms of metrics such as the correlation coefficient (CC) and root mean squared error (RMSE). CC and RMSE between real and AI-generated images for Model III with 4 by 4 binning are 0.84 and 11.80, respectively. Third, the AI-generated images reproduce observational features such as active regions, filaments, and coronal holes well. This work is meaningful in that our model can produce He I 1083 nm images at higher cadence without data gaps, which would be useful for studying the time evolution of the chromosphere and coronal holes.
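
The CC and RMSE comparison with 4 by 4 binning reduces each image by block-averaging before computing the metrics. A short NumPy sketch of that evaluation, with random arrays standing in for the real SOLIS image and the AI-generated counterpart:

```python
# Sketch: 4x4 binning, then CC and RMSE between real and generated images.
import numpy as np

def bin_image(img, k=4):
    """Average-pool an image over k x k blocks (dimensions divisible by k)."""
    h, w = img.shape
    return img.reshape(h // k, k, w // k, k).mean(axis=(1, 3))

rng = np.random.default_rng(0)
real = rng.random((1024, 1024))       # placeholder for the observed He I image
generated = rng.random((1024, 1024))  # placeholder for the pix2pixHD output

rb, gb = bin_image(real), bin_image(generated)
cc = np.corrcoef(rb.ravel(), gb.ravel())[0, 1]
rmse = np.sqrt(np.mean((rb - gb) ** 2))
print(f"CC: {cc:.2f}, RMSE: {rmse:.2f}")
```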


Regeneration of a Defective Railroad Surface for Defect Detection with Deep Convolution Neural Networks

  • Kim, Hyunho;Han, Seokmin
    • Journal of Internet Computing and Services
    • /
    • Vol. 21, No. 6
    • /
    • pp.23-31
    • /
    • 2020
  • 본 연구는 철도표면상에 발생하는 노후 현상 중 하나인 결함 검출을 위해 학습데이터를 생성함으로써 결함 검출 모델에서 더 높은 점수를 얻기 위해 진행되었다. 철도표면에서 결함은 선로결속장치 및 선로와 차량의 마찰 등 다양한 원인에 의해 발생하고 선로 파손 등의 사고를 유발할 수 있기 때문에 결함에 대한 철도 유지관리가 필요 하다. 그래서 철도 유지관리의 자동화 및 비용절감을 위해 철도 표면 영상에 영상처리 또는 기계학습을 활용한 결함 검출 및 검사에 대한 다양한 연구가 진행되고 있다. 일반적으로 영상 처리 분석기법 및 기계학습 기술의 성능은 데이터의 수량과 품질에 의존한다. 그렇기 때문에 일부 연구는 일반적이고 다양한 철도표면영상의 데이터베이스를 확보하기위해 등간격으로 선로표면을 촬영하는 장치 또는 탑재된 차량이 필요로 하였다. 본연구는 이러한 기계적인 영상획득 장치의 운용비용을 감소시키고 보완하기 위해 대표적인 영상생성관련 딥러닝 모델인 생성적 적대적 네트워크의 기본 구성에서 여러 관련연구에서 제시된 방법을 응용, 결함이 있는 철도 표면 재생성모델을 구성하여, 전용 데이터베이스가 구축되지 않은 철도 표면 영상에 대해서도 결함 검출을 진행할 수 있도록 하였다. 구성한 모델은 상이한 철도 표면 텍스처들을 반영한 철도 표면 생성을 학습하고 여러 임의의 결함의 위치에 대한 Ground-Truth들을 만족하는 다양한 결함을 재 생성하도록 설계하였다. 재생성된 철도 표면의 영상들을 결함 검출 딥러닝 모델에 학습데이터로 사용한다. 재생성모델의 유효성을 검증하기 위해 철도표면데이터를 3가지의 하위집합으로 군집화 하여 하나의 집합세트를 원본 영상으로 정의하고, 다른 두개의 나머지 하위집합들의 몇가지의 선로표면영상을 텍스처 영상으로 사용하여 새로운 철도 표면 영상을 생성한다. 그리고 결함 검출 모델에서 학습데이터로 생성된 새로운 철도 표면 영상을 사용하였을 때와, 생성된 철도 표면 영상이 없는 원본 영상을 사용하였을 때를 나누어 검증한다. 앞서 분류했던 하위집합들 중에서 원본영상으로 사용된 집합세트를 제외한 두 개의 하위집합들은 각각의 환경에서 학습된 결함 검출 모델에서 검증하여 출력인 픽셀단위 분류지도 영상을 얻는다. 이 픽셀단위 분류지도영상들과 실제 결함의 위치에 대한 원본결함 지도(Ground-Truth)들의 IoU(Intersection over Union) 및 F1-score로 평가하여 성능을 계산하였다. 결과적으로 두개의 하위집합의 텍스처 영상을 이용한 재생성된 학습데이터를 학습한 결함 검출모델의 점수는 원본 영상만을 학습하였을 때의 점수보다 약 IoU 및 F1-score가 10~15% 증가하였다. 이는 전용 학습 데이터가 구축되지 않은 철도표면 영상에 대해서도 기존 데이터를 이용하여 결함 검출이 상당히 가능함을 증명하는 것이다.