• Title/Abstract/Keyword: Pix2pix

Search results: 59 items (processing time: 0.024 s)

Calculating coniferous tree coverage using unmanned aerial vehicle photogrammetry

  • Ivosevic, Bojana;Han, Yong-Gu;Kwon, Ohseok
    • Journal of Ecology and Environment
    • /
    • Vol. 41, No. 3
    • /
    • pp.85-92
    • /
    • 2017
  • Unmanned aerial vehicles (UAVs) are a new and constantly developing part of forest inventory studies and vegetation-monitoring fields. Covering large areas, their extensive use has saved researchers and conservationists time and money in surveying vegetation for various data analyses. Post-processing imaging software has further improved the effectiveness of UAVs by providing 3D models for accurate visualization of the data. We focus on determining coniferous tree coverage to show the current advantages and disadvantages of the orthorectified 2D and 3D models obtained from the image photogrammetry software Pix4Dmapper Pro-Non-Commercial. We also examine the methodology used for mapping the study site and additionally investigate the spread of coniferous trees. The collected images were transformed into 2D black-and-white binary pixel images, and the coverage area of coniferous trees in the study site was calculated using MATLAB. The research concluded that the 3D model was effective for perceiving the tree composition of the designated site, while the orthorectified 2D map is appropriate for clearly differentiating coniferous from deciduous trees. The paper also discusses how UAVs could be improved for future use.
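Once the imagery is reduced to a binary mask as described above, the coverage calculation is a pixel count. A minimal NumPy sketch of that step (not the authors' MATLAB code; the ground-sample-distance parameter `gsd_m` is a hypothetical input):

```python
import numpy as np

# Illustrative sketch: coniferous coverage from a binary (0/1) pixel mask.
def coverage_fraction(binary_mask: np.ndarray) -> float:
    """Fraction of pixels classified as coniferous (nonzero)."""
    return float(np.count_nonzero(binary_mask)) / binary_mask.size

def coverage_area_m2(binary_mask: np.ndarray, gsd_m: float) -> float:
    """Ground area covered, given the ground sample distance in metres/pixel."""
    return float(np.count_nonzero(binary_mask)) * gsd_m ** 2

mask = np.zeros((100, 100), dtype=np.uint8)
mask[:25, :] = 1                      # 25% of the scene marked coniferous
print(coverage_fraction(mask))        # 0.25
print(coverage_area_m2(mask, 0.05))   # 2500 px at 5 cm GSD ≈ 6.25 m²
```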

Toward accurate synchronic magnetic field maps using solar frontside and AI-generated farside data

  • Jeong, Hyun-Jin;Moon, Yong-Jae;Park, Eunsu
    • Bulletin of the Korean Astronomical Society
    • /
    • Vol. 46, No. 1
    • /
    • pp.41.3-42
    • /
    • 2021
  • Conventional global magnetic field maps, such as daily updated synoptic maps, have been constructed by merging a series of observations taken from the Earth's viewing direction over a 27-day solar rotation period to represent the full surface of the Sun. Such maps are limited in predicting real-time farside magnetic fields, especially rapid changes caused by flux emergence or disappearance. Here, we construct accurate synchronic magnetic field maps using frontside and AI-generated farside data. To generate the farside data, we train and evaluate our deep learning model with frontside SDO observations. We use an improved version of Pix2PixHD with a new objective function and a new configuration of the model input data. We compute correlation coefficients between real magnetograms and AI-generated ones for test data sets and demonstrate that our model generates magnetic field distributions better than before. We compare AI-generated farside data with those predicted by the magnetic flux transport model. Finally, we assimilate our AI-generated farside magnetograms into the flux transport model and show several successive global magnetic field maps from our new methodology.
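For background, the standard pix2pix-style generator objective (an adversarial term plus a λ-weighted L1 term) can be sketched as follows. This is the generic formulation only; the paper's new objective function is not specified here, and all names are illustrative:

```python
import numpy as np

# Generic pix2pix-style generator loss: adversarial term + lambda * L1 term.
def generator_loss(d_fake: np.ndarray, fake: np.ndarray,
                   target: np.ndarray, lam: float = 100.0) -> float:
    eps = 1e-12
    adv = -np.mean(np.log(d_fake + eps))   # push discriminator scores toward 1
    l1 = np.mean(np.abs(fake - target))    # keep output close to the real magnetogram
    return float(adv + lam * l1)
```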


Synthesis of T2-weighted images from proton density images using a generative adversarial network in a temporomandibular joint magnetic resonance imaging protocol

  • Lee, Chena;Ha, Eun-Gyu;Choi, Yoon Joo;Jeon, Kug Jin;Han, Sang-Sun
    • Imaging Science in Dentistry
    • /
    • Vol. 52, No. 4
    • /
    • pp.393-398
    • /
    • 2022
  • Purpose: This study proposed a generative adversarial network (GAN) model for T2-weighted image (WI) synthesis from proton density (PD)-WI in a temporomandibular joint (TMJ) magnetic resonance imaging (MRI) protocol. Materials and Methods: From January to November 2019, MRI scans of the TMJ were reviewed and 308 imaging sets were collected. For training, 277 pairs of PD- and T2-WI sagittal TMJ images were used. Transfer learning of the pix2pix GAN model was utilized to generate T2-WI from PD-WI. Model performance was evaluated with the structural similarity index map (SSIM) and peak signal-to-noise ratio (PSNR) indices for 31 predicted T2-WI (pT2). The disc position was clinically diagnosed as anterior disc displacement with or without reduction, and joint effusion as present or absent. The true T2-WI-based diagnosis was regarded as the gold standard, to which pT2-based diagnoses were compared using Cohen's κ coefficient. Results: The mean SSIM and PSNR values were 0.4781 (±0.0522) and 21.30 (±1.51) dB, respectively. The pT2 protocol showed almost perfect agreement (κ=0.81) with the gold standard for disc position. The number of discordant cases was higher for normal disc position (17%) than for anterior displacement with reduction (2%) or without reduction (10%). The effusion diagnosis also showed almost perfect agreement (κ=0.88), with higher concordance for the presence (85%) than for the absence (77%) of effusion. Conclusion: The application of pT2 images in a TMJ MRI protocol was useful for diagnosis, although the image quality of pT2 was not fully satisfactory. Further research is expected to enhance pT2 quality.
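The two kinds of evaluation above, image-quality indices and inter-diagnosis agreement, can be sketched with minimal generic implementations of PSNR and Cohen's κ (SSIM is omitted for brevity); these are the standard formulas, not the authors' code:

```python
import numpy as np

def psnr(real: np.ndarray, pred: np.ndarray, data_range: float = 255.0) -> float:
    """Peak signal-to-noise ratio in dB."""
    mse = np.mean((real.astype(float) - pred.astype(float)) ** 2)
    return float(10.0 * np.log10(data_range ** 2 / mse))

def cohens_kappa(a, b) -> float:
    """Chance-corrected agreement between two categorical diagnosis lists."""
    a, b = np.asarray(a), np.asarray(b)
    po = float(np.mean(a == b))                        # observed agreement
    pe = sum(float(np.mean(a == l)) * float(np.mean(b == l))
             for l in np.union1d(a, b))                # chance agreement
    return (po - pe) / (1.0 - pe)
```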

Comparison of Virtual 3D Tree Modelling Using Photogrammetry Software and Laser Scanning Technology

  • 박재민
    • Journal of the Korea Institute of Information and Communication Engineering
    • /
    • Vol. 24, No. 2
    • /
    • pp.304-310
    • /
    • 2020
  • This study compares and analyzes the reproduction characteristics (tree form, texture, detailed dimensions) of 3D models produced with laser scanning and photogrammetry software against actual trees, to clarify their applicability. A Chinese juniper was reproduced as a 3D model using photogrammetry (Pix4D) and a 3D scanner (Faro S350). Both 3D scanning and photogrammetry showed high reproducibility. In particular, compared with photogrammetry based on UAV images taken at long range, 3D scanning produced far better reproductions of bark and leaves. Comparing detailed dimensions, the error between the actual tree and the 3D scan was 1.7-2.2%, with the scanned model measuring larger than the actual tree, and the error between the actual tree and the photogrammetric model was 0.2-0.5%, with the photogrammetric model also measuring larger. By examining the characteristics of virtual 3D tree modelling, this study serves as basic research for future applications such as building landscape-tree databases for BIM, augmented-reality-linked landscape design and visual analysis, and the conservation of old trees.
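The dimensional comparison above reduces to a signed percent deviation of a model measurement from the corresponding field measurement. A trivial sketch (the numbers below are illustrative, not values from the study):

```python
# Signed percent deviation of a 3D-model measurement from a field measurement.
def relative_error_pct(model_value: float, field_value: float) -> float:
    return (model_value - field_value) / field_value * 100.0

print(relative_error_pct(5.11, 5.00))  # ≈ +2.2%: the model measures larger
```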

Generation of He I 1083 nm Images from SDO/AIA 19.3 and 30.4 nm Images by Deep Learning

  • Son, Jihyeon;Cha, Junghun;Moon, Yong-Jae;Lee, Harim;Park, Eunsu;Shin, Gyungin;Jeong, Hyun-Jin
    • Bulletin of the Korean Astronomical Society
    • /
    • Vol. 46, No. 1
    • /
    • pp.41.2-41.2
    • /
    • 2021
  • In this study, we generate He I 1083 nm images from Solar Dynamics Observatory (SDO)/Atmospheric Imaging Assembly (AIA) images using a novel deep learning method (pix2pixHD) based on conditional Generative Adversarial Networks (cGAN). He I 1083 nm images from National Solar Observatory (NSO)/Synoptic Optical Long-term Investigations of the Sun (SOLIS) are used as target data. We build three models: single-input SDO/AIA 19.3 nm images for Model I, single-input 30.4 nm images for Model II, and double-input (19.3 and 30.4 nm) images for Model III. We use data from October 2010 to July 2015, excluding June and December, for training, and the remaining months for testing. The major results of our study are as follows. First, the models successfully generate He I 1083 nm images with high correlations. Second, the model with two input images shows better results than those with one input image in terms of metrics such as the correlation coefficient (CC) and root mean squared error (RMSE). CC and RMSE between real and AI-generated images for Model III with 4-by-4 binning are 0.84 and 11.80, respectively. Third, AI-generated images reproduce observational features such as active regions, filaments, and coronal holes well. This work is meaningful in that our model can produce He I 1083 nm images at a higher cadence without data gaps, which would be useful for studying the time evolution of the chromosphere and coronal holes.
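The reported metrics can be sketched as spatial binning followed by CC and RMSE between real and generated images; these are the generic formulas, not the authors' evaluation code:

```python
import numpy as np

def bin_image(img: np.ndarray, k: int = 4) -> np.ndarray:
    """Average k-by-k pixel blocks (e.g. the 4-by-4 binning mentioned above)."""
    h, w = (img.shape[0] // k) * k, (img.shape[1] // k) * k
    return img[:h, :w].reshape(h // k, k, w // k, k).mean(axis=(1, 3))

def cc_rmse(real: np.ndarray, gen: np.ndarray):
    """Pearson correlation coefficient and RMSE over all pixels."""
    r, g = real.ravel(), gen.ravel()
    cc = float(np.corrcoef(r, g)[0, 1])
    rmse = float(np.sqrt(np.mean((r - g) ** 2)))
    return cc, rmse
```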


Effects of Boshimgeonbi-tang on Gene Expression in Hypothalamus of Immobilization-stressed Mouse

  • 이승희;장규태;김장현
    • Journal of Physiology & Pathology in Korean Medicine
    • /
    • Vol. 19, No. 6
    • /
    • pp.1585-1593
    • /
    • 2005
  • The genetic effects of restraint stress on the HPA axis and the therapeutic effect of Boshimgeonbi-tang on that stress were studied with cDNA microarray analyses and RT-PCR on the hypothalamus, using immobilization-stressed mice as an animal model. Male CD-1 mice were restrained in a tightly fitted, ventilated vinyl holder for 2 h once a day, and this challenge was repeated for seven consecutive days. The change in body weight showed that Boshimgeonbi-tang promoted recovery from the weight loss caused by the immobilization stress. Seven days later, total RNA was extracted from the organs of the mice, body-labeled with $CyDye^{TM}$ fluorescence dyes, and then hybridized to a cDNA microarray chip. Scanning and analysis of the array slides were carried out using a GenePix 4000 series scanner and the GenePix $Pro^{TM}$ analysis program, respectively. The expression profiles of 109 of the 6000 genes on the chip were significantly modulated in the hypothalamus by the immobilization stress. Energy metabolism-, lipid metabolism-, apoptosis-, stress protein-, transcription factor-, and signal transduction-related genes were transcriptionally activated, whereas DNA repair-, protein biosynthesis-, and structural integrity-related genes were down-regulated. Of these, 58 genes were up-regulated by 1.5- to 7.9-fold in mRNA expression, and 51 genes were down-regulated by 1.5- to 5.5-fold. Eleven of these genes were selected to confirm the expression profiles by RT-PCR. The mRNA expression levels of Tnfrsf1a (apoptosis), Calm2 (cell cycle), Bag3 (apoptosis), Ogg1 (DNA repair), Aatk (apoptosis), Dffa (apoptosis), and Fkbp5 (protein folding) were restored to normal levels by the treatment with Boshimgeonbi-tang.
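The 1.5-fold cutoffs implied by the up/down-regulation ranges above can be expressed as a simple classifier; a generic sketch, not part of the GenePix analysis pipeline:

```python
# Classify a gene by its expression fold change between stressed and control
# samples, using a symmetric threshold (>= 1.5-fold up, <= 1/1.5-fold down).
def classify_fold_change(stress_expr: float, control_expr: float,
                         threshold: float = 1.5) -> str:
    fc = stress_expr / control_expr
    if fc >= threshold:
        return "up"
    if fc <= 1.0 / threshold:
        return "down"
    return "unchanged"

print(classify_fold_change(7.9, 1.0))   # "up"   (7.9-fold induction)
print(classify_fold_change(1.0, 5.5))   # "down" (5.5-fold repression)
```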

A Study on Green Algae Monitoring in Watershed Using Fixed Wing UAV

  • 박정일;최승영;박민호
    • Journal of Korean Institute of Intelligent Systems
    • /
    • Vol. 27, No. 2
    • /
    • pp.164-169
    • /
    • 2017
  • This study aims to manage the aquatic environment efficiently by mounting a multispectral sensor on a fixed-wing UAV to photograph rivers in the Geum River basin and performing NDVI analysis, so that green algae in the water system can be monitored continuously. The study area is near Baekje Weir in the Geum River basin, and the data used were images taken in July 2016, the early stage of a green-algae outbreak. For data processing, NDVI images were generated using Pix4D software. The generated NDVI images were compared with in-situ chlorophyll measurements to derive a relational equation, and the image values were converted accordingly. As a result, chlorophyll images reflecting the measured values could be extracted, and acquiring chlorophyll information with UAVs is expected to be very useful for green-algae observation and monitoring in aquatic environment management, as well as for disaster prevention.
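NDVI, the index computed above, is the normalized difference of near-infrared and red reflectance. Pix4D computes this from the multispectral bands internally, but the formula itself is standard and easy to sketch:

```python
import numpy as np

def ndvi(nir: np.ndarray, red: np.ndarray) -> np.ndarray:
    """Normalized Difference Vegetation Index: (NIR - Red) / (NIR + Red)."""
    nir = nir.astype(float)
    red = red.astype(float)
    return (nir - red) / (nir + red + 1e-12)   # epsilon avoids division by zero

# Dense vegetation reflects strongly in NIR, so NDVI approaches 1:
print(ndvi(np.array([0.6]), np.array([0.2])))  # ≈ [0.5]
```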

True Orthoimage Generation from LiDAR Intensity Using Deep Learning

  • 신영하;형성웅;이동천
    • Journal of the Korean Society of Surveying, Geodesy, Photogrammetry and Cartography
    • /
    • Vol. 38, No. 4
    • /
    • pp.363-373
    • /
    • 2020
  • Many studies have been conducted on orthoimage generation. Conventional methods require the exterior orientation parameters of aerial images and precise 3D object modelling data to detect and restore occluded areas, and automating this complex sequence of processes is difficult. This paper departs from conventional methods and proposes a new approach that produces true orthoimages using deep learning (DL). Deep learning is being adopted ever more rapidly in many fields, and generative adversarial networks (GANs) have recently attracted much attention in image processing and computer vision. The generator of a GAN is trained to produce results similar to real images, and the discriminator iterates until the generator's output is judged to be a real image. In this paper, LiDAR intensity data and infrared orthoimages from the dataset built by the German Society for Photogrammetry, Remote Sensing and Geoinformation (DGPF) and provided by the International Society for Photogrammetry and Remote Sensing (ISPRS) were used to train a GAN-based Pix2Pix model, and two methods for generating true orthoimages were proposed. The first method trains with a LiDAR intensity image as input and a high-resolution orthoimage as the target image. In the second method, the input is again the LiDAR intensity image, but the target is a low-resolution image created by assigning colors to the LiDAR point cloud, and the model is trained recursively to improve image quality progressively. A quantitative comparison of the orthoimages generated by the two methods using the Fréchet Inception Distance (FID) showed no large difference, but better results were obtained as the quality of the input and target images became more similar and as the number of training epochs increased. As an early-stage experimental study confirming the feasibility of true orthoimage generation with deep learning, this work identified points to supplement and improve in the future.
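The FID metric used above compares the Gaussian statistics (mean and covariance) of feature activations for real and generated images. Under the simplifying assumption of diagonal covariances the matrix square root reduces to an elementwise one; the sketch below illustrates the metric's structure only, not a full FID implementation:

```python
import numpy as np

# Fréchet distance between two Gaussians with diagonal covariances:
# ||mu1 - mu2||^2 + sum(var1 + var2 - 2*sqrt(var1*var2)).
def fid_diagonal(mu1, var1, mu2, var2) -> float:
    mu1, var1, mu2, var2 = map(np.asarray, (mu1, var1, mu2, var2))
    mean_term = np.sum((mu1 - mu2) ** 2)
    cov_term = np.sum(var1 + var2 - 2.0 * np.sqrt(var1 * var2))
    return float(mean_term + cov_term)
```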

Simulation and Colorization between Gray-scale Images and Satellite SAR Images Using GAN

  • 조수민;허준혁;어양담
    • Journal of the Korean Society of Civil Engineers
    • /
    • Vol. 44, No. 1
    • /
    • pp.125-132
    • /
    • 2024
  • Optical satellite imagery is used for national security and information acquisition, and its utilization is increasing. However, weather conditions and time constraints can yield low-quality images that do not meet users' needs. In this paper, to simulate the cloud-occluded areas of optical satellite images, we built a deep-learning-based image translation and colorization model that references high-resolution SAR imagery. The model was tested according to the applied algorithm and input data format, and the generated simulated images were compared and analyzed. In particular, the input gray-scale images and SAR images were made to carry similar amounts of pixel-value information, mitigating problems arising from the relative lack of color information. As a result, the histogram distribution of simulated images trained on gray-scale images and high-resolution SAR images was relatively similar to that of the original images, and the RMSE and PSNR values computed for quantitative analysis were about 6.9827 and about 31.3960, respectively.
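The histogram comparison above can be made quantitative with a histogram-intersection score; a generic sketch of one way to formalize "similar histogram distributions" (the paper itself reports RMSE and PSNR, not this metric):

```python
import numpy as np

def histogram_intersection(img_a, img_b, bins=256, value_range=(0, 255)) -> float:
    """Overlap of two normalized gray-level histograms:
    1.0 = identical distributions, 0.0 = completely disjoint."""
    ha, _ = np.histogram(img_a, bins=bins, range=value_range)
    hb, _ = np.histogram(img_b, bins=bins, range=value_range)
    ha = ha / ha.sum()   # convert counts to probability mass
    hb = hb / hb.sum()
    return float(np.minimum(ha, hb).sum())
```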

A Method for Generating Malware Countermeasure Samples Based on Pixel Attention Mechanism

  • Xiangyu Ma;Yuntao Zhao;Yongxin Feng;Yutao Hu
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • Vol. 18, No. 2
    • /
    • pp.456-477
    • /
    • 2024
  • With the rapid development of information technology, the Internet faces serious security problems. Studies have shown that malware has become a primary means of attacking the Internet; adversarial samples have therefore become a vital breakthrough point for studying malware. By studying adversarial samples, we can gain insight into the behavior and characteristics of malware, evaluate the performance of existing detectors against deceptive samples, and help discover vulnerabilities and improve detection methods. However, existing adversarial sample generation methods still fall short in escape effectiveness and mobility. For instance, researchers have attempted to incorporate perturbation methods such as the Fast Gradient Sign Method (FGSM) and Projected Gradient Descent (PGD) into adversarial samples to obfuscate detectors, but these methods are effective only in specific environments and yield limited evasion effectiveness. To solve these problems, this paper proposes a malware adversarial sample generation method (PixGAN) based on a pixel attention mechanism, which aims to improve the escape effect and mobility of adversarial samples. The method transforms malware into grey-scale images and introduces the pixel attention mechanism into the Deep Convolutional Generative Adversarial Network (DCGAN) model to weight the critical pixels in the grey-scale map, which improves the modeling ability of the generator and discriminator and thus enhances the escape effect and mobility of the adversarial samples. The escape rate (ASR) is used as an evaluation index of the quality of the adversarial samples. The experimental results show that the adversarial samples generated by PixGAN achieve escape rates of 97%, 94%, 35%, 39%, and 43% on Random Forest (RF), Support Vector Machine (SVM), Convolutional Neural Network (CNN), CNN with Recurrent Neural Network (CNN_RNN), and CNN with Long Short-Term Memory (CNN_LSTM) detectors, respectively.
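The pixel attention idea above, weighting critical pixels in a grey-scale feature map, can be sketched as a 1×1-convolution-style gate. This is a simplified NumPy illustration of the general mechanism, not the PixGAN implementation (which operates inside a DCGAN's convolutional generator and discriminator); `w` and `b` stand in for hypothetical learned parameters:

```python
import numpy as np

def sigmoid(x: np.ndarray) -> np.ndarray:
    return 1.0 / (1.0 + np.exp(-x))

def pixel_attention(features: np.ndarray, w: np.ndarray, b: float) -> np.ndarray:
    """Gate a C x H x W feature map with a per-pixel weight in (0, 1).

    The score at each pixel is a weighted sum over channels, i.e. a
    1x1 convolution, so important locations are amplified elementwise."""
    scores = np.tensordot(w, features, axes=([0], [0])) + b  # shape H x W
    attn = sigmoid(scores)                                   # per-pixel weight
    return features * attn[None, :, :]
```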