• Title/Abstract/Keywords: Deep Convolutional Generative Adversarial Networks

Search results: 27 (processing time: 0.022 s)

Deep Adversarial Residual Convolutional Neural Network for Image Generation and Classification

  • Haque, Md Foysal;Kang, Dae-Seong
    • 한국정보기술학회 영문논문지 / Vol. 10, No. 1 / pp.111-120 / 2020
  • Generative adversarial networks (GANs) have achieved impressive performance in image generation and visual classification applications. However, adversarial networks face difficulties in training the generative model, and the training process is unstable. To overcome this problem, we combined a deep residual network with upsampling convolutional layers to construct the generative network. Moreover, the study shows that image generation and classification performance become more prominent when residual layers are included in the generator. Empirically, the proposed network can generate images with higher visual accuracy at the cost of a certain amount of additional complexity, given proper regularization techniques. Experimental evaluation shows that the proposed method is superior on image generation and classification tasks.
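The core idea above, a residual skip connection wrapped around an upsampling stage of the generator, can be sketched minimally in numpy. This is a hypothetical illustration, not the authors' implementation; the `transform` argument is a toy stand-in for a real convolutional layer.

```python
import numpy as np

def upsample_nn(x, scale=2):
    # Nearest-neighbor upsampling of an (H, W, C) feature map.
    return x.repeat(scale, axis=0).repeat(scale, axis=1)

def residual_upsample_block(x, transform):
    # Residual connection around an upsampling stage: the upsampled
    # identity is added back to the transformed features.
    up = upsample_nn(x)
    return up + transform(up)

x = np.ones((4, 4, 8))
out = residual_upsample_block(x, lambda t: 0.1 * t)  # toy stand-in for a conv
print(out.shape)  # (8, 8, 8)
```

The skip path lets gradients bypass the transform, which is the usual motivation for putting residual layers inside a GAN generator.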

국방용 합성이미지 데이터셋 생성을 위한 대립훈련신경망 기술 적용 연구 (Synthetic Image Dataset Generation for Defense using Generative Adversarial Networks)

  • 양훈민
    • 한국군사과학기술학회지 / Vol. 22, No. 1 / pp.49-59 / 2019
  • Generative adversarial networks (GANs) have received great attention in the machine learning field for their capacity to implicitly model high-dimensional, complex data distributions and to generate new samples from the model distribution. This paper investigates the training methodology, architecture, and various applications of generative adversarial networks. An experimental evaluation is also conducted on generating synthetic image datasets for defense using two types of GANs: one for military image generation using the deep convolutional generative adversarial network (DCGAN), and the other for visible-to-infrared image translation using the cycle-consistent generative adversarial network (CycleGAN). Each model can yield a great diversity of high-fidelity synthetic images compared to the training images. This result opens up the possibility of using inexpensive synthetic images to train neural networks while avoiding the enormous expense of collecting large amounts of hand-annotated real data.
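The visible-to-infrared translation mentioned above relies on CycleGAN's cycle-consistency constraint: translating an image to the other domain and back should reconstruct it. A toy numpy sketch of that loss (with stand-in mappings, not the actual networks):

```python
import numpy as np

def cycle_consistency_loss(x, G, F):
    # CycleGAN's cycle loss: translating visible -> infrared (G) and
    # back (F) should reconstruct the original image (L1 distance).
    return float(np.mean(np.abs(F(G(x)) - x)))

img = np.random.rand(8, 8)
G = lambda v: v + 0.5   # toy "visible-to-infrared" mapping
F = lambda v: v - 0.5   # toy inverse mapping
print(cycle_consistency_loss(img, G, F))  # ~0.0 for a perfect inverse pair
```

In the real model this loss is added to the adversarial losses of both generators, which is what allows training without paired visible/infrared images.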

Bagging deep convolutional autoencoders trained with a mixture of real data and GAN-generated data

  • Hu, Cong;Wu, Xiao-Jun;Shu, Zhen-Qiu
    • KSII Transactions on Internet and Information Systems (TIIS) / Vol. 13, No. 11 / pp.5427-5445 / 2019
  • While deep neural networks have achieved remarkable performance in representation learning, a huge amount of labeled training data is usually required by supervised deep models such as convolutional neural networks. In this paper, we propose a new representation learning method, namely generative adversarial network (GAN) based bagging deep convolutional autoencoders (GAN-BDCAE), which can map data to diverse hierarchical representations in an unsupervised fashion. Boosting the size of the training data, training deep models, and aggregating diverse learning machines are the three principal avenues toward increasing the representation-learning capability of neural networks, and we focus on combining those three techniques. To this end, we adopt GANs for realistic unlabeled sample generation and bagging deep convolutional autoencoders (BDCAE) for robust feature learning. The proposed method improves the discriminative ability of the learned feature embedding for solving subsequent pattern recognition problems. We evaluate our approach on three standard benchmarks and demonstrate the superiority of the proposed method over traditional unsupervised learning methods.
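The bagging step described above, aggregating the outputs of several independently trained encoders, reduces to a simple average over their representations. A minimal sketch (the encoders here are hypothetical toy functions, not the paper's autoencoders):

```python
import numpy as np

def bagged_encode(x, encoders):
    # Aggregate the representations of several independently trained
    # encoders by averaging (bagging), yielding a more robust feature.
    return np.mean([enc(x) for enc in encoders], axis=0)

x = np.array([1.0, 2.0, 3.0])
encoders = [lambda v: 2 * v, lambda v: 4 * v]  # toy stand-ins for trained encoders
print(bagged_encode(x, encoders))  # [3. 6. 9.]
```

Each encoder would be trained on a different bootstrap mixture of real and GAN-generated samples, so the averaged embedding smooths out the idiosyncrasies of any single model.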

신제품 개발을 위한 GAN 기반 생성모델 성능 비교 (Performance Comparisons of GAN-Based Generative Models for New Product Development)

  • 이동훈;이세훈;강재모
    • 문화기술의 융합 / Vol. 8, No. 6 / pp.867-871 / 2022
  • Because design changes strongly affect the sales of fashion companies amid rapidly shifting trends, companies must be careful when selecting new product designs. With recent advances in artificial intelligence, the fashion market also makes extensive use of machine learning to raise consumer preference. We aim to contribute to more reliable new product development by quantifying an abstract concept such as preference. To this end, we generated novel images with three generative adversarial networks (GANs) and quantified and compared the abstract concept of preference using a pre-trained convolutional neural network (CNN). New images were generated with three methods: the deep convolutional GAN (DCGAN), the progressive growing GAN (PGGAN), and the dual discriminator GAN (D2GAN), and their similarity was compared and measured with a CNN trained on products with high sales. The measured degree of similarity was taken as a proxy for preference, and the experiments showed that D2GAN produced relatively higher similarity than DCGAN and PGGAN.
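The similarity measurement above, comparing a generated design's CNN features against features of best-selling products, is commonly done with cosine similarity. A hypothetical sketch (the feature vectors stand in for real CNN embeddings; this is not the authors' exact metric):

```python
import numpy as np

def preference_score(gen_feat, ref_feat):
    # Cosine similarity between a generated design's CNN features and
    # reference features from best-selling products; higher similarity
    # is taken here as a proxy for consumer preference.
    g = gen_feat / np.linalg.norm(gen_feat)
    r = ref_feat / np.linalg.norm(ref_feat)
    return float(g @ r)

ref = np.array([1.0, 0.0, 1.0])
print(preference_score(np.array([2.0, 0.0, 2.0]), ref))  # ~1.0 (same direction)
print(preference_score(np.array([0.0, 1.0, 0.0]), ref))  # 0.0 (orthogonal)
```

Ranking the three GANs' outputs by this score is then a direct way to compare them on the "preference" axis.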

Imbalanced sample fault diagnosis method for rotating machinery in nuclear power plants based on deep convolutional conditional generative adversarial network

  • Zhichao Wang;Hong Xia;Jiyu Zhang;Bo Yang;Wenzhe Yin
    • Nuclear Engineering and Technology / Vol. 55, No. 6 / pp.2096-2106 / 2023
  • Rotating machinery is widely applied in important equipment of nuclear power plants (NPPs), such as pumps and valves. Research on intelligent fault diagnosis of rotating machinery is crucial to ensure the safe operation of related equipment in NPPs. However, in practical applications, data-driven fault diagnosis faces the problem of small and imbalanced samples, resulting in low model training efficiency and poor generalization performance. Therefore, a deep convolutional conditional generative adversarial network (DCCGAN) is constructed to mitigate the impact of imbalanced samples on fault diagnosis. First, a conditional generative adversarial model is designed based on convolutional neural networks to effectively augment imbalanced samples. The original sample features can be effectively extracted by the model based on a conditional generative adversarial strategy and an appropriate number of filters. In addition, high-quality generated samples are ensured through visualization of the model training process and sample features. Then, a deep convolutional neural network (DCNN) is designed to extract features of the mixed samples and implement intelligent fault diagnosis. Finally, based on multi-fault experimental data from a motor and a bearing, the performance of the DCCGAN model for data augmentation and intelligent fault diagnosis is verified. The proposed method effectively alleviates the problem of imbalanced samples and shows its application value for intelligent fault diagnosis in actual NPPs.
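The conditioning that makes a conditional GAN useful for imbalanced classes is typically implemented by attaching a class label to the generator's noise input, so minority fault types can be synthesised on demand. A minimal sketch of that input construction (a generic cGAN pattern, not the paper's exact architecture):

```python
import numpy as np

def conditional_input(z, fault_label, num_classes):
    # cGAN-style conditioning: append a one-hot fault-class label to the
    # noise vector so the generator can synthesise samples of the
    # requested (minority) fault type.
    onehot = np.zeros(num_classes)
    onehot[fault_label] = 1.0
    return np.concatenate([z, onehot])

z = np.random.randn(100)
g_in = conditional_input(z, fault_label=3, num_classes=5)
print(g_in.shape)  # (105,)
```

The discriminator receives the same label, so both networks learn a class-conditional distribution rather than a single pooled one.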

Single Image Dehazing: An Analysis on Generative Adversarial Network

  • Amina Khatun;Mohammad Reduanul Haque;Rabeya Basri;Mohammad Shorif Uddin
    • International Journal of Computer Science & Network Security / Vol. 24, No. 2 / pp.136-142 / 2024
  • Haze is a very common phenomenon that degrades visibility. It causes problems in applications where high-quality images are required, such as traffic and security monitoring, so haze removal from images receives great attention for clear vision. Despite its huge impact and the significant advances achieved, the task remains a challenging one. Recently, different types of deep generative adversarial networks (GANs) have been applied to suppress noise and improve dehazing performance. But it is unclear how these algorithms would perform on hazy images acquired "in the wild" and how we could gauge progress in the field. This paper aims to bridge this gap. We present a comprehensive study and experimental evaluation of diverse GAN models for single image dehazing on benchmark datasets.

A Method for Generating Malware Countermeasure Samples Based on Pixel Attention Mechanism

  • Xiangyu Ma;Yuntao Zhao;Yongxin Feng;Yutao Hu
    • KSII Transactions on Internet and Information Systems (TIIS) / Vol. 18, No. 2 / pp.456-477 / 2024
  • With the rapid development of information technology, the Internet faces serious security problems. Studies have shown that malware has become a primary means of attacking the Internet, so adversarial samples have become a vital breakthrough point for studying malware. By studying adversarial samples, we can gain insights into the behavior and characteristics of malware, evaluate the performance of existing detectors in the face of deceptive samples, and help to discover vulnerabilities and improve detection methods. However, existing adversarial sample generation methods still fall short in evasion effectiveness and transferability. For instance, researchers have attempted to incorporate perturbation methods such as the Fast Gradient Sign Method (FGSM) and Projected Gradient Descent (PGD) into adversarial samples to obfuscate detectors, but these methods are only effective in specific environments and yield limited evasion effectiveness. To solve the above problems, this paper proposes a malware adversarial sample generation method (PixGAN) based on a pixel attention mechanism, which aims to improve the evasion effect and transferability of adversarial samples. The method transforms malware into grey-scale images and introduces a pixel attention mechanism into the Deep Convolutional Generative Adversarial Network (DCGAN) model to weight the critical pixels in the grey-scale map, which improves the modeling ability of the generator and discriminator and thus enhances the evasion effect and transferability of the adversarial samples. The escape rate (ASR) is used as an evaluation index of the quality of the adversarial samples. The experimental results show that the adversarial samples generated by PixGAN achieve escape rates of 97%, 94%, 35%, 39%, and 43% on Random Forest (RF), Support Vector Machine (SVM), Convolutional Neural Network (CNN), CNN with Recurrent Neural Network (CNN_RNN), and CNN with Long Short-Term Memory (CNN_LSTM) detectors, respectively.
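A pixel attention mechanism of the kind described above produces a per-pixel weight map in (0, 1) from the features themselves and rescales the feature map with it. A toy numpy sketch (a plain channel-wise projection stands in for the real 1x1 convolution; this is not PixGAN's actual layer):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def pixel_attention(feat, w):
    # Pixel attention: a 1x1 projection (here a matrix product over the
    # channel axis) yields a per-pixel weight in (0, 1) that rescales
    # the feature map, emphasising critical pixels.
    attn = sigmoid(feat @ w)      # (H, W, 1) attention map
    return feat * attn

feat = np.random.randn(8, 8, 16)   # toy grey-scale-image feature map
w = np.random.randn(16, 1) * 0.1   # toy 1x1-conv weights
out = pixel_attention(feat, w)
print(out.shape)  # (8, 8, 16)
```

Because the attention weights never exceed 1, the layer can only dampen or preserve features, steering the generator's and discriminator's capacity toward the pixels that matter for evasion.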

딥러닝 기반 손상된 흑백 얼굴 사진 컬러 복원 (Deep Learning based Color Restoration of Corrupted Black and White Facial Photos)

  • 신재우;김종현;이정;송창근;김선정
    • 한국컴퓨터그래픽스학회논문지 / Vol. 24, No. 2 / pp.1-9 / 2018
  • This paper proposes a method for restoring damaged black-and-white facial images in color. In previous work, colorizing a damaged black-and-white photo, such as an old ID photo, often resulted in incorrect coloring around the damaged regions. To solve this problem, this paper proposes first restoring the damaged regions of the input photo and then performing colorization based on the result. The proposed method consists of two stages: restoration based on a BEGAN (Boundary Equilibrium Generative Adversarial Networks) model and colorization based on a CNN (Convolutional Neural Network). Unlike previous methods that used a DCGAN (Deep Convolutional Generative Adversarial Networks) model for image restoration, the proposed method uses the BEGAN model, which can restore sharper, higher-resolution images, and then colorizes the restored black-and-white image. Finally, experiments on various types of facial images and masks confirmed that the proposed method produces realistic color restoration results in more cases than previous work.

HiGANCNN: A Hybrid Generative Adversarial Network and Convolutional Neural Network for Glaucoma Detection

  • Alsulami, Fairouz;Alseleahbi, Hind;Alsaedi, Rawan;Almaghdawi, Rasha;Alafif, Tarik;Ikram, Mohammad;Zong, Weiwei;Alzahrani, Yahya;Bawazeer, Ahmed
    • International Journal of Computer Science & Network Security / Vol. 22, No. 9 / pp.23-30 / 2022
  • Glaucoma is a chronic neuropathy that affects the optic nerve and can lead to blindness. The detection and prediction of glaucoma have become possible using deep neural networks. However, detection performance relies on the availability of a large amount of data. Therefore, we propose different frameworks, including a hybrid of a generative adversarial network and a convolutional neural network, to automate and improve glaucoma detection. The proposed frameworks are evaluated using five public glaucoma datasets. The framework that uses a Deep Convolutional Generative Adversarial Network (DCGAN) and a DenseNet pre-trained model achieves 99.6%, 99.08%, 99.4%, 98.69%, and 92.95% classification accuracy on the RIMONE, Drishti-GS, ACRIMA, ORIGA-light, and HRF datasets, respectively. Based on the experimental results and evaluation, the proposed framework closely competes with state-of-the-art methods on the five public glaucoma datasets without requiring any manual preprocessing step.

심층 학습을 활용한 가상 치아 이미지 생성 연구 -학습 횟수를 중심으로 (A Study on Virtual Tooth Image Generation Using Deep Learning - Based on the number of learning)

  • 배은정;정준호;손윤식;임중연
    • 대한치과기공학회지 / Vol. 42, No. 1 / pp.1-8 / 2020
  • Purpose: Among the virtual teeth generated by a Deep Convolutional Generative Adversarial Network (DCGAN), the optimal data were analyzed with respect to the number of training epochs. Methods: We extracted 50 mandibular first molar occlusal surfaces and trained a DCGAN for 4,000 epochs. The training output was saved every 50 epochs and evaluated on a 5-point Likert scale according to five classification criteria. Results were analyzed by one-way ANOVA and Tukey HSD post hoc analysis (α = 0.05). Results: The score was highest, at 83.90±6.32, for group 3 (2,050-3,000 epochs), with statistically significant differences from group 1 (50-1,000) and group 2 (1,050-2,000). Conclusion: Since the quality of the generated virtual teeth differs with the number of training epochs, it is necessary to analyze the training-epoch range in various ways.
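The one-way ANOVA used above compares the mean Likert scores of the epoch groups via the F statistic, the ratio of between-group to within-group mean squares. A self-contained numpy sketch with made-up toy data (not the study's scores):

```python
import numpy as np

def one_way_anova_F(*groups):
    # One-way ANOVA F statistic: mean square between groups divided by
    # mean square within groups.
    groups = [np.asarray(g, dtype=float) for g in groups]
    data = np.concatenate(groups)
    grand_mean = data.mean()
    k, n = len(groups), len(data)
    ss_between = sum(len(g) * (g.mean() - grand_mean) ** 2 for g in groups)
    ss_within = sum(((g - g.mean()) ** 2).sum() for g in groups)
    return (ss_between / (k - 1)) / (ss_within / (n - k))

# Toy example: two groups with clearly different means.
print(one_way_anova_F([1, 2, 3], [4, 5, 6]))  # 13.5
```

A large F (relative to the F distribution with k-1 and n-k degrees of freedom) indicates that at least one group mean differs; Tukey's HSD then identifies which pairs differ.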