• Title/Summary/Keyword: Adversarial Networks


Discriminative Manifold Learning Network using Adversarial Examples for Image Classification

  • Zhang, Yuan; Shi, Biming
    • Journal of Electrical Engineering and Technology / v.13 no.5 / pp.2099-2106 / 2018
  • This study presents a novel approach to learning discriminative feature vectors based on manifold learning with a nonlinear dimension reduction (DR) technique to improve the loss function, combined with adversarial examples to regularize the objective function for image classification. Traditional convolutional neural networks (CNNs) with various new regularization approaches have been used successfully for image classification and achieve good results, but at a high cost in computation time and memory. Distinct from traditional CNNs, the proposed method discriminates feature vectors for objects without empirically tuned parameters: the discriminative features are intended to retain, in the lower-dimensional space, the relationships of the corresponding high-dimensional manifold after the image feature vectors are projected from high to low dimension, and the constraints that preserve local manifold structure are optimized so that mapped features from the same class are drawn together while different classes are pushed apart. Using adversarial examples, the improved loss function with an additional regularization term is intended to boost the robustness and generalization of the neural network. Experimental results indicate that the approach based on discriminative manifold-learning features is not only valid but also more efficient for image classification tasks. Furthermore, the proposed approach achieves competitive classification performance on three benchmark datasets: MNIST, CIFAR-10, and SVHN.
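
As an illustration of the general idea of adversarial-example regularization described above, the minimal PyTorch sketch below adds an FGSM-style perturbation term to a standard cross-entropy loss. The model, epsilon, and lambda_adv values are assumptions for demonstration only, not the authors' architecture or formulation.

```python
# Illustrative sketch only: FGSM-style adversarial-example regularization of a
# classifier loss, in the spirit of the abstract (not the authors' exact method).
import torch
import torch.nn as nn
import torch.nn.functional as F

class SmallCNN(nn.Module):
    def __init__(self, num_classes=10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.classifier = nn.Linear(64 * 7 * 7, num_classes)

    def forward(self, x):
        return self.classifier(self.features(x).flatten(1))

def adversarially_regularized_loss(model, x, y, epsilon=0.1, lambda_adv=0.5):
    """Cross-entropy on clean inputs plus a regularization term on FGSM perturbations."""
    x = x.clone().requires_grad_(True)
    clean_loss = F.cross_entropy(model(x), y)
    grad, = torch.autograd.grad(clean_loss, x, retain_graph=True)
    x_adv = (x + epsilon * grad.sign()).detach()      # adversarial example
    adv_loss = F.cross_entropy(model(x_adv), y)       # regularization term
    return clean_loss + lambda_adv * adv_loss

# Toy usage on random MNIST-shaped data
model = SmallCNN()
x, y = torch.randn(8, 1, 28, 28), torch.randint(0, 10, (8,))
loss = adversarially_regularized_loss(model, x, y)
loss.backward()
```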

FD-StackGAN: Face De-occlusion Using Stacked Generative Adversarial Networks

  • Jabbar, Abdul; Li, Xi; Iqbal, M. Munawwar; Malik, Arif Jamal
    • KSII Transactions on Internet and Information Systems (TIIS) / v.15 no.7 / pp.2547-2567 / 2021
  • It has been widely acknowledged that occlusion adversely affects the performance of many face recognition algorithms, so it is crucial to solve the problem of face image occlusion in face recognition. To this end, this paper aims to automatically de-occlude the major or most discriminative regions of the human face to improve face recognition performance. To achieve this, we decompose the generative process into two key stages and employ a separate generative adversarial network (GAN)-based network in each stage. The first stage generates an initial coarse face image without an occlusion mask. The second stage refines the result of the first stage by forcing it closer to real face images, i.e., the ground truth. To increase performance and minimize artifacts in the generated result, a new refine loss (comprising reconstruction loss, perceptual loss, and adversarial loss) is used to measure all differences between the generated de-occluded face image and the ground truth. Furthermore, we build a dataset of occluded face images and the corresponding occlusion-free face images. We trained our model on this new dataset and later tested it on real-world face images. The experimental results (qualitative and quantitative) and the comparative study confirm the robustness and effectiveness of the proposed work in removing challenging occlusion masks with various structures, sizes, shapes, types, and positions.
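
As a rough illustration of the kind of combined refine loss the abstract describes (reconstruction + perceptual + adversarial), here is a hedged PyTorch sketch. The feature extractor, discriminator, and loss weights are placeholder assumptions, not the FD-StackGAN networks.

```python
# Illustrative sketch of a combined "refine" loss; all networks below are toy stand-ins.
import torch
import torch.nn as nn
import torch.nn.functional as F

def refine_loss(fake, real, discriminator, feat_extractor,
                w_rec=1.0, w_perc=0.1, w_adv=0.01):
    rec = F.l1_loss(fake, real)                                   # pixel-level reconstruction
    perc = F.l1_loss(feat_extractor(fake), feat_extractor(real))  # feature-space (perceptual)
    d_out = discriminator(fake)
    adv = F.binary_cross_entropy_with_logits(d_out, torch.ones_like(d_out))  # fool the critic
    return w_rec * rec + w_perc * perc + w_adv * adv

# Toy stand-ins: an untrained conv block as "feature extractor" and a tiny patch discriminator
feat_extractor = nn.Sequential(nn.Conv2d(3, 16, 3, padding=1), nn.ReLU())
discriminator = nn.Sequential(nn.Conv2d(3, 8, 4, stride=2), nn.ReLU(),
                              nn.Conv2d(8, 1, 4, stride=2))
fake, real = torch.rand(2, 3, 64, 64), torch.rand(2, 3, 64, 64)
print(refine_loss(fake, real, discriminator, feat_extractor).item())
```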

HiGANCNN: A Hybrid Generative Adversarial Network and Convolutional Neural Network for Glaucoma Detection

  • Alsulami, Fairouz; Alseleahbi, Hind; Alsaedi, Rawan; Almaghdawi, Rasha; Alafif, Tarik; Ikram, Mohammad; Zong, Weiwei; Alzahrani, Yahya; Bawazeer, Ahmed
    • International Journal of Computer Science & Network Security / v.22 no.9 / pp.23-30 / 2022
  • Glaucoma is a chronic neuropathy that affects the optic nerve and can lead to blindness. Detection and prediction of glaucoma have become possible using deep neural networks; however, detection performance relies on the availability of a large amount of data. Therefore, we propose several frameworks, including a hybrid of a generative adversarial network and a convolutional neural network, to automate and improve the performance of glaucoma detection. The proposed frameworks are evaluated using five public glaucoma datasets. The framework that uses a Deep Convolutional Generative Adversarial Network (DCGAN) and a pre-trained DenseNet model achieves classification accuracies of 99.6%, 99.08%, 99.4%, 98.69%, and 92.95% on the RIMONE, Drishti-GS, ACRIMA, ORIGA-light, and HRF datasets, respectively. Based on the experimental results and evaluation, the proposed framework closely competes with state-of-the-art methods on the five public glaucoma datasets without requiring any manual preprocessing step.
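
The framework pairs a DCGAN with a pre-trained classifier; the sketch below shows only a minimal DCGAN-style generator that could supply synthetic images for such a pipeline. The channel sizes and 64x64 resolution are assumptions, not the paper's configuration.

```python
# Minimal sketch of a DCGAN-style generator that could synthesize extra fundus-like
# images for classifier training; hyperparameters are illustrative assumptions.
import torch
import torch.nn as nn

class DCGANGenerator(nn.Module):
    def __init__(self, z_dim=100, channels=3, base=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.ConvTranspose2d(z_dim, base * 8, 4, 1, 0, bias=False),     # 4x4
            nn.BatchNorm2d(base * 8), nn.ReLU(True),
            nn.ConvTranspose2d(base * 8, base * 4, 4, 2, 1, bias=False),  # 8x8
            nn.BatchNorm2d(base * 4), nn.ReLU(True),
            nn.ConvTranspose2d(base * 4, base * 2, 4, 2, 1, bias=False),  # 16x16
            nn.BatchNorm2d(base * 2), nn.ReLU(True),
            nn.ConvTranspose2d(base * 2, base, 4, 2, 1, bias=False),      # 32x32
            nn.BatchNorm2d(base), nn.ReLU(True),
            nn.ConvTranspose2d(base, channels, 4, 2, 1, bias=False),      # 64x64
            nn.Tanh(),
        )

    def forward(self, z):
        return self.net(z.view(z.size(0), -1, 1, 1))

# Synthetic images from a (trained) generator would be added to the real training set
# before fine-tuning a pre-trained classifier such as DenseNet.
g = DCGANGenerator()
fake_images = g(torch.randn(4, 100))   # (4, 3, 64, 64), values in [-1, 1]
print(fake_images.shape)
```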

Morpho-GAN: Unsupervised Learning of Data with High Morphology using Generative Adversarial Networks (Morpho-GAN: Generative Adversarial Networks를 사용하여 높은 형태론 데이터에 대한 비지도학습)

  • Abduazimov, Azamat; Jo, GeunSik
    • Proceedings of the Korean Society of Computer Information Conference / 2020.01a / pp.11-14 / 2020
  • The importance of data in the development of deep learning is very high. Data with highly morphological features are usually found in domains where careful lens calibration by a human is needed to capture them, and synthesizing such high-morphology data can be a great asset for improving the classification accuracy of systems in those fields. Unsupervised learning can be employed for this task. Generating photo-realistic objects of interest has been studied extensively since the Generative Adversarial Network (GAN) was introduced. In this paper, we propose Morpho-GAN, a method that unifies several GAN techniques to generate high-quality data of high morphology. Our method introduces a new, suitable training objective in the discriminator of the GAN to synthesize images that follow the distribution of the original dataset. The results demonstrate that the proposed method can generate plausible data as well as other modern baseline models while being less complex to train.
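
The abstract does not specify the modified discriminator objective, so the sketch below only shows the standard GAN training step (non-saturating loss) that such a modification would build on; all network sizes and data are placeholders.

```python
# A minimal sketch of one standard GAN training step (discriminator and generator
# updates with the non-saturating loss); not Morpho-GAN's modified objective.
import torch
import torch.nn as nn
import torch.nn.functional as F

z_dim = 64
G = nn.Sequential(nn.Linear(z_dim, 128), nn.ReLU(), nn.Linear(128, 784), nn.Tanh())
D = nn.Sequential(nn.Linear(784, 128), nn.LeakyReLU(0.2), nn.Linear(128, 1))
opt_g = torch.optim.Adam(G.parameters(), lr=2e-4, betas=(0.5, 0.999))
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4, betas=(0.5, 0.999))

real = torch.rand(16, 784) * 2 - 1          # stand-in batch of flattened images
z = torch.randn(16, z_dim)

# Discriminator step: push real toward "real" (1) and generated samples toward "fake" (0)
fake = G(z).detach()
d_loss = (F.binary_cross_entropy_with_logits(D(real), torch.ones(16, 1)) +
          F.binary_cross_entropy_with_logits(D(fake), torch.zeros(16, 1)))
opt_d.zero_grad(); d_loss.backward(); opt_d.step()

# Generator step: try to make the discriminator label fakes as real
g_loss = F.binary_cross_entropy_with_logits(D(G(z)), torch.ones(16, 1))
opt_g.zero_grad(); g_loss.backward(); opt_g.step()
```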


Automatic Generation of Fashion Image Dataset by Using Progressive Growing GAN (PG-GAN을 이용한 패션이미지 데이터 자동 생성)

  • Kim, Yanghee; Lee, Chanhee; Whang, Taesun; Kim, Gyeongmin; Lim, Heuiseok
    • Journal of Internet of Things and Convergence / v.4 no.2 / pp.1-6 / 2018
  • Techniques for generating new sample data from higher-dimensional data such as images have been used in various ways for speech synthesis, image conversion, and image restoration. This paper adopts Progressive Growing of Generative Adversarial Networks (PG-GAN) as the implementation model to generate high-resolution images and to enhance the variation of the generated images, and applies it to fashion image data. PG-GAN allows the generator and discriminator to learn progressively at the same time, continuously adding new layers so that low-resolution images grow into high-resolution ones. We also apply a mini-batch discrimination method to increase the diversity of the generated data and adopt the Sliced Wasserstein Distance (SWD) as an evaluation metric for the GAN model in place of the existing MS-SSIM.
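
As a reference for the SWD evaluation mentioned above, the NumPy sketch below estimates a Sliced Wasserstein Distance between two sets of descriptors by sorting random 1-D projections. The descriptor dimensionality and number of projections are illustrative assumptions, not the paper's settings.

```python
# Illustrative Sliced Wasserstein Distance (SWD) estimate between two descriptor sets.
import numpy as np

def sliced_wasserstein_distance(a, b, n_projections=512, seed=0):
    """a, b: (n_samples, dim) arrays of descriptors with equal n_samples."""
    rng = np.random.default_rng(seed)
    dim = a.shape[1]
    # Random unit directions onto which both sets are projected
    directions = rng.normal(size=(dim, n_projections))
    directions /= np.linalg.norm(directions, axis=0, keepdims=True)
    proj_a = np.sort(a @ directions, axis=0)   # 1-D Wasserstein distance via sorting
    proj_b = np.sort(b @ directions, axis=0)
    return np.mean(np.abs(proj_a - proj_b))

# Toy usage with random "patch descriptors" standing in for real/generated image patches
real = np.random.randn(1000, 49)
fake = np.random.randn(1000, 49) + 0.5
print(sliced_wasserstein_distance(real, fake))
```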

A Study on the Synthetic ECG Generation for User Recognition (사용자 인식을 위한 가상 심전도 신호 생성 기술에 관한 연구)

  • Kim, Min Gu; Kim, Jin Su; Pan, Sung Bum
    • Smart Media Journal / v.8 no.4 / pp.33-37 / 2019
  • Because ECG signals are time-series data acquired over time, it is important that the comparison data be the same size as the enrolled data each time. This paper proposes an auxiliary-classifier-based GAN (Generative Adversarial Network) model to generate synthetic ECG signals, which may address this data size mismatch. Cosine similarity and cross-correlation are used to examine the similarity of the synthetic ECG signals. The analysis shows an average cosine similarity of 0.991 and an average cross-correlation-based Euclidean distance similarity of 0.25. These results indicate that the data size difference issue can be resolved, since synthetic ECG signals similar to real ECG signals can be generated even when the enrolled data and the comparison data differ in size.
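
The two similarity measures reported above can be computed as in the small NumPy sketch below; the signals used here are synthetic sine-based stand-ins rather than real or GAN-generated ECG data.

```python
# Cosine similarity and normalized cross-correlation between two 1-D signals.
import numpy as np

def cosine_similarity(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def max_normalized_cross_correlation(a, b):
    a = (a - a.mean()) / (a.std() * len(a))
    b = (b - b.mean()) / b.std()
    return float(np.max(np.correlate(a, b, mode="full")))

t = np.linspace(0, 1, 500)
real_ecg = np.sin(2 * np.pi * 5 * t) + 0.05 * np.random.randn(500)    # stand-in "real" signal
synth_ecg = np.sin(2 * np.pi * 5 * t + 0.1) + 0.05 * np.random.randn(500)  # stand-in "synthetic"

print("cosine similarity:", cosine_similarity(real_ecg, synth_ecg))
print("max cross-correlation:", max_normalized_cross_correlation(real_ecg, synth_ecg))
```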

Improving Fidelity of Synthesized Voices Generated by Using GANs (GAN으로 합성한 음성의 충실도 향상)

  • Back, Moon-Ki; Yoon, Seung-Won; Lee, Sang-Baek; Lee, Kyu-Chul
    • KIPS Transactions on Software and Data Engineering / v.10 no.1 / pp.9-18 / 2021
  • Although Generative Adversarial Networks (GANs) have gained great popularity in computer vision and related fields, generating audio signals on their own has received far less attention. Unlike an image, an audio signal is a sampled signal consisting of discrete samples, so it is not easy to learn such signals with the CNN architectures widely used in image generation tasks. To overcome this difficulty, GAN researchers proposed applying time-frequency representations of audio to existing image-generating GANs. Following this strategy, we propose an improved method for increasing the fidelity of synthesized audio signals generated with GANs. Our method is demonstrated on a public speech dataset and evaluated with the Fréchet Inception Distance (FID). Our method achieves an FID of 10.504, compared with 11.973 for the existing state-of-the-art method (a lower FID indicates better fidelity).
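
The strategy referenced above, feeding a time-frequency representation of audio to an image-style GAN, boils down to a transform like the one sketched below. The waveform is a synthetic tone and the STFT parameters are assumptions, not the paper's preprocessing.

```python
# Turning a raw waveform into a log-magnitude spectrogram "image" suitable as GAN input.
import torch

sample_rate = 16000
t = torch.arange(sample_rate) / sample_rate
waveform = torch.sin(2 * torch.pi * 440 * t)           # 1 s of a 440 Hz tone

n_fft, hop = 512, 128
spec = torch.stft(waveform, n_fft=n_fft, hop_length=hop,
                  window=torch.hann_window(n_fft), return_complex=True)
log_mag = torch.log1p(spec.abs())                      # (n_fft/2 + 1, frames) image-like tensor
print(log_mag.shape)                                   # e.g. torch.Size([257, 126])
```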

Style-Generative Adversarial Networks for Data Augmentation of Human Images at Homecare Environments (조호환경 내 사람 이미지 데이터 증강을 위한 Style-Generative Adversarial Networks 기법)

  • Park, Changjoon; Kim, Beomjun; Kim, Inki; Gwak, Jeonghwan
    • Annual Conference of KIPS / 2022.11a / pp.565-567 / 2022
  • A patient in a care environment such as a hospital room, residence, or nursing home must be continuously tracked and observed by medical staff according to their condition, so that any bodily abnormality can be detected and addressed quickly. Having medical staff check patients directly requires repetitive labor, and because patients must be monitored in real time, staff must remain on site, which leads to shortages and waste of medical personnel. To address this problem, deep learning models that can monitor a patient's condition in the homecare environment in real time on behalf of medical staff are being studied. Deep learning models become more robust as the amount of data increases, and because they are affected by various conditions such as the dataset's background and the distribution of object features, a large amount of preprocessed data from the required domain must be collected. A dataset of patients in homecare environments is therefore needed, but publicly available datasets are very small; although the amount of data can be increased with flipping, rotation, and similar techniques, such naively applied augmentation produces data with the same feature distribution and causes the deep learning model to overfit. In addition, image datasets from homecare environments may contain personal information such as exposed faces, which must be de-identified for privacy protection. Therefore, this paper proposes an augmentation technique effective for building homecare-environment datasets by applying Style-Generative Adversarial Networks to data collected in homecare environments.
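
As a conceptual sketch of GAN-based augmentation of this kind, the snippet below mixes samples drawn from a (trained) generator with a real dataset. The tiny linear generator is an untrained stand-in for a Style-GAN model and is used only to show the data flow.

```python
# Conceptual GAN-based data augmentation: synthetic samples are added to the real set.
import torch
import torch.nn as nn
from torch.utils.data import TensorDataset, ConcatDataset, DataLoader

generator = nn.Sequential(nn.Linear(128, 3 * 64 * 64), nn.Tanh())  # stand-in for a trained GAN

# Real (placeholder) images and labels
real_images = torch.rand(200, 3, 64, 64)
real_labels = torch.zeros(200, dtype=torch.long)

# Synthetic images sampled from the generator, assigned the same label
with torch.no_grad():
    z = torch.randn(100, 128)
    fake_images = generator(z).view(100, 3, 64, 64)
fake_labels = torch.zeros(100, dtype=torch.long)

augmented = ConcatDataset([TensorDataset(real_images, real_labels),
                           TensorDataset(fake_images, fake_labels)])
loader = DataLoader(augmented, batch_size=32, shuffle=True)
print(len(augmented))   # 300 training samples after augmentation
```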

Segmentation of Mammography Breast Images using Automatic Segmen Adversarial Network with Unet Neural Networks

  • Suriya Priyadharsini.M; J.G.R Sathiaseelan
    • International Journal of Computer Science & Network Security / v.23 no.12 / pp.151-160 / 2023
  • Breast cancer is the most dangerous and deadly form of cancer, and it is the second most common cancer among Indian women in rural areas. Early detection can significantly improve treatment effectiveness: recognizing symptoms and signs early is the most important way to treat breast cancer effectively, as it increases the odds of receiving earlier, more specialized care and therefore has the potential to greatly improve survival odds by delaying or entirely eliminating the cancer. Mammography is a high-resolution radiography technique that is an important factor in preventing and diagnosing cancer at an early stage. Automatic segmentation of the breast region in mammography images can reduce the area to be searched for cancer while saving time and effort compared with manual segmentation. Autoencoder-like convolutional and deconvolutional neural networks (CN-DCNN) were used in previous studies to automatically segment the breast area in mammography images. In this paper, we present Automatic SegmenAN, a novel end-to-end adversarial neural network for medical image segmentation. Because image segmentation requires extensive pixel-level labelling, the single scalar real/fake output of a standard GAN discriminator may be inefficient at providing stable and appropriate gradient feedback to the networks. Rather than relying only on a fully convolutional neural network as the segmentor, we propose a new adversarial critic network with a multi-scale L1 loss function that forces the critic and segmentor to learn both global and local features capturing long- and short-range spatial relations among pixels. We demonstrate that Automatic SegmenAN is more reliable for segmentation tasks than the state-of-the-art U-Net segmentation technique.
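
A hedged sketch of a multi-scale L1 critic loss in the spirit described above is shown below: the critic compares its features for the image masked by the predicted segmentation against those for the image masked by the ground truth. The critic layers and input sizes are assumptions, not the paper's network.

```python
# Multi-scale L1 loss over critic features of masked inputs (illustrative only).
import torch
import torch.nn as nn
import torch.nn.functional as F

class Critic(nn.Module):
    def __init__(self):
        super().__init__()
        self.blocks = nn.ModuleList([
            nn.Sequential(nn.Conv2d(1, 16, 4, stride=2, padding=1), nn.LeakyReLU(0.2)),
            nn.Sequential(nn.Conv2d(16, 32, 4, stride=2, padding=1), nn.LeakyReLU(0.2)),
            nn.Sequential(nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.LeakyReLU(0.2)),
        ])

    def forward(self, x):
        feats = []
        for block in self.blocks:
            x = block(x)
            feats.append(x)            # keep features at every scale
        return feats

def multiscale_l1_loss(critic, image, pred_mask, true_mask):
    feats_pred = critic(image * pred_mask)   # image masked by predicted segmentation
    feats_true = critic(image * true_mask)   # image masked by ground truth
    return sum(F.l1_loss(fp, ft) for fp, ft in zip(feats_pred, feats_true)) / len(feats_pred)

critic = Critic()
image = torch.rand(2, 1, 128, 128)           # grayscale mammogram-like input
pred_mask = torch.rand(2, 1, 128, 128)       # soft prediction from a segmentor
true_mask = (torch.rand(2, 1, 128, 128) > 0.5).float()
print(multiscale_l1_loss(critic, image, pred_mask, true_mask).item())
```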

Non-pneumatic Tire Design System based on Generative Adversarial Networks (적대적 생성 신경망 기반 비공기압 타이어 디자인 시스템)

  • JuYong Seong; Hyunjun Lee; Sungchul Lee
    • Journal of Platform Technology / v.11 no.6 / pp.34-46 / 2023
  • The design of non-pneumatic tires, in which the space between the wheel and the tread is filled with elastomeric compounds or polygonal spokes, has become an important research topic in the automotive and aerospace industries. In this study, a system for designing non-pneumatic tires was built using a generative adversarial network. We specifically examined factors that could impact the design, including the type of non-pneumatic tire, its intended usage environment, manufacturing techniques, distinctions from pneumatic tires, and how spoke design affects load distribution. Using OpenCV, various shapes and spoke configurations were generated as images, and a GAN model was trained on these images to generate shapes and spokes for non-pneumatic tire designs. The designed non-pneumatic tires were labeled as usable or not, and a Vision Transformer image classification model was trained on these labels for classification. Evaluation of the classification model shows convergence to a near-zero loss and a 99% accuracy rate, confirming the generation of usable non-pneumatic tire designs.
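
As an illustration of the OpenCV-based shape generation step mentioned above, the sketch below renders simple tire profiles with varying spoke counts; the geometry and parameters are illustrative assumptions, not the paper's generation rules.

```python
# Render candidate non-pneumatic tire profiles with varying spoke counts using OpenCV.
import cv2
import numpy as np

def draw_tire(spoke_count, size=256):
    img = np.full((size, size), 255, dtype=np.uint8)   # white background, grayscale
    center = (size // 2, size // 2)
    outer_r, hub_r = size // 2 - 10, size // 8
    cv2.circle(img, center, outer_r, 0, thickness=6)   # tread ring
    cv2.circle(img, center, hub_r, 0, thickness=4)     # hub
    for i in range(spoke_count):                       # straight radial spokes
        angle = 2 * np.pi * i / spoke_count
        p1 = (int(center[0] + hub_r * np.cos(angle)),
              int(center[1] + hub_r * np.sin(angle)))
        p2 = (int(center[0] + outer_r * np.cos(angle)),
              int(center[1] + outer_r * np.sin(angle)))
        cv2.line(img, p1, p2, 0, thickness=3)
    return img

# Render a few variants that could form part of a synthetic training set for the GAN
for n in (6, 8, 12):
    cv2.imwrite(f"tire_{n}_spokes.png", draw_tire(n))
```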
