• Title/Summary/Keyword: Deep Image Prior

Search Results: 31

BM3D and Deep Image Prior based Denoising for the Defense against Adversarial Attacks on Malware Detection Networks

  • Sandra, Kumi;Lee, Suk-Ho
    • International journal of advanced smart convergence
    • /
    • v.10 no.3
    • /
    • pp.163-171
    • /
    • 2021
  • Recently, machine-learning-based visualization approaches have been proposed to combat the problem of malware detection. Unfortunately, these techniques are vulnerable to adversarial examples: perturbations that can deceive a deep-learning-based malware detection network so that the malware becomes unrecognizable. To address this shortcoming, we present a denoising technique based on the Block-matching and 3D filtering (BM3D) algorithm and the deep image prior to defend visualization-based malware detection systems against adversarial examples. The BM3D-based denoising step eliminates most of the adversarial noise, and the deep-image-prior-based denoising step then removes the remaining subtle noise. Experimental results on the MS BIG malware dataset and benign samples show that the proposed denoising-based defense recovers, to some extent, the performance of a CNN-based malware detection model under adversarial attack.
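The two-stage defense described above can be sketched as a pipeline in which a coarse denoiser runs first and a second denoiser cleans up the residual. In this minimal sketch, a simple box filter stands in for both BM3D and the deep image prior network, whose real implementations are far more involved; the function names are illustrative, not from the paper.

```python
import numpy as np

def box_blur(img, k=3):
    """Simple k x k box filter, standing in for a real denoiser (BM3D / DIP)."""
    pad = k // 2
    padded = np.pad(img, pad, mode="edge")
    out = np.zeros_like(img, dtype=float)
    for dy in range(k):
        for dx in range(k):
            out += padded[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out / (k * k)

def two_stage_denoise(adv_img, coarse=box_blur, fine=box_blur):
    """Stage 1 (BM3D in the paper) removes most adversarial noise;
    stage 2 (deep image prior in the paper) removes the remaining residual."""
    return fine(coarse(adv_img))
```

The denoised image, rather than the attacked one, is then fed to the malware-detection CNN.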

Joint Demosaicing and Super-resolution of Color Filter Array Image based on Deep Image Prior Network

  • Kurniawan, Edwin;Lee, Suk-Ho
    • International journal of advanced smart convergence
    • /
    • v.11 no.2
    • /
    • pp.13-21
    • /
    • 2022
  • In this paper, we propose a learning-based joint demosaicing and super-resolution framework which uses only the mosaiced color filter array (CFA) image as the input. As the proposed method works only on the mosaiced CFA image itself, there is no need for a large dataset. Based on our framework, we propose two different structures, where the first structure uses one deep image prior network, while the second uses two. Experimental results show that even though we use only the CFA image as the training image, the proposed method can achieve better visual quality than other demosaicing methods combined with bilinear interpolation, and therefore opens up a new research area for joint demosaicing and super-resolution on raw images.
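The framework's only input is the mosaiced CFA image. As a concrete illustration of what that input looks like, the sketch below samples an RGB image into a single-channel Bayer mosaic; the RGGB layout and function name are assumptions for illustration, since the abstract does not specify the CFA pattern.

```python
import numpy as np

def bayer_mosaic(rgb):
    """Sample an (H, W, 3) RGB image into a single-channel RGGB Bayer CFA:
    R at even-row/even-col sites, G at the two mixed-parity sites,
    B at odd-row/odd-col sites."""
    h, w, _ = rgb.shape
    cfa = np.zeros((h, w), dtype=rgb.dtype)
    cfa[0::2, 0::2] = rgb[0::2, 0::2, 0]  # R
    cfa[0::2, 1::2] = rgb[0::2, 1::2, 1]  # G
    cfa[1::2, 0::2] = rgb[1::2, 0::2, 1]  # G
    cfa[1::2, 1::2] = rgb[1::2, 1::2, 2]  # B
    return cfa
```

Demosaicing is the inverse problem: recovering the full three-channel image from this single-channel sampling.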

Face inpainting via Learnable Structure Knowledge of Fusion Network

  • Yang, You;Liu, Sixun;Xing, Bin;Li, Kesen
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.16 no.3
    • /
    • pp.877-893
    • /
    • 2022
  • With the development of deep learning, face inpainting has been significantly enhanced in the past few years. Although image inpainting frameworks integrated with generative adversarial networks or attention mechanisms have enhanced the semantic understanding among facial components, issues in reconstructing corrupted regions are still worth exploring, such as blurred edge structure, excessive smoothness, unreasonable semantic understanding, and visual artifacts. To address these issues, we propose a Learnable Structure Knowledge of Fusion Network (LSK-FNet), which learns prior knowledge through an edge generation network for image inpainting. The architecture involves two steps. First, structure information obtained by the edge generation network is used as prior knowledge for the face inpainting network. Second, both the generated prior knowledge and the incomplete image are fed into the face inpainting network to obtain the fused information. To improve the accuracy of inpainting, both gated convolution and region normalization are applied in our proposed model. We evaluate LSK-FNet qualitatively and quantitatively on the CelebA-HQ dataset. The experimental results demonstrate that the edge structure and details of facial images can be improved by using LSK-FNet. Our model surpasses the compared models on the L1, PSNR, and SSIM metrics. When the masked region is less than 20%, the L1 loss is reduced by more than 4.3%.

Unsupervised Learning with Natural Low-light Image Enhancement (자연스러운 저조도 영상 개선을 위한 비지도 학습)

  • Lee, Hunsang;Sohn, Kwanghoon;Min, Dongbo
    • Journal of Korea Multimedia Society
    • /
    • v.23 no.2
    • /
    • pp.135-145
    • /
    • 2020
  • Recently, deep-learning-based methods for low-light image enhancement have achieved great success through supervised learning. However, they still suffer from a lack of sufficient training data due to the difficulty of obtaining a large number of low-/normal-light image pairs in real environments. In this paper, we propose an unsupervised learning approach for single low-light image enhancement using the bright channel prior (BCP), which imposes the constraint that the brightest pixel in a small patch is likely to be close to 1. With this prior, a pseudo ground truth is first generated to establish an unsupervised loss function. The proposed enhancement network is then trained using this unsupervised loss function. To the best of our knowledge, this is the first attempt to perform low-light image enhancement through unsupervised learning. In addition, we introduce a self-attention map to preserve image details and naturalness in the enhanced result. We validate the proposed method on various public datasets, demonstrating that it achieves competitive performance compared with state-of-the-art methods.
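The bright channel prior itself is simple to state: for each pixel, take the maximum intensity over all color channels within a small surrounding patch, and assume that for a well-exposed image this value is close to 1. A minimal numpy sketch follows; the function names are ours, and the paper's actual pseudo-ground-truth generation and loss are more elaborate.

```python
import numpy as np

def bright_channel(img, patch=3):
    """Bright channel of an (H, W, C) image: per-pixel maximum over all
    channels within a patch x patch neighborhood."""
    h, w, _ = img.shape
    chan_max = img.max(axis=2)            # max over color channels
    pad = patch // 2
    padded = np.pad(chan_max, pad, mode="edge")
    out = np.empty((h, w))
    for y in range(h):
        for x in range(w):
            out[y, x] = padded[y:y + patch, x:x + patch].max()
    return out

def pseudo_ground_truth(low, eps=1e-3):
    """Illustrative pseudo ground truth: scale the image so its bright
    channel maps toward 1 (a use of the prior, not the paper's exact loss)."""
    bcp = bright_channel(low)
    return np.clip(low / np.maximum(bcp, eps)[..., None], 0.0, 1.0)
```

The pseudo ground truth produced this way can then supervise the enhancement network without any paired normal-light images.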

Use of deep learning in nano image processing through the CNN model

  • Xing, Lumin;Liu, Wenjian;Liu, Xiaoliang;Li, Xin;Wang, Han
    • Advances in nano research
    • /
    • v.12 no.2
    • /
    • pp.185-195
    • /
    • 2022
  • Deep learning is a field of artificial intelligence (AI) utilized for computer-aided diagnosis (CAD) and image processing in scientific research. Reading image slices involves many repetitive mechanical tasks, is time-consuming, is constrained by geographical limits, and is strongly subjective, all of which raise the rate of misdiagnosis. Since lung cancer has the highest mortality rate, biopsy is needed to determine its class for further treatment. Deep learning has recently provided powerful tools for diagnosing lung cancer and planning therapeutic regimens. However, identifying the pathological class of lung cancer from CT images at an early stage is difficult because of the absence of powerful AI models and public training datasets. A Convolutional Neural Network (CNN) was proposed for its essential role in recognizing pathological CT images. 472 patients who underwent staging FDG-PET/CT within two months prior to surgery or biopsy were selected. The CNN was developed and showed accuracies of 87%, 69%, and 69% on the training, validation, and test sets, respectively, for T1-T2 versus T3-T4 lung cancer classification. The results indicate that the CNN classifier achieves better accuracy in distinguishing pathological CT images than several deep learning models, such as ResNet-34, AlexNet, and DenseNet, with or without softmax weights.

Image Deblurring Based on ADMM and Deep CNN Denoiser Image Prior (ADMM과 깊은 합성곱 신경망 잡음 제거기 이미지 Prior에 기반한 이미지 디블러링)

  • Kwon, Junhyeong;Soh, Jae Woong;Cho, Nam Ik
    • Proceedings of the Korean Society of Broadcast Engineers Conference
    • /
    • 2020.07a
    • /
    • pp.680-683
    • /
    • 2020
  • Model-based optimization methods have long been widely used for image deblurring, and recently learning-based techniques have shown good performance in image deblurring. This paper proposes a method that can exploit both the advantages of model-based optimization and those of learning-based methods by using ADMM and a deep convolutional neural network denoiser image prior. Using this method, we obtained better deblurring performance than existing methods.
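The combination described above follows the plug-and-play ADMM pattern: the data-fidelity subproblem has a closed-form Fourier-domain solution, while the prior subproblem is handled by an off-the-shelf denoiser (a deep CNN denoiser in the paper). In this minimal sketch a 3x3 mean filter stands in for the CNN denoiser, and the function names and parameter values are illustrative.

```python
import numpy as np

def mean_denoiser(img, k=3):
    """k x k mean filter standing in for the deep CNN denoiser."""
    pad = k // 2
    p = np.pad(img, pad, mode="edge")
    out = np.zeros_like(img, dtype=float)
    for dy in range(k):
        for dx in range(k):
            out += p[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out / (k * k)

def pnp_admm_deblur(y, kernel, denoiser, rho=0.1, iters=20):
    """Plug-and-play ADMM for deblurring y = k * x + n.
    x-update: closed-form Wiener-like solve in the Fourier domain;
    z-update: the denoiser prior step; u: scaled dual variable."""
    h, w = y.shape
    K = np.fft.fft2(kernel, s=(h, w))
    Y = np.fft.fft2(y)
    x, z, u = y.copy(), y.copy(), np.zeros_like(y)
    for _ in range(iters):
        # x-update: argmin_x ||k*x - y||^2 + rho*||x - (z - u)||^2
        rhs = np.conj(K) * Y + rho * np.fft.fft2(z - u)
        x = np.real(np.fft.ifft2(rhs / (np.abs(K) ** 2 + rho)))
        # z-update: denoise x + u (the image-prior step)
        z = denoiser(x + u)
        # dual update
        u = u + x - z
    return x
```

Swapping `mean_denoiser` for a trained CNN denoiser gives the model-based/learning-based hybrid the abstract describes.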


A Novel RFID Dynamic Testing Method Based on Optical Measurement

  • Zhenlu Liu;Xiaolei Yu;Lin Li;Weichun Zhang;Xiao Zhuang;Zhimin Zhao
    • Current Optics and Photonics
    • /
    • v.8 no.2
    • /
    • pp.127-137
    • /
    • 2024
  • The distribution of tags is an important factor that affects the performance of radio-frequency identification (RFID). To study RFID performance, it is necessary to obtain the coordinates of the RFID tags. However, RFID-based positioning has large errors and is easily affected by the environment. Therefore, a new method using optical measurement is proposed to achieve RFID performance analysis. First, because blurring may occur during image acquisition, the paper derives a new image prior for removing blur, and a nonlocal-means-based method for image deconvolution is proposed. Experimental results show that the PSNR and SSIM values of our algorithm are better than those of a learned deep convolutional neural network and fast total variation. Second, an RFID dynamic testing system based on photoelectric sensing technology is designed, from which the reading distance of the RFID tags and their three-dimensional coordinates are obtained. Finally, deep learning is used to model the relationship between RFID reading distance and tag distribution. The error is 3.02%, which is better than other algorithms such as a particle-swarm-optimization back-propagation neural network, an extreme learning machine, and a deep neural network. The paper proposes the use of optical methods to measure and collect RFID data, and to analyze and predict RFID performance, providing a new method for testing RFID performance.
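PSNR, one of the two image-quality metrics reported above, is computed directly from the mean squared error between a reference and a test image. A minimal sketch, assuming intensities lie in [0, data_range]:

```python
import numpy as np

def psnr(reference, test, data_range=1.0):
    """Peak signal-to-noise ratio in dB: 10 * log10(MAX^2 / MSE)."""
    mse = np.mean((reference.astype(float) - test.astype(float)) ** 2)
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * np.log10(data_range ** 2 / mse)
```

SSIM, the other reported metric, additionally compares local luminance, contrast, and structure, and is considerably more involved.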

Image Quality and Lesion Detectability of Lower-Dose Abdominopelvic CT Obtained Using Deep Learning Image Reconstruction

  • June Park;Jaeseung Shin;In Kyung Min;Heejin Bae;Yeo-Eun Kim;Yong Eun Chung
    • Korean Journal of Radiology
    • /
    • v.23 no.4
    • /
    • pp.402-412
    • /
    • 2022
  • Objective: To evaluate the image quality and lesion detectability of lower-dose CT (LDCT) of the abdomen and pelvis obtained using a deep learning image reconstruction (DLIR) algorithm compared with those of standard-dose CT (SDCT) images. Materials and Methods: This retrospective study included 123 patients (mean age ± standard deviation, 63 ± 11 years; male:female, 70:53) who underwent contrast-enhanced abdominopelvic LDCT between May and August 2020 and had prior SDCT obtained using the same CT scanner within a year. LDCT images were reconstructed with hybrid iterative reconstruction (h-IR) and DLIR at medium and high strengths (DLIR-M and DLIR-H), while SDCT images were reconstructed with h-IR. For quantitative image quality analysis, image noise, signal-to-noise ratio, and contrast-to-noise ratio were measured in the liver, muscle, and aorta. Among the three LDCT reconstruction algorithms, the one showing the smallest difference in quantitative parameters from those of the SDCT images was selected for qualitative image quality analysis and lesion detectability evaluation. For qualitative analysis, overall image quality, image noise, image sharpness, image texture, and lesion conspicuity were graded on a 5-point scale by two radiologists. Observer performance in focal liver lesion detection was evaluated by comparing the jackknife free-response receiver operating characteristic figures-of-merit (FOM). Results: LDCT (35.1% dose reduction compared with SDCT) images obtained using DLIR-M showed quantitative measures similar to those of SDCT with h-IR images. All qualitative parameters of LDCT with DLIR-M images except image texture were similar to or significantly better than those of SDCT with h-IR images. The lesion detectability on LDCT with DLIR-M images was not significantly different from that of SDCT with h-IR images (reader-averaged FOM, 0.887 vs. 0.874, respectively; p = 0.581). Conclusion: Overall image quality and the detectability of focal liver lesions are preserved in contrast-enhanced abdominopelvic LDCT obtained with DLIR-M relative to SDCT with h-IR.
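The quantitative analysis above measures image noise, SNR, and CNR within regions of interest. A minimal sketch of those ROI statistics follows; the CNR formula shown is one common definition among several variants, and the function names and masks are illustrative.

```python
import numpy as np

def roi_stats(image, mask):
    """Mean and standard deviation of pixels inside a boolean ROI mask
    (the standard deviation serves as the image-noise measurement)."""
    vals = image[mask]
    return vals.mean(), vals.std()

def snr(image, roi_mask):
    """Signal-to-noise ratio of an ROI: mean attenuation over noise."""
    mean, sd = roi_stats(image, roi_mask)
    return mean / sd

def cnr(image, roi_mask, background_mask):
    """Contrast-to-noise ratio: ROI-to-background mean difference
    divided by the background noise."""
    roi_mean, _ = roi_stats(image, roi_mask)
    bg_mean, bg_sd = roi_stats(image, background_mask)
    return abs(roi_mean - bg_mean) / bg_sd
```

In the study these quantities would be evaluated in ROIs drawn over the liver, muscle, and aorta.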

Deep CNN based Pilot Allocation Scheme in Massive MIMO systems

  • Kim, Kwihoon;Lee, Joohyung
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.14 no.10
    • /
    • pp.4214-4230
    • /
    • 2020
  • This paper introduces a pilot allocation scheme for massive MIMO systems based on deep convolutional neural network (CNN) learning. This work extends a prior work on a basic deep learning framework for the pilot assignment problem, which is difficult to apply in high-user-density settings owing to the factorial increase in both input features and output layers. To solve this problem, exploiting the advantages of CNNs in learning image data, we design input features that represent users' locations in all the cells as image data in a two-dimensional fixed-size matrix. Furthermore, using a sorting mechanism that applies a proper ordering rule, we construct output layers whose space complexity is linear in the number of users. We also develop a theoretical framework for the network capacity model of massive MIMO systems and apply it to the training process. Finally, we implement the proposed deep-CNN-based pilot assignment scheme using a standard ("vanilla") CNN, which takes shift-invariant characteristics into account. Through extensive simulation, we demonstrate that the proposed scheme achieves about 98% of the theoretical upper-bound performance with an elapsed time of 0.842 ms and low complexity under high-user-density conditions.
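The key encoding step above is turning an arbitrary number of user coordinates into a fixed-size 2-D matrix that a CNN can consume. A hypothetical sketch of such a rasterization follows; the grid size, normalization, and binary encoding are our assumptions, not the paper's exact parameters.

```python
import numpy as np

def location_image(user_xy, cell_size, grid=16):
    """Rasterize user coordinates within a cell of side `cell_size` into a
    fixed grid x grid matrix (1 where a user sits, 0 elsewhere), so the
    input shape is independent of the number of users."""
    img = np.zeros((grid, grid))
    for x, y in user_xy:
        col = min(int(x / cell_size * grid), grid - 1)
        row = min(int(y / cell_size * grid), grid - 1)
        img[row, col] = 1.0
    return img
```

Stacking one such matrix per cell yields an image-like tensor, which is what lets a shift-invariant CNN replace the factorially growing dense input of the earlier framework.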

Segmentation of Bacterial Cells Based on a Hybrid Feature Generation and Deep Learning (하이브리드 피처 생성 및 딥 러닝 기반 박테리아 세포의 세분화)

  • Lim, Seon-Ja;Vununu, Caleb;Kwon, Ki-Ryong;Youn, Sung-Dae
    • Journal of Korea Multimedia Society
    • /
    • v.23 no.8
    • /
    • pp.965-976
    • /
    • 2020
  • We present in this work a segmentation method for E. coli bacterial images generated via phase contrast microscopy, using deep-learning-based hybrid feature generation. Unlike conventional machine learning methods that use hand-crafted features, we adopt a denoising autoencoder in order to generate a precise and accurate representation of the pixels. We first construct a hybrid vector that combines the original image, the difference of Gaussians, and the image gradients. The created hybrid features are then given to a deep autoencoder that learns the pixels' internal dependencies and the cells' shape and boundary information. The latent representations learned by the autoencoder are used as the inputs of a softmax classification layer, and the direct outputs from the classifier represent the coarse segmentation mask. Finally, the classifier's outputs are used as prior information for a fine segmentation based on graph partitioning. We demonstrate that the proposed hybrid vector representation manages to preserve the global shape and boundary information of the cells, allowing the majority of the cellular patterns to be retrieved without any post-processing.
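The hybrid per-pixel vector described above combines the original intensity, a difference of Gaussians, and the image gradients. A minimal numpy sketch, where the sigma values and function names are our assumptions (the kernel radius must not exceed the image size for the `same`-mode convolution to preserve shape):

```python
import numpy as np

def gaussian_blur(img, sigma):
    """Separable Gaussian blur via 1-D convolution along each axis."""
    radius = max(1, int(3 * sigma))
    xs = np.arange(-radius, radius + 1)
    k = np.exp(-xs ** 2 / (2 * sigma ** 2))
    k /= k.sum()
    out = np.apply_along_axis(lambda r: np.convolve(r, k, mode="same"), 0, img)
    out = np.apply_along_axis(lambda r: np.convolve(r, k, mode="same"), 1, out)
    return out

def hybrid_features(img, sigmas=(1.0, 2.0)):
    """Per-pixel hybrid vector: intensity, difference of Gaussians, and
    the two image gradients, stacked along the last axis -> (H, W, 4)."""
    dog = gaussian_blur(img, sigmas[0]) - gaussian_blur(img, sigmas[1])
    gy, gx = np.gradient(img)
    return np.stack([img, dog, gy, gx], axis=-1)
```

Each pixel's 4-vector is what the denoising autoencoder consumes in place of hand-crafted features.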