• Title/Summary/Keyword: Attention U-Net (어텐션 유넷)

3 search results

Image-to-Image Translation Based on U-Net with R2 and Attention (R2와 어텐션을 적용한 유넷 기반의 영상 간 변환에 관한 연구)

  • Lim, So-hyun;Chun, Jun-chul
    • Journal of Internet Computing and Services, v.21 no.4, pp.9-16, 2020
  • In image processing and computer vision, the problem of translating one image into another or generating new images has drawn steady attention as hardware has advanced. However, computer-generated images often still look unnatural to the human eye. With the recent surge of deep-learning research, image generation and enhancement are being actively studied, and among the proposed approaches, the Generative Adversarial Network (GAN) performs particularly well at image generation. Many GAN variants have been presented since the original GAN, enabling the generation of more natural images than earlier methods. Among them, pix2pix is a conditional GAN model, a general-purpose network that performs well on a variety of datasets. pix2pix is built on U-Net, but several U-Net-based networks outperform the plain U-Net. Therefore, in this study, images are generated by replacing the U-Net in pix2pix with various networks, and the results are compared and evaluated. The generated images confirm that pix2pix with Attention, R2, and Attention-R2 networks outperforms the original U-Net-based pix2pix; examining the limitations of the best-performing network is suggested as future work.
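The attention mechanism compared in this study follows the additive attention gate of Attention U-Net, which re-weights U-Net skip-connection features using the coarser decoder signal. A minimal NumPy sketch of one gate follows; all weight names and shapes are illustrative assumptions, not the paper's code:

```python
import numpy as np

def attention_gate(x, g, W_x, W_g, psi):
    """Additive attention gate (Attention U-Net style), sketched in NumPy.

    x   : skip-connection features, shape (C_x, H, W)
    g   : gating signal from the decoder, shape (C_g, H, W)
    W_x : (C_int, C_x) 1x1-conv weights applied to x
    W_g : (C_int, C_g) 1x1-conv weights applied to g
    psi : (C_int,) weights mapping features to a scalar attention map
    Returns x scaled by a per-pixel attention coefficient in (0, 1).
    """
    # A 1x1 convolution is a channel-wise linear map: tensordot over channels.
    theta_x = np.tensordot(W_x, x, axes=([1], [0]))   # (C_int, H, W)
    phi_g = np.tensordot(W_g, g, axes=([1], [0]))     # (C_int, H, W)
    f = np.maximum(theta_x + phi_g, 0.0)              # ReLU
    # Sigmoid over the psi-projected features gives the attention map.
    alpha = 1.0 / (1.0 + np.exp(-np.tensordot(psi, f, axes=([0], [0]))))  # (H, W)
    return x * alpha                                  # gate the skip features

# Tiny example with random weights (illustrative only).
rng = np.random.default_rng(0)
x = rng.standard_normal((4, 8, 8))
g = rng.standard_normal((2, 8, 8))
gated = attention_gate(x, g,
                       W_x=rng.standard_normal((3, 4)),
                       W_g=rng.standard_normal((3, 2)),
                       psi=rng.standard_normal(3))
print(gated.shape)  # same shape as x: (4, 8, 8)
```

In a real network the gated features are concatenated with the upsampled decoder features, and the 1x1 convolutions are learned layers rather than fixed matrices.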

Application and Evaluation of the Attention U-Net Using UAV Imagery for Corn Cultivation Field Extraction (무인기 영상 기반 옥수수 재배필지 추출을 위한 Attention U-NET 적용 및 평가)

  • Shin, Hyoung Sub;Song, Seok Ho;Lee, Dong Ho;Park, Jong Hwa
    • Ecology and Resilient Infrastructure, v.8 no.4, pp.253-265, 2021
  • In this study, crop cultivation fields were extracted using Unmanned Aerial Vehicle (UAV) imagery and deep learning models, to overcome the limitations of satellite imagery and to contribute to technologies for monitoring the status of crop cultivation. The study area was set around Yidam-li, Gammul-myeon, Goesan-gun, Chungbuk, and orthomosaic images of the area were produced from the UAV images. In addition, training data for the deep learning model were collected using a Farm Map corrected through fieldwork. The Attention U-Net was used as the deep learning model to extract features from the UAV imagery. After training, the model's performance in extracting corn cultivation fields was evaluated on held-out (non-training) data. We report the model's precision, recall, and F1-score, which were 0.94, 0.96, and 0.92, respectively. This study showed that the method is effective for extracting corn cultivation fields and suggested its potential applicability to other crops.
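The precision, recall, and F1-score reported above are standard pixel-wise metrics for binary segmentation. A minimal sketch of how they are computed from a predicted mask and a ground-truth mask; the tiny masks below are made-up toy data, not the study's imagery:

```python
import numpy as np

def segmentation_metrics(pred, truth):
    """Pixel-wise precision, recall, and F1 for binary masks."""
    pred = pred.astype(bool)
    truth = truth.astype(bool)
    tp = np.sum(pred & truth)    # true positives: predicted field, is field
    fp = np.sum(pred & ~truth)   # false positives: predicted field, is not
    fn = np.sum(~pred & truth)   # false negatives: missed field pixels
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)
    return precision, recall, f1

# Toy 1-D "masks" standing in for corn-field pixels.
pred  = np.array([1, 1, 1, 0, 0, 1, 0, 0])
truth = np.array([1, 1, 0, 0, 1, 1, 0, 0])
p, r, f = segmentation_metrics(pred, truth)
print(round(p, 3), round(r, 3), round(f, 3))  # 0.75 0.75 0.75
```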

Contactless User Identification System using Multi-channel Palm Images Facilitated by Triple Attention U-Net and CNN Classifier Ensemble Models

  • Kim, Inki;Kim, Beomjun;Woo, Sunghee;Gwak, Jeonghwan
    • Journal of the Korea Society of Computer and Information, v.27 no.3, pp.33-43, 2022
  • In this paper, we propose an ensemble model that combines multi-channel palm images, attention U-Net models, and pretrained convolutional neural networks (CNNs) to build a contactless palm-based user identification system using conventional, inexpensive camera sensors. Attention U-Net models are used to extract the regions of interest, including hands (i.e., with fingers), palms (i.e., without fingers), and palm lines, which are combined into three channels fed into the ensemble classifier. The proposed palm-information-based user identification system then predicts the class using an ensemble of three top-performing pretrained CNN models. The proposed model achieves a classification accuracy, precision, recall, and F1-score of 98.60%, 98.61%, 98.61%, and 98.61%, respectively, indicating that it is effective even with very cheap image sensors. We believe that under COVID-19 pandemic circumstances, the proposed palm-based contactless user identification system can be a safe and reliable alternative to the currently prevalent contact-based systems.
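A classifier ensemble like the one described above is commonly realized as soft voting: the class-probability outputs of the member CNNs are averaged and the highest-scoring class wins. A minimal NumPy sketch; the probability vectors are made-up stand-ins for the three pretrained CNN outputs, and the paper may combine its members differently:

```python
import numpy as np

def soft_vote(prob_list):
    """Average per-model class probabilities and return the winning class."""
    avg = np.mean(prob_list, axis=0)   # (num_classes,)
    return int(np.argmax(avg)), avg

# Hypothetical softmax outputs of three CNNs for one palm image (4 users).
cnn_a = np.array([0.70, 0.10, 0.10, 0.10])
cnn_b = np.array([0.40, 0.35, 0.15, 0.10])
cnn_c = np.array([0.20, 0.50, 0.20, 0.10])
user, avg = soft_vote([cnn_a, cnn_b, cnn_c])
print(user)  # class 0 wins on the averaged probabilities
```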