• Title/Summary/Keyword: VTON


Performance Evaluation of VTON (Virtual-Try-On) Algorithms using a Pair of Cloth and Human Image (이미지를 사용한 가상의상착용 알고리즘들의 성능 분석)

  • Tuan, Thai Thanh; Minar, Matiur Rahman; Ahn, Heejune
    • Journal of Korea Society of Industrial Information Systems / v.24 no.6 / pp.25-34 / 2019
  • VTON (virtual try-on) is a key technology that can activate online commerce for fashion items. However, early 3D-graphics-based methods require 3D information about the clothing or the human body, which is difficult to obtain in practice. To overcome this problem, image-based deep-learning algorithms such as VITON (virtual image try-on) and CP-VTON (characteristic-preserving virtual try-on) have been published, but only sampled performance results were presented. To examine their strengths and weaknesses for commercialization, performance must be analyzed according to the complexity of the clothes, the subject's posture and body shape, and the degree of occlusion of the clothes. In this paper, IoU and SSIM were evaluated for the performance of the transformation and synthesis stages, together with a non-deep-learning SCM-based method. As a result, CP-VTON shows the best performance, but its performance varies significantly with posture and clothing complexity. The reasons were attributed to the limitations of the 2D geometric deformation and of the GAN-based synthesis technology.
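The two metrics named in this abstract, IoU (for the warping/transformation stage) and SSIM (for the synthesis stage), can be sketched in a few lines of numpy. This is a minimal illustration, not the paper's evaluation code; the single-window SSIM below omits the sliding Gaussian window used in standard implementations.

```python
import numpy as np

def iou(mask_a: np.ndarray, mask_b: np.ndarray) -> float:
    """Intersection over Union of two boolean segmentation masks."""
    inter = np.logical_and(mask_a, mask_b).sum()
    union = np.logical_or(mask_a, mask_b).sum()
    return float(inter) / float(union) if union else 1.0

def global_ssim(x: np.ndarray, y: np.ndarray, L: float = 1.0) -> float:
    """SSIM computed over the whole image as one window (no sliding window).
    L is the dynamic range of pixel values (1.0 for images in [0, 1])."""
    c1, c2 = (0.01 * L) ** 2, (0.03 * L) ** 2
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return float((2 * mx * my + c1) * (2 * cov + c2)
                 / ((mx ** 2 + my ** 2 + c1) * (vx + vy + c2)))
```

Identical inputs yield IoU = 1 and SSIM = 1; disjoint masks and dissimilar images score lower.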

An Improved VTON (Virtual-Try-On) Algorithm using a Pair of Cloth and Human Image (이미지를 사용한 가상의상착용을 위한 개선된 알고리즘)

  • Minar, Matiur Rahman; Tuan, Thai Thanh; Ahn, Heejune
    • Journal of Korea Society of Industrial Information Systems / v.25 no.2 / pp.11-18 / 2020
  • Recently, a series of studies on image-based virtual try-on (VTON) have been published. A comparison study analyzed representative methods (the SCMM-based non-deep-learning method and the deep-learning-based VITON and CP-VTON) using clothing and user images, according to the person's posture and body type, the degree of occlusion of the clothes, and the characteristics of the clothes. In this paper, we tackle the problems observed in the best-performing method, CP-VTON. The issues addressed are segmentation of the subject, pixel generation in unintended areas, the missing warped-cloth mask, and the cost function used in training; we modify the algorithm to improve each. The results show some improvement in SSIM, and a significant improvement in subjective evaluation.
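One of the issues this abstract raises is the cost function and the warped-cloth mask in CP-VTON's try-on stage. As a hedged sketch of the general idea (not the paper's actual loss), CP-VTON-style try-on modules combine a pixel reconstruction term with a term tying the predicted composition mask to the warped-cloth region; the VGG perceptual term used in practice is omitted here, and `lam_mask` is an assumed weighting hyperparameter.

```python
import numpy as np

def tryon_loss(i_out, i_gt, comp_mask, warped_mask, lam_mask=1.0):
    """Sketch of a try-on training objective:
    L1 reconstruction between output and ground-truth images, plus a term
    encouraging the composition mask to follow the warped-cloth mask.
    (Real CP-VTON-style losses add a VGG perceptual term, omitted here.)"""
    l1 = np.abs(i_out - i_gt).mean()
    mask_term = np.abs(comp_mask - warped_mask).mean()
    return float(l1 + lam_mask * mask_term)
```

The loss is zero when the output matches the ground truth and the masks agree, and grows with either reconstruction error or mask disagreement.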

Keypoints-Based 2D Virtual Try-on Network System

  • Pham, Duy Lai; Nguyen, Nhat Tan; Chung, Sun-Tae
    • Journal of Korea Multimedia Society / v.23 no.2 / pp.186-203 / 2020
  • Image-based virtual try-on systems, which fit a target clothing item onto a model person's image, are among the most promising solutions for virtual fitting and have attracted considerable research effort. In many cases, current solutions fail to achieve a natural-looking fitted image in which the target clothes are transferred onto the body of a model person of arbitrary shape and pose while preserving clothing details such as texture, text, and logos without distortion or artifacts. In this paper, we propose an improved image-based virtual try-on network based on keypoints, which we name KP-VTON. KP-VTON first detects keypoints in the target clothes and reliably predicts the corresponding keypoints in the clothes of the model person's image by utilizing dense human pose estimation. Then, through a TPS transformation computed with the keypoints as control points, a warped clothing image matched to the body area wearing the target clothes is obtained. Finally, a new try-on module adopting Attention U-Net handles more detailed synthesis of the fitted image. Extensive experiments on a well-known dataset show that the proposed KP-VTON performs better than state-of-the-art virtual try-on systems.
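The core geometric step described above, a thin-plate-spline (TPS) transformation driven by matched keypoint pairs, can be written out directly. The following numpy sketch fits and evaluates a 2D TPS mapping given source and destination control points; it illustrates the standard TPS math only, not KP-VTON's implementation, and warping an actual image would additionally require sampling pixels through this mapping.

```python
import numpy as np

def tps_fit(src, dst):
    """Fit a 2D thin-plate spline mapping src control points onto dst.
    src, dst: (n, 2) arrays of matched keypoints."""
    n = len(src)
    d2 = ((src[:, None, :] - src[None, :, :]) ** 2).sum(-1)
    # TPS radial kernel U(r) = r^2 log r = 0.5 * d2 * log(d2)
    K = np.where(d2 > 0, 0.5 * d2 * np.log(np.maximum(d2, 1e-12)), 0.0)
    P = np.hstack([np.ones((n, 1)), src])         # affine part
    A = np.zeros((n + 3, n + 3))
    A[:n, :n] = K
    A[:n, n:] = P
    A[n:, :n] = P.T
    b = np.zeros((n + 3, 2))
    b[:n] = dst
    return np.linalg.solve(A, b), src

def tps_apply(params, pts):
    """Map arbitrary 2D points through a fitted TPS."""
    W, src = params
    d2 = ((pts[:, None, :] - src[None, :, :]) ** 2).sum(-1)
    U = np.where(d2 > 0, 0.5 * d2 * np.log(np.maximum(d2, 1e-12)), 0.0)
    P = np.hstack([np.ones((len(pts), 1)), pts])
    return U @ W[:len(src)] + P @ W[len(src):]
```

By construction the fitted spline interpolates the control points exactly, which is why keypoint quality matters so much in this pipeline: a bad keypoint match is reproduced faithfully in the warp.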

Online Virtual Try On using Mannequin Cloth Pictures (마네킨 의상사진 기반 온라인 가상의상착용)

  • Ahn, Heejune
    • Journal of Korea Society of Industrial Information Systems / v.23 no.6 / pp.29-38 / 2018
  • In this paper, we developed a virtual try-on (VTON) technology that segments the clothing image worn on a mannequin and applies it to the user's photograph. Two-dimensional image-based virtual try-on, which does not require 3D information about the clothes or the model, is of practical value, but previous results show that current technology is limited by occlusion and distortion. In this study, assuming that mannequin-worn clothes reduce these difficulties, we proposed an algorithm that applies the results of DNN-based segmentation and pose estimation to the user's photograph. To improve performance over the existing method, we used validity checking of the estimated pose information, outline-based refinement of the deformation, and refinement of the segmented regions. As a result, the output images improved significantly, by more than 50%.

3D Reconstruction of a Single Clothing Image and Its Application to Image-based Virtual Try-On (의상 이미지의 3차원 의상 복원 방법과 가상착용 응용)

  • Ahn, Heejune; Minar, Matiur Rahman
    • Journal of Korea Society of Industrial Information Systems / v.25 no.5 / pp.1-11 / 2020
  • Image-based virtual try-on (VTON) is becoming popular for online apparel shopping, mainly because it does not require 3D information about the try-on clothes or the target humans. However, existing 2D algorithms, even those using advanced non-rigid deformation, cannot handle the large spatial transformations required by complex target poses. In this study, we propose a 3D clothing reconstruction method using a 3D human body model. The resulting 3D clothing models, reconstructed on a rest-posed standard human model, can be deformed more easily: their poses and shapes are transferred to target human models estimated from 2D images, and the deformed clothing models are then rendered and blended with the target human representations. Experimental results on the VITON dataset used in previous works show that, when human poses and shapes are estimated accurately, the reconstructed clothing shapes are significantly more natural than the 2D image-based deformation results.
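The transfer step described here, deforming clothing reconstructed on a rest-posed body so it follows a target body, can be illustrated with a deliberately simplified toy: represent each clothing vertex as an offset from its nearest body vertex in rest pose, then re-apply those offsets on the target body. This is only a conceptual sketch under that assumed representation; the paper works with full 3D body models (pose and shape estimated from 2D images), not this nearest-vertex heuristic.

```python
import numpy as np

def transfer_clothing(body_rest, cloth_rest, body_target):
    """Toy clothing transfer: bind each cloth vertex to its nearest body
    vertex in rest pose, then carry the per-vertex offset onto the target
    body. body_rest/body_target: (m, 3); cloth_rest: (k, 3)."""
    # nearest body vertex for every cloth vertex (squared distances)
    d = ((cloth_rest[:, None, :] - body_rest[None, :, :]) ** 2).sum(-1)
    nearest = d.argmin(axis=1)
    offsets = cloth_rest - body_rest[nearest]     # rest-pose displacement
    return body_target[nearest] + offsets         # re-apply on target body
```

If the target body equals the rest body the clothing is unchanged, and rigid motions of the body carry the clothing along, which is the minimal behavior any such transfer must have.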

Virtual Fitting System Using Deep Learning Methodology: HR-VITON Based on Weight Sharing, Mixed Precison & Gradient Accumulation (딥러닝 의류 가상 합성 모델 연구: 가중치 공유 & 학습 최적화 기반 HR-VITON 기법 활용)

  • Lee, Hyun Sang; Oh, Se Hwan; Ha, Sung Ho
    • The Journal of Information Systems / v.31 no.4 / pp.145-160 / 2022
  • Purpose: The purpose of this study is to develop a virtual try-on deep-learning model that can efficiently learn from front and back clothing images, which is expected to stimulate the adoption of virtual try-on services in the fashion and textile industry. Design/methodology/approach: This study used 232,355 clothing and product images. The image data input to the model fall into five categories: the original clothing image, the wearer image, the clothing segmentation, the wearer's DensePose heatmap, and the wearer's clothing-agnostic representation. We advanced the HR-VITON model with mixed precision, gradient accumulation, and shared model weights. Findings: We demonstrated that the weight-shared MP-GA HR-VITON model can efficiently learn front and back fashion images. The proposed model quantitatively improves the quality of the generated images compared with the existing technique, and natural fitting is possible in both front and back views: SSIM was 0.8385 for CP-VTON versus 0.9204 for the proposed model, LPIPS 0.2133 versus 0.0642, FID 74.5421 versus 11.8463, and KID 0.064 versus 0.006. With the proposed model it is possible to fit solid-color clothes naturally, but complex pictures and logos, as shown in <Figure 6>, produce unnatural patterns in the generated image; a transformer-based extension may improve this.
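Two of the training optimizations named in this abstract, mixed precision and gradient accumulation, can be sketched framework-independently. The numpy toy below runs the forward pass of a linear model in float16 while keeping float32 master weights and a float32 gradient accumulator, and applies one optimizer step per `accum` micro-batches. This is a hand-rolled illustration of the two ideas, not the paper's HR-VITON training code; production mixed precision (e.g. framework AMP utilities) additionally uses dynamic loss scaling, omitted here.

```python
import numpy as np

def train_step(w_master, batches, lr=0.1, accum=4):
    """One optimizer step of least-squares regression over `accum`
    micro-batches: fp16 forward pass, fp32 master weights, fp32 gradient
    accumulation, single weight update at the end."""
    grad = np.zeros_like(w_master)                 # fp32 accumulator
    for x, y in batches[:accum]:
        w16 = w_master.astype(np.float16)          # low-precision weight copy
        pred = x.astype(np.float16) @ w16          # fp16 forward pass
        err = pred.astype(np.float32) - y          # residual, back in fp32
        g = 2.0 * x.T @ err / len(x)               # MSE gradient for this batch
        grad += g / accum                          # accumulate, averaged
    return w_master - lr * grad                    # one update per accum batches
```

Gradient accumulation lets the effective batch size (here `accum` micro-batches) exceed what fits in memory at once, at the cost of one weight update per group of micro-batches.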

Implementation of Secondhand Clothing Trading System with Deep Learning-Based Virtual Fitting Functionality (딥러닝 기반 가상 피팅 기능을 갖는 중고 의류 거래 시스템 구현)

  • Inhwan Jung; Kitae Hwang; Jae-Moon Lee
    • The Journal of the Institute of Internet, Broadcasting and Communication / v.24 no.1 / pp.17-22 / 2024
  • This paper presents the implementation of a secondhand clothing trading system with deep-learning-based virtual fitting. The system lets users visually try on secondhand clothing items online and assess their fit. To achieve this, it uses a convolutional neural network (CNN) to create virtual representations of users that account for their body shape and the design of the clothing. Buyers can thus pre-assess the fit of clothing items online before actually wearing them, aiding their purchase decisions, while sellers can present accurate clothing sizes and fits through the system, enhancing customer satisfaction. The paper covers the CNN model's training process, the system implementation, and user feedback, and validates the effectiveness of the proposed system through experimental results.
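The abstract does not detail its CNN architecture, so as background only, here is the basic building block every CNN shares: a 2D convolution layer (implemented, as deep-learning frameworks do, as cross-correlation). This is a generic illustration, not the paper's model.

```python
import numpy as np

def conv2d(img, kernel):
    """Valid-mode 2D cross-correlation (the 'convolution' used in CNNs):
    slide the kernel over the image and take the elementwise-product sum."""
    kh, kw = kernel.shape
    h, w = img.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = (img[i:i + kh, j:j + kw] * kernel).sum()
    return out
```

Stacking such layers with nonlinearities and pooling yields the feature hierarchies that let a CNN relate body shape and clothing design in images.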