• Title/Summary/Keyword: Skinned Multi-Person Linear (SMPL) model

2 search results

K-SMPL: Korean Body Measurement Data Based Parametric Human Model (K-SMPL: 한국인 체형 데이터 기반의 매개화된 인체 모델)

  • Choi, Byeoli; Lee, Sung-Hee
    • Journal of the Korea Computer Graphics Society / v.28 no.4 / pp.1-11 / 2022
  • The Skinned Multi-Person Linear model (SMPL) is the most widely used parametric 3D human model, learned from CAESAR, a 3D human scan database built from measurements of 3,800 people living in the United States in the 1990s. We point out the lack of racial diversity of body types in SMPL and propose K-SMPL, which better represents Korean 3D body shapes. To this end, we develop a fitting algorithm to estimate 2,773 Korean 3D body shapes from Korean body measurement data. By applying principal component analysis to the estimated Korean body shapes, we construct the K-SMPL model, which can generate a wide range of Korean body shapes in 3D. K-SMPL improves fitting accuracy over SMPL with respect to the Korean body measurement data, and can be widely used for avatar generation and human shape fitting for Koreans.
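
    The abstract describes the standard SMPL-style recipe: estimate a set of 3D body shapes, run principal component analysis on them, and use the resulting mean and principal directions as a parametric shape space. A minimal sketch of that step, using random placeholder data in place of the 2,773 fitted Korean bodies (array sizes and the number of shape parameters are illustrative assumptions, not values from the paper):

    ```python
    import numpy as np

    # Placeholder data: N fitted 3D body meshes, each flattened to 3V coordinates.
    # (The paper uses 2,773 fitted Korean bodies on the SMPL mesh topology.)
    rng = np.random.default_rng(0)
    N, V = 200, 50
    bodies = rng.normal(size=(N, 3 * V))

    # PCA via SVD of the mean-centered shapes: the mean is the template body,
    # and the top singular vectors are the principal shape directions.
    mean_shape = bodies.mean(axis=0)
    U, S, Vt = np.linalg.svd(bodies - mean_shape, full_matrices=False)
    K = 10                      # number of shape parameters ("betas"); assumed
    shape_basis = Vt[:K]        # (K, 3V) orthonormal principal components

    # A new body shape is generated as mean + beta-weighted components.
    betas = rng.normal(size=K)
    new_body = mean_shape + betas @ shape_basis   # (3V,) flattened vertices
    ```

    Fitting a measured body then reduces to solving for the `betas` that best reproduce its measurements, which is what lets the authors compare fitting accuracy between the SMPL and K-SMPL bases.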

3D Reconstruction of a Single Clothing Image and Its Application to Image-based Virtual Try-On (의상 이미지의 3차원 의상 복원 방법과 가상착용 응용)

  • Ahn, Heejune; Minar, Matiur Rahman
    • Journal of Korea Society of Industrial Information Systems / v.25 no.5 / pp.1-11 / 2020
  • Image-based virtual try-on (VTON) is becoming popular for online apparel shopping, mainly because it requires no 3D information about the try-on clothes or the target humans. However, existing 2D algorithms, even when utilizing advanced non-rigid deformation algorithms, cannot handle the large spatial transformations required for complex target human poses. In this study, we propose a 3D clothing reconstruction method using a 3D human body model. The resulting 3D models of try-on clothes can be deformed more easily when fitted to rest-posed standard human models. The poses and shapes of the 3D clothing models can then be transferred to the target human models estimated from 2D images. Finally, the deformed clothing models can be rendered and blended with the target human representations. Experimental results on the VITON dataset used in previous work show that the shapes of the reconstructed clothing are significantly more natural than 2D image-based deformation results when human poses and shapes are estimated accurately.
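
    The pose-and-shape transfer step described above is commonly done by skinning the reconstructed clothing vertices to the body model's skeleton, so the clothing follows whatever pose the target body takes. A minimal sketch of linear blend skinning, a standard mechanism for this (the function and toy data below are illustrative assumptions, not the paper's actual implementation):

    ```python
    import numpy as np

    def lbs(vertices, weights, joint_transforms):
        """Linear blend skinning: deform vertices by a weighted blend of
        per-joint rigid transforms taken from the underlying body model.
        vertices: (V, 3), weights: (V, J), joint_transforms: (J, 4, 4)."""
        n = vertices.shape[0]
        homo = np.hstack([vertices, np.ones((n, 1))])            # (V, 4)
        # Each vertex gets its own 4x4 transform: the weight-blended joints.
        per_vertex = np.einsum('vj,jab->vab', weights, joint_transforms)
        deformed = np.einsum('vab,vb->va', per_vertex, homo)
        return deformed[:, :3]

    # Toy usage: a single joint with a pure translation moves every vertex.
    verts = np.zeros((4, 3))
    w = np.ones((4, 1))                 # all vertices fully bound to joint 0
    T = np.eye(4)[None].copy()
    T[0, :3, 3] = [1.0, 0.0, 0.0]       # translate joint 0 by +1 along x
    moved = lbs(verts, w, T)            # every vertex translated to (1, 0, 0)
    ```

    In a try-on pipeline, `joint_transforms` would come from the pose estimated for the target human image, so the rest-posed clothing mesh is re-posed consistently with the body before rendering and blending.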