• Title/Abstract/Keyword: 3D Expression Method

A Study on Reproductions of North American Smocking Design Using a 3D Virtual Clothing System (3차원 가상착의 시스템을 이용한 북아메리칸 스모킹 디자인 재현 연구)

  • Kim, Minkyoung
    • Journal of Fashion Business / v.24 no.5 / pp.106-124 / 2020
  • The purpose of this study was to analyze the three-dimensional (3D) characteristics and reproducibility of the effective expression of North American smocking pleats in the process of making clothes using a 3D virtual clothing system (CLO) and present a method of expression according to the types of North American smocking. In this study, lattice, lozenge, and flower smocking were produced as real smocking and 3D virtual content, and actual muslin properties were measured using a Fabric Kit and reflected using an emulator. The results of this study confirmed that a dense puckered design such as North American smocking could be expressed depending upon the internal line, fold angle, and reinforcement setting for 3D smocking. To partially apply pleats to flat fabrics, it was necessary to set fold lines. The fold line setting could be expressed by designing the internal line in horizontal, vertical, and diagonal directions according to the North American smocking design, and then setting the fold angle for each internal line. By setting fold angles of 0 degrees and 360 degrees according to the folding direction of the set internal line, the fabric was clearly folded and stable pleats were created. This study will contribute to the vitalization of the 3D virtual fashion content industry by analyzing and presenting the optimal expression method of sophisticated and complex pleats generated according to the North American smocking design pattern.
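As a minimal illustration of the fold-line idea described in this abstract, the sketch below encodes a lattice smocking grid as internal diagonal lines, each tagged with a fold angle of 0 or 360 degrees depending on fold direction. The `lattice_fold_lines` helper and its checkerboard alternation are hypothetical, not CLO's actual API or the paper's data:

```python
import math

def lattice_fold_lines(rows, cols, spacing):
    """Hypothetical sketch: generate internal fold lines for a lattice
    smocking grid. Diagonals alternate in a checkerboard pattern between
    fold angle 0 (folded fully one way) and 360 (folded fully the other
    way), mirroring the fold-angle convention described in the abstract."""
    lines = []
    for r in range(rows):
        for c in range(cols):
            x0, y0 = c * spacing, r * spacing
            x1, y1 = (c + 1) * spacing, (r + 1) * spacing
            # alternate the fold direction cell by cell
            angle = 0 if (r + c) % 2 == 0 else 360
            lines.append({"start": (x0, y0), "end": (x1, y1),
                          "fold_angle": angle,
                          "length": math.hypot(x1 - x0, y1 - y0)})
    return lines
```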

Realtime Facial Expression Representation Method For Virtual Online Meetings System

  • Zhu, Yinge;Yerkovich, Bruno Carvacho;Zhang, Xingjie;Park, Jong-il
    • Proceedings of the Korean Society of Broadcast Engineers Conference / fall / pp.212-214 / 2021
  • With Covid-19 now part of daily life, we have had to adapt to a new reality to keep our lifestyles as normal as possible; teleworking and online classes are examples. However, this new way of living raised several issues, among them the difficulty of knowing whether a real person is in front of the camera, or whether someone is paying attention during a lecture. We address this issue by creating a 3D reconstruction tool that actively identifies human faces and expressions. We use a web camera and a lightweight 3D face model, and fit expression coefficients from 2D facial landmarks to drive the 3D model. With this model, it is possible to represent a face with an avatar and fully control its bones with rotation and translation parameters. We propose these methods as a solution for reconstructing facial expressions during online meetings.
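The coefficient-fitting step this abstract mentions can be sketched as a least-squares fit of blendshape weights to the landmark offsets. The sketch below simplifies by assuming the blendshape displacement vectors are mutually orthogonal, so each weight solves independently; the function name and the 1D landmark layout are illustrative, not the authors' implementation:

```python
def fit_expression_coeffs(landmarks, neutral, blendshapes):
    """Least-squares blendshape weights from observed landmarks, under
    the simplifying assumption of orthogonal blendshape bases:
        w_k = <L - N, B_k> / <B_k, B_k>
    landmarks/neutral: flat lists of landmark coordinates;
    blendshapes: list of displacement vectors of the same length."""
    offset = [l - n for l, n in zip(landmarks, neutral)]
    coeffs = []
    for b in blendshapes:
        num = sum(o * bi for o, bi in zip(offset, b))
        den = sum(bi * bi for bi in b)
        coeffs.append(num / den if den else 0.0)
    return coeffs
```

In a real system the bases are not orthogonal and the weights are solved jointly (and usually clamped to [0, 1]); this shows only the shape of the problem.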

Realistic individual 3D face modeling (사실적인 3D 얼굴 모델링 시스템)

  • Kim, Sang-Hoon
    • The Journal of the Korea institute of electronic communication sciences / v.8 no.8 / pp.1187-1193 / 2013
  • In this paper, we present realistic 3D head modeling and facial expression systems. For 3D head modeling, we perform generic model fitting to create an individual head shape and apply texture mapping. To calculate the deformation function for the generic model fitting, we determine correspondences between individual heads and the generic model. We then reconstruct the feature points in 3D from simultaneously captured images of a calibrated stereo camera. For texture mapping, we project the fitted generic model onto the image and map the texture of the predefined triangle mesh onto the generic model. To prevent extracting the wrong texture, we propose a simple method using a modified interpolation function. For generating 3D facial expressions, we use a vector-muscle-based algorithm. For more realistic expressions, we add skin deformation according to jaw rotation to the basic vector muscle model and apply a mass-spring model. Finally, several 3D facial expression results are shown at the end of the paper.
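The mass-spring idea mentioned at the end of this abstract can be illustrated with a single explicit-Euler integration step over a chain of point masses. This is a generic textbook formulation, not the paper's skin model; the function name, unit masses, and 1D positions are assumptions for brevity:

```python
def mass_spring_step(positions, velocities, rest, springs, k, damping, dt):
    """One explicit-Euler step of a 1D mass-spring system (unit masses).
    positions/velocities: per-mass lists; springs: (i, j) index pairs;
    rest: dict mapping (i, j) to the spring's rest length."""
    forces = [0.0] * len(positions)
    for i, j in springs:
        d = positions[j] - positions[i]
        f = k * (d - rest[(i, j)])  # Hooke's law along the chain
        forces[i] += f
        forces[j] -= f
    # semi-implicit Euler: update velocity first, then position
    new_v = [(v + f * dt) * damping for v, f in zip(velocities, forces)]
    new_p = [p + v * dt for p, v in zip(positions, new_v)]
    return new_p, new_v
```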

3D Expression of Outdoor Railway Noise : NIC@E (철도 환경 소음의 3-D 표현: NIC@E)

  • 김준연;김정태
    • Proceedings of the KSR Conference / 2000.05a / pp.521-528 / 2000
  • NIC@E is software for the prediction of various types of outdoor noise. The program is based on the ray tracing technique, which has been widely used in environmental noise prediction and analysis. In this paper, we analyze railway noise for various types of geometrical source conditions in 3D and develop a 3D graphics expression method for noise levels.
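The paper does not give NIC@E's formulas, so the sketch below uses the standard free-field point-source relation as a stand-in for what a prediction tool computes per receiver: Lp = Lw − 20·log10(r) − 11 per source (spherical spreading, no ground or barrier terms), with levels combined energetically:

```python
import math

def spl_from_sources(receiver, sources):
    """Free-field sound pressure level at a receiver position from a set
    of point sources, each given as (x, y, z, Lw). Per-source levels
    follow spherical spreading; totals combine as 10*log10(sum 10^(L/10)).
    A textbook simplification, not NIC@E's actual ray-traced model."""
    total = 0.0
    for (x, y, z, lw) in sources:
        r = math.dist(receiver, (x, y, z))
        lp = lw - 20 * math.log10(r) - 11
        total += 10 ** (lp / 10)
    return 10 * math.log10(total)
```

A rail line would be discretized into many such point sources along the track before summing; that discretization is one of the "geometrical source conditions" the abstract refers to.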

A Vision-based Approach for Facial Expression Cloning by Facial Motion Tracking

  • Chun, Jun-Chul;Kwon, Oryun
    • KSII Transactions on Internet and Information Systems (TIIS) / v.2 no.2 / pp.120-133 / 2008
  • This paper presents a novel approach for facial motion tracking and facial expression cloning to create a realistic facial animation of a 3D avatar. The exact head pose estimation and facial expression tracking are critical issues that must be solved when developing vision-based computer animation. In this paper, we deal with these two problems. The proposed approach consists of two phases: dynamic head pose estimation and facial expression cloning. The dynamic head pose estimation can robustly estimate a 3D head pose from input video images. Given an initial reference template of a face image and the corresponding 3D head pose, the full head motion is recovered by projecting a cylindrical head model onto the face image. It is possible to recover the head pose regardless of light variations and self-occlusion by updating the template dynamically. In the phase of synthesizing the facial expression, the variations of the major facial feature points of the face images are tracked by using optical flow and the variations are retargeted to the 3D face model. At the same time, we exploit the RBF (Radial Basis Function) to deform the local area of the face model around the major feature points. Consequently, facial expression synthesis is done by directly tracking the variations of the major feature points and indirectly estimating the variations of the regional feature points. From the experiments, we can prove that the proposed vision-based facial expression cloning method automatically estimates the 3D head pose and produces realistic 3D facial expressions in real time.
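The RBF deformation step this abstract describes interpolates known displacements at the major feature points out to the surrounding vertices. The sketch below shows the standard Gaussian-RBF scatter interpolation (solve Φw = d at the controls, then evaluate); the function name, 1D displacements, and tiny Gauss-Jordan solver are illustrative choices, not the authors' code:

```python
import math

def rbf_deform(controls, displacements, points, sigma=1.0):
    """Gaussian-RBF interpolation of per-control displacements.
    controls: control-point coordinates; displacements: scalar offsets
    at those controls; points: where to evaluate the interpolant."""
    def phi(a, b):
        return math.exp(-sum((x - y) ** 2 for x, y in zip(a, b)) / (2 * sigma ** 2))
    n = len(controls)
    # augmented system [Phi | d], solved by Gauss-Jordan elimination
    A = [[phi(controls[i], controls[j]) for j in range(n)] + [displacements[i]]
         for i in range(n)]
    for col in range(n):
        pivot = max(range(col, n), key=lambda r: abs(A[r][col]))
        A[col], A[pivot] = A[pivot], A[col]
        for r in range(n):
            if r != col and A[col][col]:
                factor = A[r][col] / A[col][col]
                A[r] = [a - factor * b for a, b in zip(A[r], A[col])]
    w = [A[i][n] / A[i][i] for i in range(n)]
    return [sum(w[j] * phi(p, controls[j]) for j in range(n)) for p in points]
```

RBF interpolation is exact at the control points, which is why the regional vertices deform smoothly while the tracked feature points land exactly where the optical flow put them.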

The Study for Improvement of False Contour in the Plasma Display Panel (플라즈마 디스플레이 패널의 의사윤곽 개선에 관한 연구)

  • Shin, Jae-Hwa;Ha, Sung-Chul;Lee, Seok-Hyun
    • The Transactions of the Korean Institute of Electrical Engineers P / v.52 no.3 / pp.113-120 / 2003
  • Plasma display panels normally utilize a binary-coded light-emission scheme for gray scale expression. Consequently, this expression method produces dynamic false contours. We propose the "E3DSM (enhanced 3-dimension scattering method)", which improves the existing 3D scattering method, and the "HAM (histogram analysis method)", which decides driving schemes and subfield selections from image histograms. Simulation results show improved image quality.
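The binary-coded scheme behind the false-contour problem is easy to show: a frame is split into subfields with binary light-emission weights, and a pixel's gray level selects which subfields fire. Adjacent gray levels such as 127 and 128 flip every subfield, which is what produces dynamic false contours on moving edges. A minimal sketch (the function name is illustrative):

```python
def subfield_pattern(gray, weights=(128, 64, 32, 16, 8, 4, 2, 1)):
    """Decompose an 8-bit gray level into on/off bits for binary-coded
    subfields, highest weight first. Greedy subtraction over binary
    weights is equivalent to reading off the binary representation."""
    bits = []
    for w in weights:
        on = gray >= w
        bits.append(1 if on else 0)
        if on:
            gray -= w
    return bits
```

Note that levels 127 and 128 share no lit subfield at all, so when an edge between them moves across the retina, the light arrives at very different times within the frame, and the eye integrates a spurious bright or dark contour.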

3D Facial Landmark Tracking and Facial Expression Recognition

  • Medioni, Gerard;Choi, Jongmoo;Labeau, Matthieu;Leksut, Jatuporn Toy;Meng, Lingchao
    • Journal of information and communication convergence engineering / v.11 no.3 / pp.207-215 / 2013
  • In this paper, we address the challenging computer vision problem of obtaining a reliable facial expression analysis from a naturally interacting person. We propose a system that combines a 3D generic face model, 3D head tracking, and a 2D tracker to track facial landmarks and recognize expressions. First, we extract facial landmarks from a neutral frontal face, and then we deform a 3D generic face to fit the input face. Next, we use our real-time 3D head tracking module to track a person's head in 3D and predict facial landmark positions in 2D using the projection from the updated 3D face model. Finally, we use the tracked 2D landmarks to update the 3D landmarks. This integrated tracking loop enables efficient tracking of the non-rigid parts of a face in the presence of large 3D head motion. We conducted experiments on facial expression recognition using both frame-based and sequence-based approaches. Our method achieves a 75.9% recognition rate on 8 subjects with 7 key expressions. Our approach provides a considerable step forward toward new applications including human-computer interaction, behavioral science, robotics, and games.
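The "predict 2D from 3D" step of the tracking loop above amounts to rotating the 3D landmarks by the estimated head pose and projecting them through a pinhole camera. The sketch below restricts the pose to yaw plus a depth translation for brevity; the function name and parameters are assumptions, not the authors' interface:

```python
import math

def predict_landmarks_2d(landmarks_3d, yaw, tz, focal):
    """Predict 2D landmark positions from 3D model points under a yaw
    rotation and z-translation, via a pinhole projection (principal
    point at the origin). A full tracker would use a 6-DoF pose."""
    cy, sy = math.cos(yaw), math.sin(yaw)
    out = []
    for (x, y, z) in landmarks_3d:
        # rotate about the vertical (y) axis, then push back along z
        xr = cy * x + sy * z
        zr = -sy * x + cy * z + tz
        out.append((focal * xr / zr, focal * y / zr))
    return out
```

The predicted 2D positions then seed the 2D tracker, whose corrections are lifted back onto the 3D landmarks, closing the loop.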

Research on the Expression Features of Naked-eye 3D Effect of LED Screen Based on Optical Illusion Art

  • Fu, Linwei;Zhou, Jiani;Tae Soo, Yun
    • International Journal of Internet, Broadcasting and Communication / v.15 no.1 / pp.126-139 / 2023
  • At present, naked-eye 3D appears more and more commonly on the facades of urban buildings. It brings an incredible visual experience to the audience by simulating a natural 3D space effect, and it also creates enormous commercial value for city publicity and commercial advertisements. There is much research on naked-eye 3D visual effects, but little of it addresses right-angle LED screens. The right-angle LED screen is a brand-new expression method that has only become popular in recent years, and how to convey a realistic naked-eye 3D effect through two LED screens joined at a right angle is a problem worth exploring. To explore the overall design ideas and production process of the naked-eye 3D effect on right-angle LED screens, this paper presents a preliminary study aimed at understanding its performance principles and expression features. Before the case analysis, we first review standard virtual 3D space construction techniques. Combining these with optical illusion phenomena, the expression principle of the naked-eye 3D effect on right-angle LED screens can be summarized into seven expressions: shadow, color contrast, background structure line, magnified object, object out of bounds, object floating, and fusion of picture and background. By analyzing the optical illusion phenomena used in the cases, we summarize the main performance characteristics of the naked-eye 3D effect. The emergence of right-angle LED screens breaks the single-plane limitation of optical illusion art, perfectly combines building facades with naked-eye 3D visual effects, and provides designers with a brand-new creative platform. Understanding its production principles and main expressive features can help designers make better use of this innovative platform.

A Study on the producing of Non-realistic 3D Character Animation with the style of 2D Animation (비사실적 3D 캐릭터 애니메이션의 효과적인 2D 애니메이션 스타일 연출 연구)

  • Kim, Sungrae
    • Proceedings of the Korea Contents Association Conference / 2007.11a / pp.894-898 / 2007
  • Nowadays, many animations, including TV series, are made with 3D animation techniques. However, 3D animation can obstruct visual elements and reduce familiarity, owing to the limits of its unfamiliar material methods and the dispersion of light. For this reason, a large number of 3D animations are repackaged in the style of 2D animation. Most previous studies on converting 3D animation output to the style of 2D animation analyze 2D rendering techniques. In the case of non-realistic 3D character animation, what is needed first and foremost is an investigation of the basic production methods of 2D animation, which differ from the realistic expression methods of 3D animation. As a case study, we examine expression methods of non-realistic 2D character animation that would be impossible in real life. This study investigates the keynote of effective expression methods for producing 3D animation in the style of 2D animation.

Soft Sign Language Expression Method of 3D Avatar (3D 아바타의 자연스러운 수화 동작 표현 방법)

  • Oh, Young-Joon;Jang, Hyo-Young;Jung, Jin-Woo;Park, Kwang-Hyun;Kim, Dae-Jin;Bien, Zeung-Nam
    • The KIPS Transactions:PartB / v.14B no.2 / pp.107-118 / 2007
  • This paper proposes a 3D avatar which expresses sign language naturally, using lips, facial expression, complexion, pupil motion, and body motion as well as hand shape, hand posture, and hand motion, to overcome the limitations of conventional sign language avatars from a deaf person's viewpoint. To describe the motion data of the hands and other body components structurally and to enhance database performance, we introduce the concept of a hyper sign sentence. We show the superiority of the developed system through a usability test based on a questionnaire survey.
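The abstract does not define the hyper sign sentence's schema, so the sketch below is only a plausible shape for such a structure: each sign word bundles manual channels (hand shape/motion) with non-manual channels (lips, facial expression) so a database can store and reuse whole multi-channel units. All field names here are hypothetical:

```python
def make_hyper_sign_sentence(words):
    """Hypothetical 'hyper sign sentence' record: a sequence of sign
    words, each carrying manual and non-manual animation channels.
    words: iterable of (gloss, hand_shape, hand_motion, lips, expression)."""
    return {
        "words": [
            {
                "gloss": gloss,
                "manual": {"hand_shape": hs, "hand_motion": hm},
                "non_manual": {"lips": lips, "expression": expr},
            }
            for gloss, hs, hm, lips, expr in words
        ],
        "length": len(list(words)) if not isinstance(words, list) else len(words),
    }
```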