• Title/Summary/Keyword: Skin Color Model


Automatic Facial Expression Recognition using Tree Structures for Human Computer Interaction (HCI를 위한 트리 구조 기반의 자동 얼굴 표정 인식)

  • Shin, Yun-Hee; Ju, Jin-Sun; Kim, Eun-Yi; Kurata, Takeshi; Jain, Anil K.; Park, Se-Hyun; Jung, Kee-Chul
    • Journal of Korea Society of Industrial Information Systems, v.12 no.3, pp.60-68, 2007
  • In this paper, we propose an automatic facial expression recognition system that analyzes facial expressions (happiness, disgust, surprise, and neutral) using tree structures based on heuristic rules. The facial region is first obtained using a skin-color model and connected-component (CC) analysis. Thereafter, the origins of the user's eyes are localized using a neural network (NN)-based texture classifier, and the remaining facial features are localized using heuristics. After the facial features are detected, facial expression recognition is performed using a decision tree. To assess the validity of the proposed system, we tested it on 180 facial images from the MMI, JAFFE, and VAK databases. The results show that our system achieves an accuracy of 93%.
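
The first stage of this pipeline is easy to prototype. A minimal sketch, assuming OpenCV and a fixed YCrCb threshold range (the bounds are illustrative, not the paper's skin-color model, and the NN and decision-tree stages are not reproduced):

    import cv2
    import numpy as np

    def detect_face_region(bgr_image):
        # Skin tones cluster tightly in the CrCb plane of YCrCb space.
        ycrcb = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2YCrCb)
        lower = np.array([0, 133, 77], dtype=np.uint8)    # illustrative bounds
        upper = np.array([255, 173, 127], dtype=np.uint8)
        mask = cv2.inRange(ycrcb, lower, upper)
        # Connected-component analysis: keep the largest skin blob as the face.
        n, labels, stats, _ = cv2.connectedComponentsWithStats(mask)
        if n <= 1:
            return None  # no skin component found
        largest = 1 + int(np.argmax(stats[1:, cv2.CC_STAT_AREA]))
        x, y, w, h = stats[largest, :4]  # left, top, width, height
        return (int(x), int(y), int(w), int(h))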


The Comparative Analysis of 3D Software Virtual and Actual Wedding Dress

  • Yuan, Xin-Yi; Bae, Soo-Jeong
    • Journal of Fashion Business, v.21 no.6, pp.47-65, 2017
  • This study compares a wedding dress made entirely in 3D software with the same dress produced as an actual garment and worn by a real model, using a common set of tools for comparative analysis. The study was conducted through a literature review together with the production of the dresses. Two wedding dresses for a small wedding ceremony were designed, and each design was realized both as a 3D garment and as an actual garment. The results are as follows. First, the 3D whole-body scanner reflects exact human body measurements; however, there were difficulties in matching what the customer wanted because of differences in skin color and hair style. Second, the dress pattern is altered much more easily than in real production. Third, the silhouette of the virtual dress and that of the actual person wearing the dress were nearly the same. Fourth, the textile tool was much more convenient because of real-time rendering of the virtual dresses. Lastly, the lace and bead decorations were flat, and the luster was duller than in reality. Prospectively, consumers will be able to choose among a variety of designs through the use of an avatar without wearing actual dresses, and to request designs different from the presented ones by making corrections themselves. Through this process, the consumer would actively participate in the design, a step that would finally lead to two-way designing rather than the one-way design of the present.

Eye Tracking Using Neural Network and Mean-shift (신경망과 Mean-shift를 이용한 눈 추적)

  • Kang, Sin-Kuk; Kim, Kyung-Tai; Shin, Yun-Hee; Kim, Na-Yeon; Kim, Eun-Yi
    • Journal of the Institute of Electronics Engineers of Korea CI, v.44 no.1, pp.56-63, 2007
  • In this paper, an eye tracking method is presented that uses a neural network (NN) and the mean-shift algorithm to accurately detect and track a user's eyes against a cluttered background. In the proposed method, to deal with rigid head motion, the facial region is first obtained using a skin-color model and connected-component analysis. Thereafter, the eye regions are localized using an NN-based texture classifier that discriminates the facial region into eye and non-eye classes, which enables our method to accurately detect users' eyes even if they wear glasses. Once the eye regions are localized, they are continuously and correctly tracked by the mean-shift algorithm. To assess the validity of the proposed method, it was applied to an interface system using eye movement and tested with a group of 25 users playing an 'aligns' game. The results show that the system processes more than 30 frames/sec on a PC for 320×240 input images and provides user-friendly and convenient access to a computer in real-time operation.
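
The tracking stage maps directly onto OpenCV's meanShift. A minimal sketch, assuming an eye window has already been localized; the initial window coordinates and the use of hue back-projection as the feature image are placeholders, not the authors' values:

    import cv2

    cap = cv2.VideoCapture(0)           # any video source
    ok, frame = cap.read()
    x, y, w, h = 200, 120, 40, 24       # hypothetical initial eye window
    roi = cv2.cvtColor(frame[y:y + h, x:x + w], cv2.COLOR_BGR2HSV)
    # Hue histogram of the eye region, used for back-projection.
    roi_hist = cv2.calcHist([roi], [0], None, [180], [0, 180])
    cv2.normalize(roi_hist, roi_hist, 0, 255, cv2.NORM_MINMAX)
    term = (cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 10, 1.0)
    window = (x, y, w, h)
    while cap.isOpened():
        ok, frame = cap.read()
        if not ok:
            break
        hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
        back_proj = cv2.calcBackProject([hsv], [0], roi_hist, [0, 180], 1)
        # Mean-shift climbs to the densest peak of the back-projection.
        _, window = cv2.meanShift(back_proj, window, term)
        x, y, w, h = window
        cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
        cv2.imshow('eye tracking', frame)
        if cv2.waitKey(30) & 0xFF == 27:  # Esc to quit
            break
    cap.release()
    cv2.destroyAllWindows()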

Vision-based Motion Control for the Immersive Interaction with a Mobile Augmented Reality Object (모바일 증강현실 물체와 몰입형 상호작용을 위한 비전기반 동작제어)

  • Chun, Jun-Chul
    • Journal of Internet Computing and Services, v.12 no.3, pp.119-129, 2011
  • Vision-based human-computer interaction is an emerging field of science and industry that provides a natural way for humans and computers to communicate. In particular, recently increasing demand for mobile augmented reality requires the development of efficient interaction technologies between augmented virtual objects and users. This paper presents a novel approach to constructing and controlling a marker-less mobile augmented reality object. Replacing a traditional marker, the human hand is used as the interface for the marker-less mobile augmented reality system. To implement the marker-less mobile augmented system within the limited resources of a mobile device, compared with desktop environments, we propose a method that extracts an optimal hand region to play the role of the marker and augments the object in real time using the camera attached to the mobile device. The optimal hand-region detection consists of detecting the hand region with a YCbCr skin-color model and extracting the optimal rectangular region with the Rotating Calipers algorithm. The extracted optimal rectangular region takes the role of a traditional marker. The proposed method resolves the problem of losing track of the fingertips when the hand is rotated or occluded in hand-marker systems. The experiments show that the proposed framework can effectively construct and control the augmented virtual object in mobile environments.
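
A minimal sketch of the hand-marker extraction, assuming OpenCV: skin pixels are thresholded in YCbCr and the largest skin contour is wrapped in a minimum-area rotated rectangle (cv2.minAreaRect is a rotating-calipers implementation). The threshold bounds are illustrative, not the paper's trained values.

    import cv2
    import numpy as np

    def hand_marker_rect(bgr_image):
        # Threshold skin tones in the CrCb plane of YCbCr space.
        ycrcb = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2YCrCb)
        mask = cv2.inRange(ycrcb,
                           np.array([0, 133, 77], np.uint8),
                           np.array([255, 173, 127], np.uint8))
        contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                       cv2.CHAIN_APPROX_SIMPLE)
        if not contours:
            return None
        hand = max(contours, key=cv2.contourArea)
        # Rotating calipers: the tightest rotated rectangle around the
        # hand blob, which then plays the role of a fiducial marker.
        rect = cv2.minAreaRect(hand)       # ((cx, cy), (w, h), angle)
        corners = cv2.boxPoints(rect)      # its four corner points
        return rect, corners.astype(int)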

Histochemical Analysis of the Cutaneous Wound Healing in the Amphibian (양서류 피부 상처회복과정에 대한 조직화학적 분석)

  • Lim, Do-Sun; Jeong, Soon-Jeong; Jeong, Je-O; Park, Joo-Cheol; Kim, Heung-Joong; Moon, Myung-Jin; Jeong, Moon-Jin
    • Applied Microscopy, v.34 no.1, pp.1-11, 2004
  • Wound healing is a very complex biological process that includes inflammation, re-epithelialization, and matrix construction. Healing ability differs greatly across biological taxa: lower animal groups are generally known to heal better than higher ones. Therefore, lower animals have been used as experimental models to explore the mechanisms of wound healing and repair. To examine the histochemical characteristics of wound healing, we used the skin of the frog Bombina orientalis, a common amphibian. At days 1, 10, and 16, mucous substances were synthesized very actively and were strongly positive by PAS and Alcian blue (pH 2.5) staining. From day 10 after wounding, the margin of the wound became progressively strongly positive by PTAH staining, indicating collagen synthesis. At 3 to 6 hours and at days 23 to 27, we found active cell division by MG-P staining, in which the concentration and division of nuclear DNA appeared green to deep blue.

Welfare Interface using Multiple Facial Features Tracking (다중 얼굴 특징 추적을 이용한 복지형 인터페이스)

  • Ju, Jin-Sun; Shin, Yun-Hee; Kim, Eun-Yi
    • Journal of the Institute of Electronics Engineers of Korea SP, v.45 no.1, pp.75-83, 2008
  • We propose a welfare interface using multiple facial feature tracking that can efficiently implement various mouse operations. The proposed system consists of five modules: face detection, eye detection, mouth detection, facial feature tracking, and mouse control. The facial region is first obtained using a skin-color model and connected-component (CC) analysis. Thereafter, the eye regions are localized using a neural network (NN)-based texture classifier that discriminates the facial region into eye and non-eye classes, and the mouth region is then localized using an edge detector. Once the eye and mouth regions are localized, they are continuously and correctly tracked by the mean-shift algorithm and template matching, respectively. Based on the tracking results, mouse operations such as movement and clicks are implemented. To assess the validity of the proposed system, it was applied to an interface system for a web browser and tested on a group of 25 users. The results show that our system achieves an accuracy of 99% and processes more than 21 frames/sec on a PC for 320×240 input images, so it can provide user-friendly and convenient access to a computer in real-time operation.
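
A minimal sketch of the mouth-tracking stage, assuming OpenCV: the mouth patch cropped at initialization is searched for in each new grayscale frame with normalized cross-correlation. The 0.6 acceptance threshold is a hypothetical choice, not a value from the paper.

    import cv2

    def track_mouth(frame_gray, mouth_template, min_score=0.6):
        # Slide the template over the frame; TM_CCOEFF_NORMED scores every
        # position in [-1, 1], higher meaning a better match.
        scores = cv2.matchTemplate(frame_gray, mouth_template,
                                   cv2.TM_CCOEFF_NORMED)
        _, best_score, _, best_loc = cv2.minMaxLoc(scores)
        if best_score < min_score:
            return None  # track lost; fall back to re-detection
        h, w = mouth_template.shape[:2]
        x, y = best_loc
        return (x, y, w, h)  # tracked mouth window in frame coordinates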

3D Facial Animation with Head Motion Estimation and Facial Expression Cloning (얼굴 모션 추정과 표정 복제에 의한 3차원 얼굴 애니메이션)

  • Kwon, Oh-Ryun; Chun, Jun-Chul
    • The KIPS Transactions: Part B, v.14B no.4, pp.311-320, 2007
  • This paper presents a vision-based 3D facial expression animation technique and system that provide robust 3D head pose estimation and real-time facial expression control. Much research on 3D face animation has focused on facial expression control itself rather than on 3D head motion tracking. However, head motion tracking is one of the critical issues to be solved for developing realistic facial animation. In this research, we developed an integrated animation system that includes both 3D head motion tracking and facial expression control. The proposed system consists of three major phases: face detection, 3D head motion tracking, and facial expression control. For face detection, using the non-parametric HT skin-color model and template matching, we can detect the facial region efficiently from a video frame. For 3D head motion tracking, we exploit a cylindrical head model that is projected onto the initial head motion template. Given an initial reference template of the face image and the corresponding head motion, the cylindrical head model is created and the full head motion is tracked based on the optical flow method. For facial expression cloning we utilize a feature-based method: the major facial feature points are detected from the geometric information of the face with template matching and tracked by optical flow. Since the locations of the varying feature points combine head motion and facial expression information, the animation parameters that describe the variation of the facial features are acquired from the geometrically transformed frontal head pose image. Finally, facial expression cloning is done by a two-step fitting process: the control points of the 3D model are displaced by applying the animation parameters to the face model, and the non-feature points around the control points are deformed using Radial Basis Functions (RBF). The experiments show that the developed vision-based animation system can create realistic facial animation with robust head pose estimation and facial variation from input video images.
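
The closing RBF step admits a compact sketch: given displacements at the facial control points, a Gaussian RBF interpolant propagates them to the surrounding non-feature points. The Gaussian kernel and its width below are hypothetical choices, not values from the paper.

    import numpy as np

    def rbf_deform(control_pts, control_disp, vertices, sigma=30.0):
        """control_pts:  (k, 2) control point positions
        control_disp: (k, 2) displacements from the animation parameters
        vertices:     (n, 2) non-feature points to deform"""
        # Gaussian RBF matrix over control-point pairs.
        d = np.linalg.norm(control_pts[:, None] - control_pts[None, :], axis=2)
        phi = np.exp(-(d / sigma) ** 2)                  # (k, k)
        # Weights that make the interpolant reproduce control_disp exactly.
        weights = np.linalg.solve(phi, control_disp)     # (k, 2)
        # Evaluate the interpolant at every non-feature vertex.
        dv = np.linalg.norm(vertices[:, None] - control_pts[None, :], axis=2)
        return vertices + np.exp(-(dv / sigma) ** 2) @ weights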

Clinical Experience of VNUS® ClosureFAST in Treatment of Varicose Vein: Comparison with Traditional Radiofrequency Ablation (하지정맥류 치료를 위한 2세대 고주파 열폐쇄술(VNUS® ClosureFAST)과 기존의 고주파 열폐쇄술(VNUS® ClosurePLUS)의 임상치험 비교 분석)

  • Kim, Woo-Shik; Lee, Jeong-Sang; Jeong, Seong-Cheol; Shin, Vong-Chul
    • Journal of Chest Surgery, v.43 no.6, pp.635-641, 2010
  • Background: Radiofrequency endovenous ablation of the incompetent saphenous vein has been gaining popularity over conventional ligation and stripping as a minimally invasive technique. The latest version of the radiofrequency endovenous catheter, VNUS® ClosureFAST (VNUS Medical Technologies, San Jose, CA), adopted a segmental ablation system instead of continuous pullback and is designed to reduce treatment time compared with the previous model, VNUS® ClosurePLUS (VNUS Medical Technologies, San Jose, CA). The purpose of this study is to compare the two endovenous radiofrequency ablation systems in terms of treatment efficacy and complication rates. We analyzed the initial efficacy and complication rates of VNUS® ClosureFAST against VNUS® ClosurePLUS. Material and Method: Between June 2006 and August 2009, VNUS® ClosurePLUS was performed to treat varicose veins on 59 limbs in 41 patients, and VNUS® ClosureFAST was performed on 76 limbs in 67 patients. We retrospectively compared the two groups with respect to sex, mean treatment time, mean treated vein diameter, combined treatments, and complications after the procedure. Result: All patients were symptomatic, diagnosed with varicose veins, and assigned clinical class 2 with color duplex scanning. The mean treatment time for the great saphenous vein was significantly less with VNUS® ClosureFAST (17.0±6.5 min) than with VNUS® ClosurePLUS (62.7±9.8 min). There was no significant difference in the 1-year closure rate between the groups (p=0.32). Minor complications such as skin burn, thrombophlebitis, ecchymosis, hematoma, cellulitis, and tenderness did not differ between the groups. Conclusion: Both VNUS® ClosureFAST and VNUS® ClosurePLUS are effective methods of endovenous saphenous ablation. VNUS® ClosureFAST is superior to the previous model, with less treatment time while preserving comparable efficacy and complications. The efficacy of VNUS® ClosureFAST for long-term closure rates remains to be established.