• Title/Summary/Keyword: avatar control


Realtime Facial Expression Control of 3D Avatar by PCA Projection of Motion Data (모션 데이터의 PCA투영에 의한 3차원 아바타의 실시간 표정 제어)

  • Kim Sung-Ho
    • Journal of Korea Multimedia Society
    • /
    • v.7 no.10
    • /
    • pp.1478-1484
    • /
    • 2004
  • This paper presents a method for controlling the facial expression of a 3D avatar in real time by having the user select a sequence of facial expressions in a facial expression space. The expression space is created from about 2,400 frames of facial expressions. To represent the state of each expression, we use a distance matrix that holds the distances between pairs of feature points on the face; the set of these distance matrices forms the space of expressions. The facial expression of the 3D avatar is controlled in real time as the user navigates the space. To support this navigation, we visualize the space of expressions in 2D using a Principal Component Analysis (PCA) projection. To see how effective the system is, we had users control the facial expressions of a 3D avatar with it, and the paper evaluates the results.

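As a rough illustration of the approach in the entry above, the sketch below projects distance-matrix expression features to 2D with PCA. It is a minimal sketch, not the paper's implementation: the 2,400-frame count comes from the abstract, but the number of feature points and the random data are assumptions standing in for the captured motion data.

```python
# Minimal sketch: visualize an expression space by PCA projection.
# Each frame is a set of facial feature points; the pairwise-distance
# matrix of those points encodes the expression state.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
n_frames, n_points = 2400, 20                      # 20 feature points is assumed
frames = rng.normal(size=(n_frames, n_points, 3))  # placeholder feature points

def distance_features(points):
    """Upper triangle of the pairwise-distance matrix, flattened."""
    diff = points[:, None, :] - points[None, :, :]
    dist = np.linalg.norm(diff, axis=-1)
    iu = np.triu_indices(len(points), k=1)
    return dist[iu]

features = np.stack([distance_features(f) for f in frames])

# Project the expression space to 2D for on-screen navigation.
xy = PCA(n_components=2).fit_transform(features)
print(xy.shape)  # (2400, 2): one navigable point per captured expression
```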

The Evaluation of the Work Motion Suitability of Men's Coverall Type Painting Work Clothes Using 3D Virtual Clothing Simulation (3차원 가상착의 시스템을 활용한 남성용 커버롤 도장 작업복의 작업동작 적합성 평가)

  • Park, Gin Ah
    • Journal of Fashion Business
    • /
    • v.24 no.4
    • /
    • pp.63-84
    • /
    • 2020
  • When developing work clothes for painting workers in the machinery and shipbuilding industries, it is essential to consider the heavy-industrial environmental factors regarded as harmful to workers' health and safety, as well as the motion factors that make the clothes suitable for the workers' movements. This study suggests 3D virtual clothing simulation as a way to develop painting work clothes that protect the body under hazardous working conditions and to assess work motion performance and comfort while the clothes are worn. The aims of the study were, first, to examine a male avatar running work motions within a 3D virtual clothing simulator; second, to present simulation images of coverall-type men's painting work clothes with two experimental painting work motions and one control motion applied to the avatar; and finally, to present distance analysis images between the work clothes and the avatar body, together with air gap rates obtained from cross-sections of the clothed avatar body for each work motion. The results showed the distance of the painting work clothes from the avatar body for each part of the body when performing the painting work motions. Moreover, the 3D virtual clothing simulation made it possible to create a male avatar that runs painting work motions, and the developed painting work clothes were found to be suitable for those motions.
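
The air gap rate mentioned in the entry above can be illustrated with a simple cross-section computation. This is a hedged sketch under my own assumptions (the listing does not give the paper's formula): treat a body cross-section and a garment cross-section as closed polygons and take the area difference relative to the garment area.

```python
# Hedged sketch: air gap rate from body/garment cross-section polygons.
# One common definition (assumed here, not necessarily the paper's):
#   air_gap_rate = (garment_area - body_area) / garment_area * 100
import numpy as np

def polygon_area(xy):
    """Shoelace formula for a closed 2D polygon given as an (N, 2) array."""
    x, y = xy[:, 0], xy[:, 1]
    return 0.5 * abs(np.dot(x, np.roll(y, 1)) - np.dot(y, np.roll(x, 1)))

def air_gap_rate(body_xy, garment_xy):
    body, garment = polygon_area(body_xy), polygon_area(garment_xy)
    return (garment - body) / garment * 100.0

# Toy cross-sections: an elliptical torso inside a slightly larger garment ring.
theta = np.linspace(0, 2 * np.pi, 200, endpoint=False)
body = np.c_[15 * np.cos(theta), 12 * np.sin(theta)]      # semi-axes in cm, assumed
garment = np.c_[17 * np.cos(theta), 14 * np.sin(theta)]
print(f"air gap rate: {air_gap_rate(body, garment):.1f}%")
```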

3D Emotional Avatar Creation and Animation using Facial Expression Recognition (표정 인식을 이용한 3D 감정 아바타 생성 및 애니메이션)

  • Cho, Taehoon;Jeong, Joong-Pill;Choi, Soo-Mi
    • Journal of Korea Multimedia Society
    • /
    • v.17 no.9
    • /
    • pp.1076-1083
    • /
    • 2014
  • We propose an emotional facial avatar that portrays the user's facial expressions with an emotional emphasis, while achieving visual and behavioral realism. This is achieved by unifying automatic analysis of facial expressions with the animation of realistic 3D faces that include details such as facial hair and hairstyles. To augment the facial appearance according to the user's emotions, we use emotional templates that represent typical emotions in an artistic way and can easily be combined with the skin texture of the 3D face at runtime. Our interface thus gives the user vision-based control over the facial animation of the emotional avatar, making it easy to change its mood.
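
The runtime combination of an emotional template with the skin texture, as described above, can be approximated by simple alpha blending. A minimal NumPy sketch follows; the texture contents, sizes, and the fixed blend weight are assumptions, not the paper's actual compositing method.

```python
# Hedged sketch: blend an artistic "emotion template" over a face skin texture.
# Assumes both textures are same-size float images; alpha marks template regions.
import numpy as np

def blend_emotion(skin_rgb, template_rgba, strength=1.0):
    """Alpha-composite the emotion template onto the skin texture.

    skin_rgb:      (H, W, 3) float array in [0, 1]
    template_rgba: (H, W, 4) float array in [0, 1]
    strength:      global emphasis factor for the emotion, in [0, 1]
    """
    alpha = template_rgba[..., 3:4] * strength
    return (1.0 - alpha) * skin_rgb + alpha * template_rgba[..., :3]

# Toy usage with synthetic textures standing in for real assets.
h, w = 256, 256
skin = np.full((h, w, 3), 0.8)                           # flat skin-tone placeholder
template = np.zeros((h, w, 4)); template[..., 0] = 1.0   # red "anger" overlay
template[100:150, 60:200, 3] = 0.5                       # active on the cheek region
angry_face = blend_emotion(skin, template, strength=0.8)
print(angry_face.shape)
```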

The Effect of Chatting in 3D Virtual Reality on College Students' English Learning (대학생들의 3D 가상현실을 이용한 채팅의 영어학습 효과)

  • Lee, Seon-Hye;Jeong, Dong-Bin
    • English Language & Literature Teaching
    • /
    • v.16 no.1
    • /
    • pp.233-257
    • /
    • 2009
  • The purpose of the present study was to examine the effect of 3D avatar-based virtual reality chatting on college students' English learning achievement. Forty college sophomores participated in the study, and the lessons continued for 8 weeks. The participants took a pretest to evaluate their vocabulary knowledge and writing skills, and their progress was assessed on the basis of a midterm examination. Finally, a post-questionnaire was administered to measure their interest in and understanding of the learning experience in 3D avatar-based virtual chatting. The results indicated a statistically significant difference between the control group and the experimental group with respect to their midterm scores and writing. The post-survey indicated that interest in English was rated significantly higher than understanding of English.

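The between-group comparison reported above is typically done with an independent-samples t-test; the abstract does not name the exact test, so the choice of Welch's t-test below is an assumption, and the score arrays are fabricated placeholders rather than the study's data.

```python
# Hedged sketch: compare experimental vs. control midterm scores.
# The numbers below are placeholders, not data from the study.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
control = rng.normal(loc=70, scale=8, size=20)       # 20 students per group (assumed)
experimental = rng.normal(loc=76, scale=8, size=20)  # VR-chatting group

t, p = stats.ttest_ind(experimental, control, equal_var=False)  # Welch's t-test
print(f"t = {t:.2f}, p = {p:.4f}")  # p < 0.05 would indicate a significant difference
```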

Development of 'Children's Food Avatar' Application for Dietary Education (식생활교육용 '어린이 푸드 아바타' 애플리케이션 개발)

  • Cho, Joo-Han;Kim, Sook-Bae;Kim, Soon-Kyung;Kim, Mi-Hyun;Kim, Gap-Soo;Kim, Se-Na;Kim, So-Young;Kim, Jeong-Weon
    • Korean Journal of Community Nutrition
    • /
    • v.18 no.4
    • /
    • pp.299-311
    • /
    • 2013
  • An educational application (app) called 'Children's Food Avatar' was developed in this study as a smart-learning tool for elementary school students, using a food database of nutrition and functionality from the Rural Development Administration (RDA). The app was designed to develop desirable dietary habits in children through an online activity of choosing foods for a meal from the RDA food database, presented as a Green Water Mill guide. A customized avatar system was introduced as a fun, interactive animation element for children: it provides a nutritional evaluation of the selected foods by changing its appearance, facial expression, and speech balloon, thereby giving children the chance to correct their food choices toward a balanced diet. In addition, a nutrition information menu was included in the app to help children understand various nutrients, their functions, and a healthy dietary life. When the app was used by 54 elementary school students for a week in November 2012, significant increases in the levels of dietary knowledge, attitude, and behavior were observed compared with those of the control group (p < 0.05, p < 0.01). Both the students and their teachers reported high levels of satisfaction, ranging from 4.30 to 4.89, so the app could be widely used as a smart-learning device for dietary education of elementary school students.
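
The avatar feedback loop described above can be sketched as a simple scoring rule: compare the nutrient totals of the selected foods against target ranges and map the result to an avatar state. Everything below (nutrient fields, thresholds, states) is a hypothetical illustration, not the app's actual logic or the RDA database schema.

```python
# Hedged sketch: map a meal's nutritional balance to an avatar reaction.
# Nutrient names, targets, and avatar states are illustrative assumptions only.
from dataclasses import dataclass

@dataclass
class Food:
    name: str
    kcal: float
    protein_g: float
    sodium_mg: float

TARGETS = {"kcal": (500, 800), "protein_g": (15, 35), "sodium_mg": (0, 900)}

def evaluate_meal(foods):
    totals = {
        "kcal": sum(f.kcal for f in foods),
        "protein_g": sum(f.protein_g for f in foods),
        "sodium_mg": sum(f.sodium_mg for f in foods),
    }
    in_range = sum(lo <= totals[k] <= hi for k, (lo, hi) in TARGETS.items())
    # Map the count of balanced nutrients to an avatar state + speech balloon.
    states = {3: ("happy", "Great balance!"), 2: ("neutral", "Almost there."),
              1: ("worried", "Let's swap a food."), 0: ("sad", "Try again!")}
    return totals, states[in_range]

meal = [Food("rice", 300, 6, 5), Food("grilled fish", 200, 22, 400)]
print(evaluate_meal(meal))
```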

Face Detection for Automatic Avatar Creation by using Deformable Template and GA (Deformable Template과 GA를 이용한 얼굴 인식 및 아바타 자동 생성)

  • Park Tae-Young;Kwon Min-Su;Kang Hoon
    • Journal of the Korean Institute of Intelligent Systems
    • /
    • v.15 no.1
    • /
    • pp.110-115
    • /
    • 2005
  • This paper proposes a method for detecting the contours of the face, eyes, and mouth in a color image in order to create an avatar automatically. First, we use the HSI color model to reduce the effect of varying lighting conditions, and we find skin regions in the input image by using a skin color defined on the HS plane. We then use deformable templates and a Genetic Algorithm (GA) to detect the contours of the face, eyes, and mouth. The deformable templates consist of B-spline curves and control-point vectors, which can represent various shapes of the face, eyes, and mouth, and the GA is a useful search procedure based on the mechanics of natural selection and genetics. Second, an avatar is created automatically from the detected contours using Fuzzy C-means clustering (FCM), where FCM is used to reduce the number of face colors. As a result, we could create avatars resembling hand-drawn caricatures that represent the user's identity, unlike those generated by existing methods.
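
The first step above, skin-region detection in a hue/saturation plane, can be sketched with OpenCV. The HSV color space and the threshold ranges below are assumptions standing in for the paper's HSI model and its learned skin-color region.

```python
# Hedged sketch: find candidate skin regions by thresholding hue/saturation.
# OpenCV exposes HSV rather than HSI; the ranges below are rough assumptions,
# not the skin-color region defined in the paper.
import cv2
import numpy as np

def skin_mask(bgr_image):
    hsv = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2HSV)
    lower = np.array([0, 40, 60], dtype=np.uint8)    # H, S, V lower bounds (assumed)
    upper = np.array([25, 180, 255], dtype=np.uint8)
    mask = cv2.inRange(hsv, lower, upper)
    # Clean up small speckles before contour fitting.
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))
    return cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)

image = cv2.imread("face.jpg")          # hypothetical input image path
if image is not None:
    mask = skin_mask(image)
    cv2.imwrite("skin_mask.png", mask)  # white pixels mark candidate skin regions
```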

A Study on the Establishment of Metaverse-based Police Education and Training Model (메타버스 기반 경찰 교육훈련모델 구축 방안에 관한 연구)

  • Oh, Seiyouen
    • Journal of the Society of Disaster Information
    • /
    • v.18 no.3
    • /
    • pp.487-494
    • /
    • 2022
  • Purpose: This study proposes a Metaverse-based police education and training model that can efficiently improve the performance of various police activities in response to changes in the social environment. Method: In the proposed system, trainees operate avatar controllers realized with HMD and haptic technology, connect through a network interface, and are educated and trained individually or as teams through a command-and-control module, an education-and-training content module, and an analysis module. Result: In the proposed model, incorporating the command-and-control module into individual or team-based training enables organic collaborative training among team members by monitoring the overall terrorism or crime situation in real time. Conclusion: Metaverse-based individual or team-based police education and training can provide a more efficient and safe training environment based on immersion, interaction, and rapid judgment in various situations.
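
The module layout described above suggests a straightforward component structure. The sketch below is a hypothetical reading of that architecture in Python; the class names and methods are my own, not APIs from the paper.

```python
# Hedged sketch of the described architecture: avatar controllers connect via a
# network interface, and a command-and-control module coordinates team training.
# All names and methods are hypothetical readings of the paper's module list.
from dataclasses import dataclass, field

@dataclass
class AvatarController:            # HMD + haptic input for one trainee
    trainee_id: str
    def send_action(self, action: str) -> dict:
        return {"trainee": self.trainee_id, "action": action}

@dataclass
class CommandControlModule:        # monitors the shared scenario in real time
    events: list = field(default_factory=list)
    def monitor(self, event: dict):
        self.events.append(event)  # e.g. forward to instructors / analysis module

@dataclass
class TrainingSession:
    scenario: str                  # e.g. a team-based crime-response scenario
    c2: CommandControlModule
    controllers: list
    def step(self, actions: dict):
        for ctrl in self.controllers:
            self.c2.monitor(ctrl.send_action(actions.get(ctrl.trainee_id, "hold")))

session = TrainingSession("hostage response", CommandControlModule(),
                          [AvatarController("officer-1"), AvatarController("officer-2")])
session.step({"officer-1": "advance", "officer-2": "cover"})
print(session.c2.events)
```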

Realtime Facial Expression Representation Method For Virtual Online Meetings System

  • Zhu, Yinge;Yerkovich, Bruno Carvacho;Zhang, Xingjie;Park, Jong-il
    • Proceedings of the Korean Society of Broadcast Engineers Conference
    • /
    • fall
    • /
    • pp.212-214
    • /
    • 2021
  • In a society with Covid-19 as part of our daily lives, we have had to adapt to a new reality to keep our lifestyles as normal as possible; teleworking and online classes are examples. However, several issues appeared as this new way of living began. One of them is the difficulty of knowing whether a real person is in front of the camera, or whether someone is paying attention during a lecture. We address this issue by creating a 3D reconstruction tool that actively identifies human faces and expressions. We use a web camera and a lightweight 3D face model, and fit expression coefficients from 2D facial landmarks to drive the 3D model. With this model, it is possible to represent our faces with an avatar and fully control its bones with rotation and translation parameters. We therefore propose the above method as our solution for reconstructing facial expressions during online meetings.

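Fitting expression coefficients from 2D landmarks, as in the entry above, is often posed as a linear least-squares problem over blendshape weights. The sketch below shows that formulation under my own assumptions (a generic blendshape model and orthographic projection), not the authors' exact pipeline.

```python
# Hedged sketch: fit blendshape expression coefficients to 2D landmarks.
# Model (assumed, generic): landmarks2d ≈ P @ (neutral + B @ w), with an
# orthographic projection P; solve for weights w by bounded least squares.
import numpy as np
from scipy.optimize import lsq_linear

n_landmarks, n_blendshapes = 68, 10
rng = np.random.default_rng(2)

neutral = rng.normal(size=(3 * n_landmarks,))          # neutral 3D landmarks
B = rng.normal(size=(3 * n_landmarks, n_blendshapes))  # blendshape deltas
P = np.kron(np.eye(n_landmarks), np.eye(2, 3))         # orthographic drop-z

w_true = rng.uniform(0, 1, size=n_blendshapes)
observed2d = P @ (neutral + B @ w_true)                # synthetic observation

# Solve min ||(P @ B) w - (observed2d - P @ neutral)||^2 subject to 0 <= w <= 1.
res = lsq_linear(P @ B, observed2d - P @ neutral, bounds=(0, 1))
print(np.round(res.x - w_true, 6))  # ≈ 0: recovered expression coefficients
```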

The Control of Avatar Motion using Hand Gestures (손 제스처를 이용한 아바타 동작 제어)

  • 이찬수;김상원;박찬종
    • Proceedings of the Korean Society for Emotion and Sensibility Conference
    • /
    • 1998.04a
    • /
    • pp.124-129
    • /
    • 1998
  • Gestures are a natural means of human communication and can provide a natural interface between people and computers. In this study, a hand gesture recognition system is used to issue commands that generate appropriate movements. Sixteen gestures are defined to produce ten basic motions of an avatar in a virtual environment, and the avatar's movements are generated by recognizing these gestures.

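A gesture-to-motion mapping of the kind described above can be sketched as a lookup table from recognized gesture labels to avatar motion commands. The gesture and motion names below are invented placeholders; the paper defines 16 gestures and 10 motions, but this listing does not enumerate them.

```python
# Hedged sketch: dispatch recognized hand gestures to avatar motions.
# Gesture/motion names are invented; the paper's 16 gestures and 10 motions
# are not enumerated in this listing.
GESTURE_TO_MOTION = {
    "open_palm_forward": "walk_forward",
    "open_palm_back": "walk_backward",
    "fist": "stop",
    "point_left": "turn_left",
    "point_right": "turn_right",
    "wave": "greet",
    # ... remaining gestures map onto the rest of the basic motions
}

def control_avatar(gesture_label: str) -> str:
    """Return the avatar motion command for a recognized gesture."""
    return GESTURE_TO_MOTION.get(gesture_label, "idle")  # unknown gestures -> idle

for g in ["fist", "wave", "thumbs_up"]:
    print(g, "->", control_avatar(g))
```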

Facial Expression Control of 3D Avatar by Hierarchical Visualization of Motion Data (모션 데이터의 계층적 가시화에 의한 3차원 아바타의 표정 제어)

  • Kim, Sung-Ho;Jung, Moon-Ryul
    • The KIPS Transactions:PartA
    • /
    • v.11A no.4
    • /
    • pp.277-284
    • /
    • 2004
  • This paper presents a facial expression control method for a 3D avatar that enables the user to select a sequence of facial frames from the facial expression space, whose level of detail the user can choose hierarchically. Our system creates the facial expression space from about 2,400 captured facial frames. Because there are too many facial expressions to select from directly, the user has difficulty navigating the space, so we visualize the space hierarchically. To partition the space into a hierarchy of subspaces, we use fuzzy clustering. Initially, the system creates about 11 clusters from the space of 2,400 facial expressions. The cluster centers are displayed on a 2D screen and are used as candidate key frames for key-frame animation. When the user zooms in (zooming is discrete), the user wants to see more detail, so the system creates more clusters for the new zoom level; every time the zoom level increases, the system doubles the number of clusters. The user then selects new key frames along the navigation path of the previous level. At the maximum zoom-in, the user completes the facial expression control specification. The user can also go back to the previous level by zooming out and update the navigation path. We let users control the facial expression of a 3D avatar with the system and evaluate the system based on the results.
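
The hierarchical visualization above, roughly 11 clusters at the top level with the cluster count doubling at each zoom step, can be sketched as repeated clustering. The sketch below uses scikit-learn's KMeans as a stand-in for the fuzzy clustering named in the paper, and the data are random placeholders for the 2,400 captured frames.

```python
# Hedged sketch: hierarchical cluster levels over an expression space.
# KMeans stands in for the paper's fuzzy clustering; data are placeholders.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(3)
expressions = rng.normal(size=(2400, 50))   # 2,400 frames; 50-D features assumed

def cluster_levels(data, base_clusters=11, levels=4):
    """Return candidate key frames per zoom level, doubling clusters each level."""
    centers_per_level = []
    k = base_clusters
    for _ in range(levels):
        km = KMeans(n_clusters=k, n_init=10, random_state=0).fit(data)
        centers_per_level.append(km.cluster_centers_)  # candidate key frames
        k *= 2                                         # zoom in: double the clusters
    return centers_per_level

for level, centers in enumerate(cluster_levels(expressions)):
    print(f"zoom level {level}: {len(centers)} candidate key frames")
```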