• Title/Summary/Keyword: virtual character

Search Results: 225

Study of expression in virtual character of facial smile by emotion recognition (감성인식에 따른 가상 캐릭터의 미소 표정변화에 관한 연구)

  • Lee, Dong-Yeop
    • Cartoon and Animation Studies
    • /
    • s.33
    • /
    • pp.383-402
    • /
    • 2013
  • In this study, we apply the Facial Action Coding System (FACS), an anatomical approach to coding the facial muscular system, to the facial expressions a virtual character displays in response to emotional changes, and verify the approach by applying the Duchenne smile to a virtual character. Duchenne smiles were extracted through an emotion-induction experiment with four theater and film students (two men, two women) trained for the experiment. Based on the extracted expressions, data on the facial muscles were collected: the frequency of activation of the muscles around the mouth and lips and of the other facial muscles was calculated and applied to the virtual character. Contraction of the Zygomatic Major pulls the corners of the lips upward and raises the cheeks, while contraction of the Orbicularis Oculi raises the lower eyelid, producing the look of a smile. Movement of the Zygomatic Major was observed together with the muscles around the nose (AU9) and the muscles around the mouth associated with mouth opening (AU25, AU26, AU27). The Duchenne smile occurred when the Orbicularis Oculi and the Zygomatic Major moved at the same time. Based on this, we separate the Orbicularis Oculi, which responds to genuine emotion, from the Zygomatic Major, which can be moved voluntarily, apply both to the virtual character, and examine whether observers can distinguish the virtual character's genuine smiles from posed ones.
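
The abstract does not specify how the extracted action-unit data were mapped onto the character; a minimal sketch of one common approach, driving blendshape weights from FACS action-unit activations, might look like the following. The AU-to-blendshape table and the threshold are illustrative assumptions, not the paper's actual data.

```python
# Minimal sketch: drive a virtual character's smile from FACS action units.
# AU6 = Orbicularis Oculi (cheek raiser), AU12 = Zygomatic Major (lip corner
# puller), AU25/AU26 = mouth opening. The mapping below is an assumption.

AU_TO_BLENDSHAPE = {
    "AU6": "cheekRaise",      # Orbicularis Oculi
    "AU12": "lipCornerPull",  # Zygomatic Major
    "AU25": "lipsPart",
    "AU26": "jawDrop",
}

def smile_blendshapes(au_activations: dict) -> dict:
    """Map AU activations (0..1) to blendshape weights."""
    return {shape: au_activations.get(au, 0.0)
            for au, shape in AU_TO_BLENDSHAPE.items()}

def is_duchenne(au_activations: dict, threshold: float = 0.3) -> bool:
    """A Duchenne smile requires AU6 and AU12 to fire together."""
    return (au_activations.get("AU6", 0.0) > threshold and
            au_activations.get("AU12", 0.0) > threshold)

if __name__ == "__main__":
    genuine = {"AU6": 0.8, "AU12": 0.9, "AU25": 0.4}
    posed = {"AU12": 0.9}  # Zygomatic Major alone: a voluntary smile
    print(smile_blendshapes(genuine), is_duchenne(genuine))  # True
    print(smile_blendshapes(posed), is_duchenne(posed))      # False
```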

Motion Patches (모션 패치)

  • Choi, Myung-Geol;Lee, Kang-Hoon;Lee, Je-Hee
    • Journal of KIISE:Computer Systems and Theory
    • /
    • v.33 no.1_2
    • /
    • pp.119-127
    • /
    • 2006
  • Real-time animation of human figures is an important problem in the context of computer games and virtual environments. Recently, the use of large collections of captured motion data has added realism to character animation. However, when the virtual environment is large and complex, the effort of capturing motion data in a physical environment and adapting it to an extended virtual environment becomes the bottleneck for achieving interactive character animation and control. We present a new technique for allowing animated characters to navigate through a large virtual environment constructed from a small set of building blocks. The building blocks can be tiled or aligned in a repeating pattern to create a large environment. We annotate each block with a motion patch, which describes what motions are available to animated characters within that block. We demonstrate the versatility and flexibility of our approach through examples in which multiple characters are animated and controlled at interactive rates in large, complex virtual environments.
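
The abstract leaves the patch representation unspecified; as a rough sketch under stated assumptions, each building block might carry a list of motion clips with block-local entry and exit points, which a character queries at tile boundaries. All names and coordinates here are hypothetical.

```python
from dataclasses import dataclass, field

@dataclass
class MotionClip:
    name: str
    entry: tuple      # (x, y) entry point in block-local coordinates
    exit: tuple       # (x, y) exit point in block-local coordinates
    duration: float   # seconds

@dataclass
class MotionPatch:
    """Annotates one building block with the motions available inside it."""
    clips: list = field(default_factory=list)

    def motions_from(self, entry):
        return [c for c in self.clips if c.entry == entry]

# A large environment tiled from a small set of annotated blocks.
stair_patch = MotionPatch([MotionClip("climb", (0, 0), (0, 1), 2.0)])
floor_patch = MotionPatch([MotionClip("walk", (0, 0), (1, 0), 1.0),
                           MotionClip("idle", (0, 0), (0, 0), 3.0)])
environment = [[floor_patch, floor_patch],
               [floor_patch, stair_patch]]

# A character entering the stair block at (0, 0) asks what it can do there.
print([c.name for c in environment[1][1].motions_from((0, 0))])  # ['climb']
```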

Realtime Attention System of Autonomous Virtual Character using Image Feature Map (시각적 특징 맵을 이용한 자율 가상 캐릭터의 실시간 주목 시스템)

  • Cha, Myaung-Hee;Kim, Ky-Hyub;Cho, Kyung-Eun;Um, Ky-Hyun
    • Journal of Korea Multimedia Society
    • /
    • v.12 no.5
    • /
    • pp.745-756
    • /
    • 2009
  • An autonomous virtual character can behave like a human once it recognizes and interprets its virtual environment. Artificial vision is the main means by which a virtual character perceives that environment. Existing artificial vision systems, however, take in all the information from everything in view at once. Saving this much information reduces the efficiency and realism of the system and slows it down in the dynamic environment of a game. To construct a vision system similar to that of humans, a visual attention system that saves only the required information is needed. This research therefore focuses on an artificial intelligence engine that detects the most important information the character visually perceives in the virtual world and saves it into memory incrementally. In addition, the visual system is constructed in accordance with a theory of visual perception so that it senses and recognizes its surroundings in a human-like way. In experiments in a dynamic three-dimensional virtual environment, the system found the attention areas of moving objects quickly and effectively, and processing speed improved by more than a factor of 1.6.
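
The paper's feature maps are not reproduced here; a minimal sketch of the selective-attention idea, assuming simple hand-weighted feature cues (motion, proximity, size) and a bounded memory, could look like this. The weights and the capacity value are illustrative assumptions.

```python
# Sketch of selective attention: score visible objects by simple feature
# cues and store only the top-scoring object in memory, instead of saving
# everything in view at once.

def salience(obj):
    w_motion, w_near, w_size = 0.5, 0.3, 0.2   # illustrative weights
    return (w_motion * obj["speed"]
            + w_near * (1.0 / max(obj["distance"], 0.1))
            + w_size * obj["size"])

def attend(visible_objects, memory, capacity=5):
    """Pick the most salient object and memorize it, bounded by capacity."""
    target = max(visible_objects, key=salience)
    memory.append(target["id"])
    del memory[:-capacity]          # forget oldest entries beyond capacity
    return target

memory = []
scene = [{"id": "tree", "speed": 0.0, "distance": 3.0, "size": 2.0},
         {"id": "npc",  "speed": 1.5, "distance": 5.0, "size": 1.0}]
print(attend(scene, memory)["id"], memory)  # the moving NPC wins attention
```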


Analysis of User's Eye Gaze Distribution while Interacting with a Robotic Character (로봇 캐릭터와의 상호작용에서 사용자의 시선 배분 분석)

  • Jang, Seyun;Cho, Hye-Kyung
    • The Journal of Korea Robotics Society
    • /
    • v.14 no.1
    • /
    • pp.74-79
    • /
    • 2019
  • In this paper, we develop a virtual experimental environment for investigating users' eye gaze in human-robot social interaction and verify its potential for further studies. The system consists of a 3D robot character capable of hosting simple interactions with a user, and a gaze processing module that records which body part of the robot character (such as the eyes, mouth, or arms) the user is looking at, regardless of whether the robot is stationary or moving. To verify that results acquired in this virtual environment align with those of physically present robots, we conducted robot-guided quiz sessions with 120 participants and compared the participants' gaze patterns with those reported in previous work. The results were as follows. First, when interacting with the robot character, the users' gaze patterns showed statistics similar to conversations between humans. Second, an animated mouth on the robot character received longer attention than a stationary one. Third, nonverbal interactions such as leakage cues were also effective with the robot character, and the correct-answer ratios of the cued groups were higher. Finally, gender differences in the users' gaze were observed, especially in the frequency of mutual gaze.
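
The core of the gaze processing module is a per-frame lookup from a gaze point to a body part; a hedged sketch, assuming the module has access to 2D bounding boxes for each part in every frame (names and boxes below are hypothetical), might be:

```python
from collections import Counter

def classify_gaze(gaze_point, part_boxes):
    """Return which robot body part a 2D gaze point falls on, or None.

    part_boxes maps a part name to its current-frame bounding box
    (x_min, y_min, x_max, y_max), so the lookup works whether the
    robot character is stationary or moving.
    """
    gx, gy = gaze_point
    for part, (x0, y0, x1, y1) in part_boxes.items():
        if x0 <= gx <= x1 and y0 <= gy <= y1:
            return part
    return None

def gaze_distribution(samples, boxes_per_frame):
    """Accumulate per-part dwell counts over a session of gaze samples."""
    hits = (classify_gaze(g, b) for g, b in zip(samples, boxes_per_frame))
    return Counter(h for h in hits if h is not None)

frames = [{"eyes": (10, 10, 20, 20), "mouth": (10, 25, 20, 30)}] * 3
gaze = [(15, 12), (15, 27), (50, 50)]
print(gaze_distribution(gaze, frames))  # Counter({'eyes': 1, 'mouth': 1})
```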

Development of A News Event Reenactment System (사건재연 시스템 개발)

  • 윤여천;변혜원;전성규;박창섭
    • Journal of Broadcast Engineering
    • /
    • v.7 no.1
    • /
    • pp.21-27
    • /
    • 2002
  • This paper presents a news event reenactment system (NERS), which generates virtual character animations in a quick and convenient manner. NERS can thus be used to produce computer graphics (CG) scenes of news events that are hard to photograph, such as fires, traffic accidents, and murder cases. Using a large collection of captured motion data and CG model data, the system produces an appropriate animation of virtual characters directly, without any motion capture device or actors at the authoring stage. NERS is designed to make virtual characters move along user-defined paths, stitch motions smoothly, and modify the positions of a character's articulations in a specific frame, so a virtual character can be controlled precisely enough to interact with the virtual environment and other characters. NERS provides both an interactive and a script-based (MEL: Maya Embedded Language) interface so that users can operate the system conveniently. The system has been implemented as a plug-in for the commercial CG tool Maya (Alias/Wavefront) in order to make use of its advanced functions.
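
The abstract does not detail how NERS stitches motions; one standard approach is to cross-fade poses over a blend window at the clip boundary. A minimal sketch, with linear interpolation standing in for per-joint quaternion slerp, follows.

```python
def lerp(a, b, t):
    return [x + (y - x) * t for x, y in zip(a, b)]

def stitch(clip_a, clip_b, blend_frames=10):
    """Concatenate two motion clips, cross-fading poses over a blend window.

    Each clip is a list of poses; each pose is a flat list of joint values.
    Linear interpolation stands in here for per-joint quaternion slerp.
    """
    head = clip_a[:-blend_frames]
    tail = clip_b[blend_frames:]
    blended = [lerp(clip_a[-blend_frames + i], clip_b[i],
                    (i + 1) / blend_frames)
               for i in range(blend_frames)]
    return head + blended + tail

walk = [[float(i)] for i in range(20)]        # toy 1-joint "walk" clip
turn = [[float(100 + i)] for i in range(20)]  # toy 1-joint "turn" clip
print(len(stitch(walk, turn, 5)))             # 35 frames, seamless middle
```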

Application of Virtual Studio Technology and Digital Human Monocular Motion Capture Technology -Based on <Beast Town> as an Example-

  • YuanZi Sang;KiHong Kim;JuneSok Lee;JiChu Tang;GaoHe Zhang;ZhengRan Liu;QianRu Liu;ShiJie Sun;YuTing Wang;KaiXing Wang
    • International Journal of Internet, Broadcasting and Communication
    • /
    • v.16 no.1
    • /
    • pp.106-123
    • /
    • 2024
  • This article takes the talk show "Beast Town" as an example to introduce the overall technical solution, the technical difficulties, and the countermeasures involved in combining cartoon virtual characters with virtual studio technology, providing reference experience for multi-scenario applications of digital humans. Compared with earlier live broadcasts mixing the real and the virtual, we further upgraded our virtual production and digital-human driving technology, adopting industry-leading real-time virtual production and monocular-camera driving techniques to launch the virtual cartoon character talk show "Beast Town", which blends reality and virtuality, enhances program immersion and the audio-visual experience, and expands the boundaries of virtual production. In the talk show, motion capture is used for final picture synthesis: the virtual scene must present dynamic effects while the digital human is driven in real time and moves with the push, pull, and pan of the overall picture. This places very high demands on multi-party data synchronization, real-time driving of the digital human, and rendering of the composited picture. We focus on issues such as the docking of virtual and real data and the quality of monocular-camera motion capture, and we combine outward camera tracking, multi-scene picture perspective, multi-machine rendering, and other solutions to effectively solve picture-linkage and rendering-quality problems in a deeply immersive space environment, presenting users with visual effects in which the digital humans and live guests move in concert.
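
The multi-party synchronization problem the abstract mentions amounts to aligning independently timestamped streams (mocap, camera tracking) on a common clock before rendering. A hedged sketch of nearest-timestamp pairing, with entirely illustrative data, is shown below; the production system's actual protocol is not described in the abstract.

```python
from bisect import bisect_left

def nearest(stream, t):
    """Pick the sample in a timestamped stream closest to time t."""
    times = [s[0] for s in stream]
    i = bisect_left(times, t)
    candidates = stream[max(i - 1, 0):i + 1]
    return min(candidates, key=lambda s: abs(s[0] - t))

def synchronize(mocap, camera_track):
    """Pair each camera-tracking sample with the nearest mocap frame,
    so the digital human and the virtual camera move on a common clock."""
    return [(cam, nearest(mocap, cam[0])) for cam in camera_track]

mocap = [(0.00, "poseA"), (0.04, "poseB"), (0.08, "poseC")]  # ~25 fps
camera = [(0.01, "camA"), (0.05, "camB")]
for cam, pose in synchronize(mocap, camera):
    print(cam, "->", pose)
```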

Character Motion Control by Using Limited Sensors and Animation Data (제한된 모션 센서와 애니메이션 데이터를 이용한 캐릭터 동작 제어)

  • Bae, Tae Sung;Lee, Eun Ji;Kim, Ha Eun;Park, Minji;Choi, Myung Geol
    • Journal of the Korea Computer Graphics Society
    • /
    • v.25 no.3
    • /
    • pp.85-92
    • /
    • 2019
  • A 3D virtual character playing a role in digital storytelling has a unique style in its appearance and motion. Because that style reflects the character's personality, preserving it and keeping it consistent is very important. However, when the character's motion is directly controlled by the motion of a user wearing motion sensors, the unique style can be lost. We present a novel character motion control method that preserves the style of the character's motion by using only a small amount of animation data created specifically for that character. Instead of machine learning approaches that require a large amount of training data, we suggest a search-based method that directly searches the animation data for the character pose most similar to the user's current pose. To show the usability of our method, we conducted experiments with a character model and animation data created by an expert designer for a virtual reality game. To demonstrate that our method preserves the original motion style of the character, we compared our results with results obtained using general human motion capture data. In addition, to show the scalability of our method, we present experimental results with different numbers of motion sensors.
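
The search-based core is a nearest-neighbor lookup: project each full-body animation pose down to the joints the sensors cover, then return the pose whose projection best matches the current sensor readings. A minimal sketch, with a simplified feature extraction and Euclidean distance standing in for whatever the paper actually uses:

```python
import math

def distance(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def sensor_features(full_pose, sensor_joints):
    """Project a full-body pose down to the joints the sensors cover."""
    return [full_pose[j] for j in sensor_joints]

def nearest_pose(user_sensors, animation_data, sensor_joints):
    """Brute-force nearest-neighbor search over the character's own clips,
    so the retrieved pose always keeps the character's animation style."""
    return min(animation_data,
               key=lambda pose: distance(user_sensors,
                                         sensor_features(pose, sensor_joints)))

# Toy data: 5-joint poses, sensors on joints 0 and 4 only.
animation_data = [[0.0, 0.2, 0.5, 0.1, 1.0],
                  [0.9, 0.1, 0.3, 0.7, 0.2]]
print(nearest_pose([0.85, 0.25], animation_data, sensor_joints=[0, 4]))
```

Because the search never leaves the character's own animation set, the output is guaranteed to stay on-style, which is the point the abstract emphasizes.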

Computing Fast Secondary Skin Deformation of a 3D Character using GPU (GPU를 이용한 3차원 캐릭터의 빠른 2차 피부 변형 계산)

  • Kim, Jong-Hyuk;Choi, Jung-Ju
    • Journal of the Korea Computer Graphics Society
    • /
    • v.18 no.2
    • /
    • pp.55-62
    • /
    • 2012
  • This paper presents a new method for representing secondary deformation effects using a simple mass-spring simulation on the GPU vertex shader. For each skin vertex of a 3D character, a zero-length spring connects it to a virtual vertex, which is the vertex actually rendered. When a skin vertex changes its position and velocity according to the character's motion, the position of the corresponding virtual vertex is computed by mass-spring simulation in parallel on the GPU. The proposed method very quickly represents the secondary deformation that conveys the material properties of a character's skin during animation. Applied dynamically, the technique can reproduce the squash-and-stretch and follow-through effects frequently seen in traditional 2D animation with a very small amount of additional computation. The method is suitable for representing the elastic skin deformation of a virtual character in interactive animation environments such as games.
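
A CPU sketch of the per-vertex update makes the zero-length spring idea concrete (the paper evaluates this per vertex, in parallel, on the vertex shader). When the skin vertex jumps with the skeleton, the rendered vertex lags and overshoots, producing follow-through. The spring constants below are illustrative, not the paper's.

```python
def step(virtual_pos, virtual_vel, skin_pos,
         k=120.0, damping=8.0, mass=1.0, dt=1.0 / 60.0):
    """Semi-implicit Euler update for one vertex's zero-length spring.

    The spring's rest length is zero, so the force simply pulls the
    rendered (virtual) vertex back toward its skin vertex.
    """
    force = [-k * (p - s) - damping * v
             for p, s, v in zip(virtual_pos, skin_pos, virtual_vel)]
    vel = [v + f / mass * dt for v, f in zip(virtual_vel, force)]
    pos = [p + v * dt for p, v in zip(virtual_pos, vel)]
    return pos, vel

# Skin vertex jumps to x=1 at t=0; watch the rendered vertex follow through.
pos, vel, skin = [0.0], [0.0], [1.0]
for frame in range(5):
    pos, vel = step(pos, vel, skin)
    print(f"frame {frame}: x = {pos[0]:.3f}")
```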

Facial Feature Based Image-to-Image Translation Method

  • Kang, Shinjin
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.14 no.12
    • /
    • pp.4835-4848
    • /
    • 2020
  • The recent expansion of the digital content market is increasing the technical demand for various facial image transformations within virtual environments. Recent image translation technology enables changes between various domains. However, current image-to-image translation techniques do not provide stable performance through unsupervised learning, especially for shape learning in face transitions. This is because the face is a highly sensitive feature: the quality of the resulting image degrades significantly if the transitions in the eyes, nose, and mouth are not performed effectively. We herein propose a new unsupervised method that can transform an in-the-wild face image into another face style through radical transformation. Specifically, the proposed method applies two face-specific feature loss functions to a generative adversarial network. The proposed technique shows that stable domain conversion to other domains is possible while maintaining the image characteristics of the eyes, nose, and mouth.
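
The abstract does not reproduce its two loss functions; as a hedged illustration of the general pattern, one face-specific term could penalize landmark displacement between the input face and the translated face so that eyes, nose, and mouth stay aligned. The tiny landmark network and the weight 10.0 below are stand-in assumptions, not the paper's method.

```python
import torch
import torch.nn as nn

class TinyLandmarkNet(nn.Module):
    """Placeholder for a pretrained facial-landmark extractor."""
    def __init__(self, n_landmarks=5):
        super().__init__()
        self.net = nn.Sequential(nn.Flatten(),
                                 nn.Linear(3 * 64 * 64, 2 * n_landmarks))

    def forward(self, x):
        return self.net(x)

def landmark_loss(landmark_net, real, fake):
    """L1 distance between landmark predictions on input and output faces."""
    with torch.no_grad():
        target = landmark_net(real)
    return nn.functional.l1_loss(landmark_net(fake), target)

landmark_net = TinyLandmarkNet()
real = torch.rand(2, 3, 64, 64)   # batch of input faces
fake = torch.rand(2, 3, 64, 64)   # generator output (stand-in)
adv_loss = torch.tensor(0.7)      # adversarial term from the discriminator
total = adv_loss + 10.0 * landmark_loss(landmark_net, real, fake)
print(total.item())
```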

Comparison of the Size of objects in the Virtual Reality Space and real space (가상현실 공간상에서 물체의 크기와 실제 크기간의 비교연구)

  • Kim, Yun-Jung
    • Cartoon and Animation Studies
    • /
    • s.49
    • /
    • pp.383-398
    • /
    • 2017
  • Virtual reality content is being used as a medium in various fields. For virtual reality content to be realistic, the scale of objects in virtual reality must match their actual size, and the user must perceive them at that size. However, even if an object in virtual reality space is modeled at the same scale as its real counterpart, size distortion can occur when the user views the object through an HMD. In this paper, I investigate the requirements related to size in virtual reality, examine how these requirements differ between virtual reality and the real world, and study how the differences affect users. Experiments and surveys comparing the size of objects in virtual reality space with the size of objects in real space were conducted to investigate how scale distortion occurs at far and near distances. I hope this paper will be useful research for virtual reality developers.
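
Perceived size is governed by visual angle, theta = 2 * atan(size / (2 * distance)): geometry rendered at the correct scale still looks wrong if the effective viewing distance in the HMD differs from what the user assumes. A short worked example (the character height and distances are illustrative):

```python
import math

def visual_angle_deg(size_m, distance_m):
    """Visual angle subtended by an object of a given size at a distance."""
    return math.degrees(2 * math.atan(size_m / (2 * distance_m)))

# A 1.7 m tall character seen from 3 m, vs. from 2.5 m (e.g. if the
# HMD's rendered viewpoint sits closer than the user assumes).
print(f"{visual_angle_deg(1.7, 3.0):.1f} deg at 3.0 m")   # ~31.6 deg
print(f"{visual_angle_deg(1.7, 2.5):.1f} deg at 2.5 m")   # ~37.6 deg
```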