• Title/Summary/Keyword: Virtual actor

26 search results

Standard Model for Live Actor and Entity Representation in Mixed and Augmented Reality (혼합증강현실에서 라이브 행동자와 실체 표현을 위한 표준 모델)

  • Yoo, Kwan-Hee
    • Journal of Broadcast Engineering
    • /
    • v.21 no.2
    • /
    • pp.192-199
    • /
    • 2016
  • Mixed and augmented reality techniques deal with mixing content from the real and virtual worlds, spanning augmented reality and augmented virtuality while excluding the purely real and purely virtual extremes. If a live actor and entity moving in the real world can be embedded naturally into a 3D virtual world, advanced applications such as 3D tele-presence and 3D virtual experiential education become possible. In this paper, we therefore propose a standard model that supports embedding a live actor and entity into 3D virtual space and enabling interaction between them. Natural embedding and interaction of the live actor and entity can then be performed based on the proposed model.

The Design and Implementation of a Depth Map-based Real-time Virtual Image Synthesis System (깊이 맵 기반의 실시간 가상 영상합성 시스템의 설계 및 구현)

  • Lee, Hye-Mi;Ryu, Nam-Hoon;Roh, Gwhan-Sung;Kim, Eung-Kon
    • The Journal of the Korea institute of electronic communication sciences
    • /
    • v.9 no.11
    • /
    • pp.1317-1322
    • /
    • 2014
  • To complete an image, the actual actor's motion must be captured and composited with a virtual environment. Because of the excessive production cost and the lack of post-processing technology, however, this is mostly done by manual labor. The actor performs in a virtual chroma-key studio relying on his own imagination, moving while anticipating collisions with, or reactions to, objects that do not exist. If, during CG compositing, the actor's motion does not match the virtual environment, the original footage may have to be discarded and reshot. This study proposes and implements a depth-based real-time 3D virtual image compositing system to reduce the reshoot rate, shorten production time, and lower production cost. Because the virtual background, 3D models, and the actual actor are composited in real time at the filming site, mutual collisions and reactions can be identified, and the actor's incorrect position or acting can be corrected instantly on the spot.
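The depth-based selection at the heart of such a system — comparing the real camera's depth map with the virtual scene's depth map per pixel and keeping whichever surface is closer — can be sketched as follows (a minimal toy illustration, not the authors' implementation):

```python
def composite_by_depth(real_rgb, real_depth, virt_rgb, virt_depth):
    """Per-pixel compositing: keep whichever source is closer to the camera.
    Images are row-major lists; depth maps hold distances (smaller = closer)."""
    h, w = len(real_depth), len(real_depth[0])
    return [[real_rgb[y][x] if real_depth[y][x] < virt_depth[y][x] else virt_rgb[y][x]
             for x in range(w)]
            for y in range(h)]

# Toy 2x2 frame: the actor (depth 1.0) stands in front of a virtual wall
# (depth 2.0) in the left column; in the right column the wall is closer.
real_rgb   = [["actor", "actor"], ["actor", "actor"]]
virt_rgb   = [["wall", "wall"], ["wall", "wall"]]
real_depth = [[1.0, 3.0], [1.0, 3.0]]
virt_depth = [[2.0, 2.0], [2.0, 2.0]]

frame = composite_by_depth(real_rgb, real_depth, virt_rgb, virt_depth)
# frame == [["actor", "wall"], ["actor", "wall"]]
```

In a real pipeline the two depth maps would come from a depth camera and the renderer's z-buffer; tiny hand-written lists stand in for both here.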

Case Study : Cinematography using Digital Human in Tiny Virtual Production (초소형 버추얼 프로덕션 환경에서 디지털 휴먼을 이용한 촬영 사례)

  • Jaeho Im;Minjung Jang;Sang Wook Chun;Subin Lee;Minsoo Park;Yujin Kim
    • Journal of the Korea Computer Graphics Society
    • /
    • v.29 no.3
    • /
    • pp.21-31
    • /
    • 2023
  • In this paper, we introduce a case study of cinematography using a digital human in virtual production. The case study covers a system overview of LED-based virtual production and an efficient filming pipeline using a digital human. Unlike typical LED virtual production, which mainly projects the background onto the LEDs, here we use a digital human as a virtual actor to film scenes in which it communicates with a real actor. In addition, to film the dialogue scenes between the real actor and the digital human with a real-time engine, we generated the digital human's speech animation automatically in advance by applying our audio- and text-based Korean lip-sync technology. We verified this filming approach by producing short drama content with a real actor and a digital human in an LED-based virtual production environment using a real-time engine.
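Audio/text-driven lip-sync of the kind mentioned above boils down to mapping a phoneme sequence onto mouth-shape (viseme) keyframes over time. The sketch below illustrates only that idea; the phoneme set, viseme names, and timing are invented for illustration and are not the authors' Korean lip-sync technology:

```python
# Hypothetical phoneme-to-viseme table (illustrative only).
PHONEME_TO_VISEME = {
    "a": "open",   "i": "wide",  "u": "round",
    "m": "closed", "b": "closed", "s": "teeth",
}

def lipsync_keyframes(phonemes, frames_per_phoneme=3):
    """Expand a phoneme sequence into per-frame viseme keyframes,
    falling back to a neutral mouth shape for unknown phonemes."""
    frames = []
    for p in phonemes:
        viseme = PHONEME_TO_VISEME.get(p, "neutral")
        frames.extend([viseme] * frames_per_phoneme)
    return frames

frames = lipsync_keyframes(["a", "m", "i"])
# frames == ["open", "open", "open", "closed", "closed", "closed",
#            "wide", "wide", "wide"]
```

A production system would derive phoneme timings from the audio track rather than assuming a fixed duration per phoneme, and would blend between visemes instead of switching discretely.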

A Study on Real-time Graphic Workflow For Achieving The Photorealistic Virtual Influencer

  • Haitao Jiang
    • International journal of advanced smart convergence
    • /
    • v.12 no.1
    • /
    • pp.130-139
    • /
    • 2023
  • Computer-generated virtual influencers are increasingly popular, especially on social media. The famous virtual influencer characters Lil Miquela and Imma were both created with CGI graphics workflows. That process is typically linear: iteration is challenging and costly, development efforts are frequently siloed from one another, and it does not provide a real-time interactive experience. A previous study proposed a real-time graphic workflow for the Digital Actor Hologram project, but its output quality fell short of the results obtained from the CGI workflow. This paper therefore proposes a real-time engine graphic workflow for virtual influencers that supports both real-time interactive functions and realistic graphic quality. The workflow comprises four processes: Facial Modeling, Facial Texture, Material Shader, and Look-Development. A performance analysis against the real-time graphical workflow for Digital Actor Hologram demonstrates the usefulness of this result, and the research should make virtual influencer production more efficient.

Context-Driven Framework for High Level Configuration of Virtual Businesses (가상기업의 형성을 위한 컨텍스트 기반 프레임워크)

  • Lee, Kyung-Huy;Oh, Sang-Bong
    • Journal of Information Technology Applications and Management
    • /
    • v.14 no.2
    • /
    • pp.11-26
    • /
    • 2007
  • In this paper we suggest a context-driven configuration model of virtual businesses that forms a business network model of role-based, interaction-centered business partners. The model uses the subcontext concept, which explicitly represents actors and interactions in a virtual business (VB) context. We separate actors, who have capabilities for tasks in a specific kind of role, from the actor subcontext, which models requirements in a specific interaction subcontext. Three kinds of actors are defined in virtual service chains: service user, service provider, and external service supporter. An interaction subcontext models a service exchange process between two actor subcontexts, taking into account context dependencies such as task and quality dependencies. Each subcontext may be modeled as a situation network, which consists of a finite set of situation nodes and transitions. A specific situation is given in a corresponding context network of actors and interactions. The model is illustrated with a simple example.


Real-time Marker-free Motion Capture System to Create an Agent in the Virtual Space (가상 공간에서 에이전트 생성을 위한 실시간 마커프리 모션캡쳐 시스템)

  • Kim, Sung-Eun;Lee, Ran-Hee;Park, Chang-Joon;Lee, In-Ho
    • Proceedings of the IEEK Conference
    • /
    • 2002.06c
    • /
    • pp.199-202
    • /
    • 2002
  • We describe a real-time 3D computer vision system called MIMIC (Motion Interface for Motion Information Capture) that can capture and save the motion of an actor. The system analyzes input images from vision sensors and searches for features such as the head, hands, and feet. It then estimates intermediate joints, such as the elbow and knee, from this feature information and builds a 3D human model with 20 joints. The virtual human model mimics the motion of the actor in real time. Unlike other marker-free motion capture systems, it can therefore reproduce the actor's movement naturally, because it generates the intermediate joints for the complete human body.
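Estimating an intermediate joint from detected end features — for example, placing an elbow between the shoulder and hand — can be illustrated with simple geometric interpolation. This is a hypothetical stand-in for the idea, not MIMIC's actual estimation algorithm:

```python
def estimate_intermediate_joint(parent, end, t=0.5, bend=(0.0, 0.0, 0.1)):
    """Place an intermediate joint between two detected features by linear
    interpolation, pushed slightly along a bend direction so the limb is
    not perfectly straight. Purely illustrative geometry."""
    return tuple(a + t * (b - a) + o for a, b, o in zip(parent, end, bend))

# Hypothetical shoulder and hand positions (metres); the elbow is assumed
# to sit halfway between them, bent slightly forward along +z.
shoulder = (0.0, 1.5, 0.0)
hand = (0.6, 1.1, 0.0)
elbow = estimate_intermediate_joint(shoulder, hand)
# elbow ≈ (0.3, 1.3, 0.1)
```

A real system would constrain the estimate with limb-length priors and temporal smoothing rather than a fixed midpoint and bend offset.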


A Context-based Multi-Agent System for Enacting Virtual Enterprises (가상기업 지원을 위한 컨텍스트 기반 멀티에이전트 시스템)

  • Lee, Kyung-Huy;Kim, Duk-Hyun
    • The Journal of Society for e-Business Studies
    • /
    • v.12 no.3
    • /
    • pp.1-17
    • /
    • 2007
  • A virtual enterprise (VE) can be mapped onto a multi-agent system (MAS) consisting of various agents with specific roles that communicate with each other to accomplish common goals. However, a MAS for enacting a VE requires a more advanced mechanism, such as context, that can guarantee the autonomy and dynamism of VE members given their heterogeneity and complex structure. This paper suggests a context-based MAS as a platform for constructing and managing virtual enterprises. In the context-based MAS, a VE is a collection of Actors, Interactions (among Actors), Actor Contexts, and Interaction Contexts. Using context — i.e., information about the situation (e.g., goal, role, task, time, location, media) of Actors and Interactions, as well as simple data about their properties — it can raise the speed and correctness of decision-making and the operation of VE enactment. The proposed context-based MAS for VE ('VECoM') consists of a Context Ontology, Context Model, Context Analyzer, and Context Reasoner. The suggested approach and system are validated through an example in which a VE tries to find a partner to join the co-development of a new technology.
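The Actor/Interaction/Context decomposition described in the abstract can be sketched as a few data structures. The class names, fields, and the naive partner-matching rule below are illustrative assumptions, not VECoM's actual design:

```python
from dataclasses import dataclass

@dataclass
class Context:
    """Situation information attached to an actor or interaction."""
    goal: str
    role: str
    task: str = ""
    location: str = ""

@dataclass
class Actor:
    name: str
    context: Context

@dataclass
class Interaction:
    """An exchange between two actors, carrying its own context."""
    initiator: Actor
    responder: Actor
    context: Context

def matching_partners(actors, required_role):
    """Naive partner search: actors whose context role matches the need."""
    return [a for a in actors if a.context.role == required_role]

ve_members = [
    Actor("TechLab", Context(goal="co-development", role="technology provider")),
    Actor("Maker", Context(goal="new product", role="manufacturer")),
]
partners = matching_partners(ve_members, "technology provider")
```

Context reasoning in the paper goes well beyond string equality on a role field, but the structure — situation data attached to both actors and their interactions — is the point of the sketch.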


Mapless Navigation with Distributional Reinforcement Learning (분포형 강화학습을 활용한 맵리스 네비게이션)

  • Van Manh Tran;Gon-Woo Kim
    • The Journal of Korea Robotics Society
    • /
    • v.19 no.1
    • /
    • pp.92-97
    • /
    • 2024
  • This paper studies a distributional perspective on reinforcement learning for application to mobile robot navigation. Mapless navigation algorithms based on deep reinforcement learning have shown promising performance and high applicability. Trial-and-error training is usually carried out in virtual environments, because real-life interactions are expensive. Nevertheless, applying a deep reinforcement learning model to real tasks is challenging because of the mismatch between data collected in virtual simulation and in the physical world, which leads to high-risk behaviors and high collision rates. In this paper, we present a distributional reinforcement learning architecture for mapless navigation of a mobile robot that adapts to the uncertainty of environmental change. The experimental results indicate the superior performance of distributional soft actor-critic compared to conventional methods.
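Distributional RL replaces the scalar value estimate with a full distribution over returns, which is what lets an agent distinguish risky actions from safe ones that share the same mean. A minimal sketch of a categorical return distribution and its expected value (illustrative only, not the paper's distributional soft actor-critic network):

```python
def expected_return(atoms, probs):
    """Expected value of a categorical return distribution."""
    assert abs(sum(probs) - 1.0) < 1e-6, "probabilities must sum to 1"
    return sum(z * p for z, p in zip(atoms, probs))

# Support of the return distribution: five evenly spaced atoms on [-2, 2].
atoms = [-2.0, -1.0, 0.0, 1.0, 2.0]

# Two hypothetical action-value distributions at one navigation step:
probs_forward = [0.3, 0.0, 0.0, 0.0, 0.7]  # risky: possible collision, big progress
probs_turn    = [0.0, 0.0, 0.2, 0.8, 0.0]  # safe: modest but reliable return

q_forward = expected_return(atoms, probs_forward)  # ≈ 0.8
q_turn    = expected_return(atoms, probs_turn)     # ≈ 0.8
# Identical means, but the full distributions expose forward's collision
# risk, which a scalar critic could not distinguish.
```

A distributional critic learns these probability vectors per action; risk-sensitive policies can then penalize the low-return tail rather than acting on the mean alone.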

Following Path using Motion Parameters for Virtual Characters

  • Baek, Seong-Min;Jeong, Il-Kwon;Lee, In-Ho
    • Institute of Control, Robotics and Systems: Conference Proceedings
    • /
    • 2003.10a
    • /
    • pp.1621-1624
    • /
    • 2003
  • This paper presents a new method that uses three motion parameters to generate a path free of collisions with obstacles or other characters, and that automatically creates natural motions for characters confined to that path. Our method uses three parameters: the joint information parameter, the behavior information parameter, and the environment information parameter. The joint information parameters are extracted from the character's joint angle data; this information is used to create a path-following motion by finding a relation function of the parameters for each joint. A user can set behavior information parameters such as velocity, status, and preference; this information is used to create different paths, motions, and collision-avoidance patterns. A user can also build the virtual environment, such as roads and obstacles, which is stored as environment information parameters and used later to generate a collision-free path. The path itself is generated using Hermite curves, with each control point set at an important place.
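A cubic Hermite curve interpolates between two control points given a tangent at each end, which is what makes it convenient for the path generation described above. A minimal sketch of the generic Hermite evaluation (not the authors' path planner):

```python
def hermite_point(p0, p1, m0, m1, t):
    """Cubic Hermite curve: position at parameter t in [0, 1], given
    endpoints p0, p1 and tangents m0, m1 (points as coordinate tuples)."""
    h00 = 2 * t**3 - 3 * t**2 + 1   # basis weight for p0
    h10 = t**3 - 2 * t**2 + t       # basis weight for m0
    h01 = -2 * t**3 + 3 * t**2      # basis weight for p1
    h11 = t**3 - t**2               # basis weight for m1
    return tuple(h00 * a + h10 * ta + h01 * b + h11 * tb
                 for a, b, ta, tb in zip(p0, p1, m0, m1))

# Path segment from (0, 0) to (4, 0) that arcs upward: the start tangent
# points up, the end tangent points down.
path = [hermite_point((0.0, 0.0), (4.0, 0.0), (0.0, 4.0), (0.0, -4.0), t / 10)
        for t in range(11)]
# path[0] == (0.0, 0.0), path[10] == (4.0, 0.0), apex (2.0, 1.0) at t = 0.5
```

Chaining such segments, with the control points placed at the "important places" the abstract mentions and tangents chosen to avoid obstacles, yields a smooth collision-free path.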


A Web-based System for Embedding a Live Actor and Entity using X3DOM (X3DOM 을 이용한 라이브 행동자와 실체를 통합하기 위한 웹 기반 시스템)

  • Chheang, Vuthea;Ryu, Ga-Ae;Jeong, Sangkwon;Lee, Gookhwan;Yoo, Kwan-Hee
    • Proceedings of the Korean Society of Broadcast Engineers Conference
    • /
    • 2016.11a
    • /
    • pp.1-3
    • /
    • 2016
  • Mixed and augmented reality (MAR) refers to a spatially coordinated combination of media/information components, some representing the real world and its objects and others that are virtual, synthetic, and computer-generated, in any combination of aural, visual, and touch modalities. Extensible 3D (X3D) is the ISO standard for defining interactive web-based 3D content integrated with multimedia. In this paper, we propose a model for integrating a live actor and entity, captured with a Microsoft Kinect, into a web-based mixed and augmented reality world using X3DOM, by which X3D nodes can be integrated seamlessly into HTML5 DOM content.
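Embedding X3D content via X3DOM amounts to placing an `<x3d>` element directly in the HTML5 DOM. The sketch below generates such a page as a string; the script and stylesheet URLs follow X3DOM's standard distribution, while the scene content and texture filename are placeholders standing in for the captured live-actor stream:

```python
def x3dom_page(scene_body):
    """Wrap X3D scene markup in a minimal HTML5 page that loads X3DOM."""
    return f"""<!DOCTYPE html>
<html>
<head>
  <script src="https://www.x3dom.org/download/x3dom.js"></script>
  <link rel="stylesheet" href="https://www.x3dom.org/download/x3dom.css"/>
</head>
<body>
  <x3d width="640px" height="480px">
    <scene>
{scene_body}
    </scene>
  </x3d>
</body>
</html>"""

# A flat quad that would carry the captured live-actor texture
# ("actor_frame.png" is a placeholder, not the paper's Kinect stream).
actor_quad = """      <shape>
        <appearance><imagetexture url="actor_frame.png"></imagetexture></appearance>
        <plane size="2 2"></plane>
      </shape>"""

page = x3dom_page(actor_quad)
```

Because the X3D nodes live in the ordinary DOM, the live-actor node can then be updated per frame with standard JavaScript DOM calls, which is the practical appeal of X3DOM over a closed plugin.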
