• Title/Summary/Keyword: Interactive Computer Graphics

Authoring Tool for Mobile Contents based on LASeR (LASeR 기반 모바일 콘텐츠 저작 도구)

  • Kim, Sun-Kyung;Kim, Hee-Sun
    • Journal of Korea Society of Industrial Information Systems / v.13 no.3 / pp.31-37 / 2008
  • MPEG-4 Part 20 LASeR (ISO/IEC 14496-20) is a specification designed to deliver rich media services in a mobile environment. It is an emerging standard intended to replace MPEG-4 BIFS, which was designed for heavyweight, PC-based media content, and it describes the representation of scene information in resource-constrained mobile environments. Unlike BIFS, the LASeR scene description is restricted to conform to the SVG Tiny 1.2 specification, which also makes it efficient to convert one graphics format into another. In this paper, we present the design and implementation of a LASeR authoring system that allows fast and efficient creation of interactive rich media content in a mobile environment. The graphical interface of the authoring system allows users without prior knowledge of the scene description language to create content conveniently and stores the produced scenes in an internal list data structure. The system lets users navigate the internally stored scene objects and export them as structured LASeR XML files.

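The following is a minimal sketch, not the paper's implementation, of the export step the abstract describes: scene objects kept in an internal list structure are walked and written out as a structured, SVG Tiny-style LASeR XML file. The list layout and the element and attribute names are assumptions for illustration.

```python
# Sketch only: internal list of scene objects exported as SVG Tiny-style XML.
import xml.etree.ElementTree as ET

# Internal list data structure: each entry is one scene object (illustrative).
scene_objects = [
    {"type": "rect", "x": 10, "y": 20, "width": 80, "height": 40, "fill": "blue"},
    {"type": "text", "x": 15, "y": 70, "content": "Hello LASeR"},
]

def export_scene(objects, path):
    """Walk the internal list and emit a structured XML scene file."""
    svg = ET.Element("svg", {"xmlns": "http://www.w3.org/2000/svg",
                             "version": "1.2", "baseProfile": "tiny"})
    for obj in objects:
        attrs = {k: str(v) for k, v in obj.items() if k not in ("type", "content")}
        elem = ET.SubElement(svg, obj["type"], attrs)
        if "content" in obj:
            elem.text = obj["content"]
    ET.ElementTree(svg).write(path, encoding="utf-8", xml_declaration=True)

export_scene(scene_objects, "scene.xml")
```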

Interactive Projection by Closed-loop based Position Tracking of Projected Area for Portable Projector (이동 프로젝터 투사영역의 폐회로 기반 위치추적에 의한 인터랙티브 투사)

  • Park, Ji-Young;Rhee, Seon-Min;Kim, Myoung-Hee
    • Journal of KIISE:Software and Applications / v.37 no.1 / pp.29-38 / 2010
  • We propose an interactive projection technique that displays details of a large image at high resolution and brightness by tracking a portable projector. A closed-loop tracking method is presented to update the projected image while the user changes the position of the detail area by moving the portable projector. A marker is embedded in the large image to indicate the position to be occupied by the detail image projected by the portable projector. The marker is extracted from sequential images acquired by a camera attached to the portable projector. The marker position in the large display image is updated under the constraint that the centers of the marker and of the camera frame coincide in every camera frame. The image to be warped and the projective transformation are calculated using the marker's position and shape in the camera frame. The marker's four corner points are determined by a four-step segmentation process consisting of HSI-based camera image preprocessing, edge extraction by the Hough transform, a quadrangle test, and a cross-ratio test. The interactive projection system implemented with the proposed method runs at about 24 fps. In the user study, overall feedback on the system's usability was very positive.
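
As a rough illustration of the warping step described above, the sketch below computes the projective transformation (homography) that maps a detail image onto four marker corners found in the camera frame. It uses OpenCV with made-up corner coordinates and a synthetic image; it is not the paper's tracking pipeline.

```python
# Sketch of image warping via a homography defined by four marker corners.
import cv2
import numpy as np

# Stand-in for the detail image to be projected (plain gray canvas).
detail_img = np.full((300, 400, 3), 200, np.uint8)
h, w = detail_img.shape[:2]

# Four marker corners detected in the camera frame (illustrative pixel values).
marker_corners = np.float32([[120, 80], [420, 95], [430, 330], [110, 320]])
# Corresponding corners of the detail image.
src_corners = np.float32([[0, 0], [w, 0], [w, h], [0, h]])

# Projective transformation mapping the detail image into the marker quad.
H = cv2.getPerspectiveTransform(src_corners, marker_corners)
warped = cv2.warpPerspective(detail_img, H, (640, 480))
cv2.imwrite("warped.png", warped)
```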

Development of Mobile Volume Visualization System (모바일 볼륨 가시화 시스템 개발)

  • Park, Sang-Hun;Kim, Won-Tae;Ihm, In-Sung
    • Journal of KIISE:Computing Practices and Letters / v.12 no.5 / pp.286-299 / 2006
  • Owing to continuing technical progress in modeling, simulation, and sensor devices, huge volume data sets with very high resolution are now common. In scientific visualization, various interactive real-time techniques on high-performance parallel computers have been proposed to render such large-scale volume data sets effectively. In this paper, we present a mobile volume visualization system that consists of mobile clients, gateways, and parallel rendering servers. The mobile clients let users explore regions of interest adaptively at higher resolution levels and interactively specify rendering and viewing parameters, which are sent to the parallel rendering servers. The gateways manage the requests and responses between the mobile clients and the parallel rendering servers to keep the service stable. The parallel rendering servers visualize the specified sub-volume using the rendering contexts received from the clients and transfer high-quality final images back. The proposed system lets multiple PDA users simultaneously share regions of common interest in a huge volume, rendering contexts, and final images in a CSCW (Computer Supported Cooperative Work) mode.
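
A minimal sketch, with assumed field names and message framing, of the client-gateway-server exchange the abstract describes: the mobile client sends a region of interest and rendering/viewing parameters through a gateway and receives the final rendered image back.

```python
# Sketch of the request/response exchange between a mobile client and a gateway.
import json
import socket

request = {
    "roi":        {"x": 128, "y": 64, "z": 256, "size": 64},   # sub-volume of interest
    "resolution": 2,                                           # adaptive detail level
    "view":       {"azimuth": 30.0, "elevation": 15.0, "zoom": 1.5},
    "transfer_function": "bone",
}

def send_request(gateway_host, gateway_port, req):
    """Forward rendering parameters to the gateway and receive the rendered image bytes."""
    with socket.create_connection((gateway_host, gateway_port)) as sock:
        sock.sendall(json.dumps(req).encode() + b"\n")
        chunks = []
        while True:
            data = sock.recv(4096)
            if not data:
                break
            chunks.append(data)
    return b"".join(chunks)   # image rendered by the parallel server, relayed by the gateway
```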

The Simulator Design for the Analysis of Aircraft Longitudinal Dynamic Characteristics (항공기 세로 동특성 해석을 위한 시뮬레이터 설계)

  • Yoon, Sun-Ju
    • Journal of the Korea Computer Industry Society / v.7 no.4 / pp.427-436 / 2006
  • The state-space method for analyzing the dynamic characteristics of a body in motion is set up as a mathematical tool for solving differential equations by computer. A system is represented as a simple matrix calculation, and the same form of model applies to linear or nonlinear, time-variant or time-invariant, and single-variable or multi-variable systems. State-space analysis requires complicated vector calculations, but these can be simplified with the functions of a software package. With the recent maturity of graphical user interface software, it has become straightforward to simulate the dynamic characteristics of a state-space model with interactive graphics. The purpose of this study is to develop a simulator for the educational analysis of the dynamic characteristics of body motion and of the longitudinal dynamic characteristics of an aircraft; in particular, the simulator is designed for analyzing the transient response of aircraft longitudinal stability.

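The state-space formulation mentioned in the abstract can be illustrated with a small worked example: the system x′ = Ax + Bu with output y = Cx is integrated numerically to obtain a transient (step) response. The matrices below are illustrative short-period-style values, not the aircraft data used in the paper.

```python
# Sketch: numerical step response of a small state-space model x' = Ax + Bu, y = Cx.
import numpy as np

A = np.array([[-0.7,  1.0],     # states: angle of attack, pitch rate (illustrative)
              [-5.0, -1.2]])
B = np.array([[-0.1],
              [-6.0]])          # input: elevator deflection
C = np.eye(2)

def step_response(A, B, C, t_end=10.0, dt=0.01, u=1.0):
    """Forward-Euler integration of x' = A x + B u for a constant input u."""
    n = int(t_end / dt)
    x = np.zeros((A.shape[0], 1))
    ys = []
    for _ in range(n):
        x = x + dt * (A @ x + B * u)
        ys.append((C @ x).ravel())
    return np.array(ys)

response = step_response(A, B, C)
print(response[-1])   # state values at the end of the transient
```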

Interactive Motion Retargeting for Humanoid in Constrained Environment (제한된 환경 속에서 휴머노이드를 위한 인터랙티브 모션 리타겟팅)

  • Nam, Ha Jong;Lee, Ji Hye;Choi, Myung Geol
    • Journal of the Korea Computer Graphics Society / v.23 no.3 / pp.1-8 / 2017
  • In this paper, we introduce a technique to retarget human motion data to a humanoid body in a constrained environment. We assume that the given motion data includes detailed interactions such as holding an object by hand or avoiding obstacles. We also assume that the humanoid joint structure is different from the human joint structure and that the shape of the surrounding environment differs from that at the time of the original motion. Under such conditions, a retargeting technique that considers only the change of body shape has difficulty preserving the context of the interactions shown in the original motion data. Our approach is to separate the problem into two smaller problems and solve them independently: retargeting the motion data to a new skeleton, and preserving the context of the interactions. We first retarget the given human motion data to the target humanoid body, ignoring the interaction with the environment. Then, we deform the shape of the environment model so that it matches the humanoid motion and the original interaction is reproduced. Finally, we set spatial constraints between the humanoid body and the environment model and restore the environment model to its original shape. To demonstrate the usefulness of our method, we conducted an experiment using Boston Dynamics' Atlas robot. We expect that our method can also help with humanoid motion-tracking problems in the future.
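
The sketch below illustrates, with invented geometry, the kind of spatial constraint the abstract mentions: an interaction point (for example, where the hand grips a handle) is stored in the environment object's local frame so that the corresponding body target can be recovered when the object's pose or shape differs from the original scene. It is a simplified stand-in, not the paper's solver.

```python
# Sketch: storing and recovering a contact point relative to an object's local frame.
import numpy as np

def local_coords(point, obj_origin, obj_axes):
    """Express a world-space contact point in an object's local frame."""
    return np.linalg.solve(obj_axes, point - obj_origin)

def world_coords(local_point, obj_origin, obj_axes):
    """Map the stored local contact back to world space for the new object."""
    return obj_origin + obj_axes @ local_point

# Original scene: handle at (1, 0, 1) on an object at the origin.
orig_origin, orig_axes = np.zeros(3), np.eye(3)
contact_local = local_coords(np.array([1.0, 0.0, 1.0]), orig_origin, orig_axes)

# New scene: the object is moved and scaled; the hand target follows it.
new_origin = np.array([0.5, 0.0, 0.2])
new_axes = np.diag([1.2, 1.0, 0.8])          # non-uniform scale of the object
hand_target = world_coords(contact_local, new_origin, new_axes)
print(hand_target)
```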

The Effect of Data-Guided Artificial Wind in a Yacht VR Experience on Positive Affect (요트 VR 체험에서 데이터 기반의 인공풍이 정적 정서에 미치는 영향)

  • Cho, Yesol;Lee, Yewon;Lim, Dojeon;Ryoo, Taedong;Jonas, John Claud;Na, Daeyoung;Han, Daseong
    • Journal of the Korea Computer Graphics Society / v.28 no.3 / pp.67-77 / 2022
  • The touch of natural wind is one of the most familiar sensations that people experience in daily life. However, little research has addressed how natural wind can be reproduced in a VR environment and whether multisensory content equipped with artificial wind actually improves human emotion. To address these issues, we propose a wind-reproduction VR system guided by video and wind-capture data and study its effect on positive affect. We collected wind direction and speed data together with a 360-degree video on a yacht, and these data were used by our wind-reproduction system to produce a multisensory VR environment. Nineteen college students participated in the experiments, in which the Korean version of the Positive and Negative Affect Schedule (K-PANAS) was used to measure their emotions. Through the K-PANAS, we found that the 'inspired' and 'active' emotions increase significantly after experiencing the yacht VR content with artificial wind. Our experimental results also show that another emotion, 'interested', is the one most notably affected by the presence of the wind. The presented system can be used effectively in various VR applications such as interactive media and experiential content.
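
Below is a minimal sketch, with made-up device details, of the data-guided idea described above: recorded wind speed and direction samples, time-aligned with the 360-degree video, are mapped to left/right fan intensities to reproduce the wind. The mapping and constants are assumptions, not the paper's hardware setup.

```python
# Sketch: mapping captured wind samples to two fan intensities.
import math

# (time in seconds, wind speed in m/s, direction in degrees relative to the bow)
wind_log = [(0.0, 2.1, 350.0), (0.5, 2.6, 10.0), (1.0, 3.4, 25.0)]

MAX_SPEED = 8.0   # wind speed mapped to full fan power (assumed)

def fan_command(speed, direction_deg):
    """Split a wind sample into left/right fan intensities in [0, 1]."""
    power = min(speed / MAX_SPEED, 1.0)
    angle = math.radians(direction_deg)
    # Simple panning: wind from the right drives the right fan harder.
    right = power * (1.0 + math.sin(angle)) / 2.0
    left = power * (1.0 - math.sin(angle)) / 2.0
    return left, right

for t, speed, direction in wind_log:
    left, right = fan_command(speed, direction)
    print(f"t={t:.1f}s  left={left:.2f}  right={right:.2f}")
```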

Developing a Visual Programming Language-based Three-dimensional Virtual Reality Authoring Tool to Compose Virtual Interior Space (실내공간구성을 위한 시각 프로그래밍 언어 기반 3차원 가상현실 저작도구 개발에 관한 연구)

  • Park Hyeon-Soo;Park Sungjun;Kim Jee-in;Park Jae Wan
    • Korean Institute of Interior Design Journal / v.14 no.5 s.52 / pp.254-261 / 2005
  • This paper presents an attempt to develop a visual programming language-based 3D virtual reality authoring tool intended for composing virtual interior space. The rapid development of digital technology and the widespread use of the Internet have expanded the uses of virtual reality in applications ranging from interior design to building maintenance. In particular, the construction of cyber spaces based on existing interior spaces is becoming increasingly important. Current research, however, remains at the level of converting 3D models into virtual reality models, despite practitioners' need for structural space models. Moreover, commercial tools for building virtual reality spaces have the disadvantage of targeting people with professional knowledge of computer programming and computer graphics. Accordingly, the 3D virtual reality authoring tool developed in this research, called the VESL system, enables virtual and structural spaces to be composed easily using intuitive and interactive visual interfaces based on visual programming techniques. The VESL system also provides an XML-based semantic description of interior space, used to describe interior space information. We anticipate that the virtual reality spaces composed with this system will be of considerable use in the fields of architecture and interior design. Further research issues identified at the end of the study include developing a converter/filter for VRML, the standard language for virtual reality on the Internet, and evaluating the system's application for practical use.
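
As an illustration of the XML-based semantic description mentioned above, the sketch below emits a small interior-space description. The schema (element and attribute names) is invented here for illustration and is not the VESL system's actual format.

```python
# Sketch: emitting a semantic XML description of an interior space.
import xml.etree.ElementTree as ET

space = ET.Element("interior_space", {"name": "studio_apartment"})
room = ET.SubElement(space, "room", {"id": "living", "width": "5.0", "depth": "4.0"})
ET.SubElement(room, "wall", {"side": "north", "material": "concrete"})
ET.SubElement(room, "opening", {"type": "door", "wall": "south", "width": "0.9"})
ET.SubElement(room, "furniture", {"type": "sofa", "x": "1.2", "y": "3.1", "rotation": "90"})

ET.ElementTree(space).write("interior.xml", encoding="utf-8", xml_declaration=True)
```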

A Study on Web-based Collaborative CAD System (웹 기반 협동 CAD시스템에 관한 연구)

  • 윤보열;김응곤
    • Journal of the Korea Institute of Information and Communication Engineering / v.4 no.4 / pp.767-773 / 2000
  • As computer systems and communication technologies develop rapidly, CSCW (Computer Supported Cooperative Work) systems have appeared, through which people can work in a virtual space without restrictions of time and place. Most CSCW systems depend on a special network and groupware, and collaborative graphics and CAD systems are rare because they are tied to specific hardware and application software. In this paper, we propose a web-based collaborative CAD system that supports joint work over the WWW, independent of any platform. It can easily create and modify 3D objects using VRML and the Java 3D API, and it can send, print, and store them. Interactive design work can also be carried out while chatting with other participants. The system runs in a client/server architecture. Clients connect to the CAD server through a Java applet on the WWW. The server is implemented as a Java application and consists of three components: a connection manager, which controls user connections; a work manager, which keeps views concurrent and provides a virtual workspace shared with others; and a solid modeler, which creates 3D objects.

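The sketch below mirrors, in Python rather than the paper's Java, the three-part server structure described above: a connection manager that registers clients, a work manager that rebroadcasts scene updates so all views stay consistent, and a solid modeler that turns commands into 3D primitives. The message format and class names are assumptions for illustration.

```python
# Sketch: a tiny collaborative server with connection manager, work manager, and solid modeler.
import json
import socketserver
import threading

clients = []                      # sockets registered by the "connection manager"
clients_lock = threading.Lock()

def solid_modeler(command):
    """Create a primitive description from a client command, e.g. 'box 1 2 3'."""
    kind, *dims = command.split()
    return {"type": kind, "dims": [float(d) for d in dims]}

class CADHandler(socketserver.StreamRequestHandler):
    def handle(self):
        with clients_lock:
            clients.append(self.wfile)          # connection manager: register this client
        for line in self.rfile:                 # one modeling command per line
            obj = solid_modeler(line.decode().strip())
            update = (json.dumps(obj) + "\n").encode()
            with clients_lock:                  # work manager: broadcast the update to all views
                for w in clients:
                    w.write(update)
                    w.flush()

if __name__ == "__main__":
    with socketserver.ThreadingTCPServer(("0.0.0.0", 9000), CADHandler) as srv:
        srv.serve_forever()
```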

Development of An Interactive System Prototype Using Imitation Learning to Induce Positive Emotion (긍정감정을 유도하기 위한 모방학습을 이용한 상호작용 시스템 프로토타입 개발)

  • Oh, Chanhae;Kang, Changgu
    • The Journal of Korea Institute of Information, Electronics, and Communication Technology / v.14 no.4 / pp.239-246 / 2021
  • In the fields of computer graphics and HCI, there are many studies on systems that create characters and interact with users naturally. Such studies have focused on the character's response to the user's behavior, and designing character behavior that elicits positive emotions from the user remains a difficult problem. In this paper, we develop a prototype of an interaction system that elicits positive emotions from users according to the movement of a virtual character, using artificial intelligence techniques. The proposed system is divided into face recognition and motion generation for the virtual character. A depth camera is used for face recognition, and the recognized data are passed to the motion generation module. We use imitation learning as the learning model. In motion generation, random actions are performed at first in response to the user's facial expression data, and the actions that elicit positive emotions from the user are learned through continuous imitation learning.
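
The following is a deliberately simplified stand-in for the interaction loop described above: the character starts with random motions and gradually prefers the motions that were followed by positive facial expressions. It is a bandit-style sketch under assumed action names and scores, not the paper's imitation-learning model or its depth-camera pipeline.

```python
# Sketch: preferring character motions that elicited positive expressions.
import random

ACTIONS = ["wave", "nod", "dance", "bow"]
scores = {a: 0.0 for a in ACTIONS}     # running estimate of elicited positivity
counts = {a: 0 for a in ACTIONS}

def choose_action(epsilon=0.2):
    """Mostly exploit the best-scoring motion, sometimes explore a random one."""
    if random.random() < epsilon or all(c == 0 for c in counts.values()):
        return random.choice(ACTIONS)
    return max(scores, key=scores.get)

def update(action, positivity):
    """positivity in [0, 1], e.g. a smile score derived from facial expression data."""
    counts[action] += 1
    scores[action] += (positivity - scores[action]) / counts[action]

# One simulated interaction step (a real system would read the depth camera here).
a = choose_action()
update(a, positivity=random.random())
print(a, scores[a])
```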

Shadow Removal in Front Projection System using a Depth Camera (깊이 카메라를 이용한 전방 프로젝션 환경에서 그림자 제거)

  • Kim, Jaedong;Seo, Hyunggoog;Cha, Seunghoon;Noh, Junyong
    • Journal of the Korea Computer Graphics Society / v.21 no.3 / pp.1-10 / 2015
  • One way to create a visually immersive environment is to use a front projection system. In particular, when there is not enough space behind the screen to install a back projection system, front projection is the appropriate choice. A drawback of front projection, however, is shadow interference: a shadow is cast on the screen when the user stands between the screen and the projector. This shadow can degrade the user experience and reduce the sense of immersion by hiding important information. There have been various attempts to eliminate shadows cast on the screen by using multiple projectors that compensate for each other's missing information. In this mutual compensation there is a trade-off between computation time and accuracy: accurate estimation of the shadow usually requires heavy computation, while simple approaches suffer from including non-shadow regions in the result. We propose a novel approach to removing shadows in a front projection system using the skeleton data obtained from a depth camera. The skeleton data help extract the shape of the shadow cast by the user accurately without requiring much computation. Our method also uses a distance field to remove the afterimage of the shadow that may occur when the user moves. We verify the effectiveness of our system through various experiments in an interactive environment created by a front projection system.
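
As a rough sketch of the idea described above, the code below rasterizes projected skeleton joints into a body mask in projector-image coordinates and uses a distance field to add a margin that hides afterimages while the user moves. Joint positions, bone thickness, and thresholds are invented for illustration and are not the paper's calibration or values.

```python
# Sketch: skeleton-based shadow mask with a distance-field safety margin.
import cv2
import numpy as np

PROJ_W, PROJ_H = 1280, 720

# Skeleton joints already mapped into projector-image coordinates (illustrative).
joints = {"head": (640, 160), "torso": (640, 330), "l_hand": (480, 300), "r_hand": (800, 300)}
bones = [("head", "torso"), ("torso", "l_hand"), ("torso", "r_hand")]

mask = np.zeros((PROJ_H, PROJ_W), np.uint8)
for a, b in bones:
    cv2.line(mask, joints[a], joints[b], color=255, thickness=60)  # thick "body" strokes

# Distance (in pixels) from every background pixel to the body mask.
dist = cv2.distanceTransform(cv2.bitwise_not(mask), cv2.DIST_L2, 5)

# Pixels closer than 40 px to the body are also treated as shadow, giving a
# safety margin against afterimages when the user moves.
shadow_region = dist < 40
print("masked pixels:", int(shadow_region.sum()))
```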