• Title/Summary/Keyword: Interactive animation

A 3D Audio-Visual Animated Agent for Expressive Conversational Question Answering

  • Martin, J.C.;Jacquemin, C.;Pointal, L.;Katz, B.
    • 한국정보컨버전스학회:학술대회논문집 / 2008.06a / pp.53-56 / 2008
  • This paper reports on the ACQA (Animated agent for Conversational Question Answering) project conducted at LIMSI. The aim is to design an expressive animated conversational agent (ACA) for conducting research along two main lines: (1) perceptual experiments (e.g., perception of expressivity and 3D movements in both the audio and visual channels); (2) design of human-computer interfaces requiring head models at different resolutions and the integration of the talking head into virtual scenes. The target application of this expressive ACA is RITEL, a real-time speech-based question-and-answer system developed at LIMSI. The architecture of the system is based on distributed modules exchanging messages through a network protocol. The main components are: RITEL, a question-and-answer system searching raw text, which produces a text (the answer) and attitudinal information; this attitudinal information is then processed to deliver expressive tags; the text is converted into phoneme, viseme, and prosodic descriptions. Audio speech is generated by the LIMSI selection-concatenation text-to-speech engine. Visual speech uses MPEG-4 keypoint-based animation and is rendered in real time by Virtual Choreographer (VirChor), a GPU-based 3D engine. Finally, visual and audio speech is played in a 3D audio-visual scene. The project also devotes considerable effort to realistic visual and audio 3D rendering: a new model of phoneme-dependent human radiation patterns is included in the speech synthesis system, so that the ACA can move through the virtual scene with realistic 3D visual and audio rendering.

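The abstract describes a pipeline in which the answer text is converted into phoneme, viseme, and prosodic descriptions before audio-visual rendering. The sketch below illustrates only the phoneme-to-viseme step; the phoneme symbols, viseme classes, and mapping table are hypothetical stand-ins, not the actual LIMSI/RITEL data.

```python
# Hypothetical phoneme-to-viseme table; the real LIMSI/RITEL sets differ.
PHONEME_TO_VISEME = {
    "p": "bilabial", "b": "bilabial", "m": "bilabial",
    "f": "labiodental", "v": "labiodental",
    "a": "open", "i": "spread", "u": "rounded",
}

def phonemes_to_visemes(phonemes, durations):
    """Pair each timed phoneme with a viseme class for keypoint animation."""
    track = []
    for ph, dur in zip(phonemes, durations):
        viseme = PHONEME_TO_VISEME.get(ph, "neutral")   # fallback mouth shape
        track.append({"viseme": viseme, "duration": dur})
    return track

# Example: the syllable "pa" spoken over 0.25 s
print(phonemes_to_visemes(["p", "a"], [0.08, 0.17]))
```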

Comic-Book Style Rendering for Game (게임을 위한 코믹북 스타일 렌더링)

  • Kim, Tae-Gyu;Oh, Gyu-Hwan;Lee, Chang-Shin
    • Journal of Korea Game Society / v.7 no.4 / pp.81-92 / 2007
  • Many computer games based on NPR (non-photorealistic rendering) techniques have been developed because of their distinctive visual properties. However, only a limited set of NPR techniques has been exploited in computer games, and among them cartoon-style rendering has attracted particular interest. In this paper, we suggest an effective comic-book-style rendering method applicable to computer games. To do so, we first characterize the properties of comic books by comparing two visual styles: celluloid animation and the comic book. We then suggest a real-time comic-book-style rendering method built from outline sketch, tone, and hatching. Finally, we examine its effectiveness by observing a game developed using the method.

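The method builds the comic-book look from outline sketch, tone, and hatching. As a rough illustration of the tone-and-hatching idea (not the authors' actual shader), the sketch below quantizes diffuse intensity into flat bands and overlays diagonal hatch strokes in dark regions; outlines, which the paper also uses, would typically come from depth or normal edges and are omitted here.

```python
import numpy as np

def comic_shade(intensity, u, v, bands=3, hatch_below=0.35, hatch_period=6):
    """intensity: HxW diffuse term in [0, 1]; u, v: pixel coordinate grids."""
    tone = np.floor(intensity * bands) / bands       # flat, cel-style tone bands
    hatch = ((u + v) % hatch_period) < 1             # diagonal stroke pattern
    dark = intensity < hatch_below
    return np.where(dark & hatch, 0.0, tone)         # ink hatching in shadows

h, w = 64, 64
v, u = np.mgrid[0:h, 0:w]                            # pixel coordinates
intensity = np.tile(np.linspace(0.0, 1.0, w), (h, 1))   # toy lighting ramp
image = comic_shade(intensity, u, v)
```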

Following media development, a Study about the convergence of comics and multimedia (매체발달에 따른 만화의 멀티미디어와의 융합에 관한 연구)

  • Kim, Bo-Hyun;Hong, Nan-Ji
    • Journal of Digital Contents Society / v.13 no.1 / pp.119-127 / 2012
  • In this study, we observe that as comics are digitized and converted to various devices, a variety of experiments combine multimedia elements such as photos, sound, and video with the letters and drawings that compose traditional comics. Judging that a new barometer for comics lies in convergence with such multimedia, we study the concept of multimedia comics as the basis of this work. After identifying the components of currently emerging multimedia comics, we categorize them into three types according to how those elements are used. First, the convergence-type webtoon has a format very similar to the existing vertical-scrolling webtoon; background and effect sounds are added to emphasize its features, or photos and videos are inserted in part, and there is no function to control these elements. Second, the motion comic, a format between comic and animation, auto-plays sound, video, and paging as if watching an animation, but keeps the comic format within one frame. Third, the interactive comic builds its effect sounds, motion, and story through the active participation of viewers. Analyzing comics with these multimedia characteristics yields the following implications: first, multimedia elements should be used according to genre, age, and medium; second, a high level of control technology that considers the characteristics of comic viewers is needed. In other words, in continuously evolving media environments, comic content suited to its target viewers and their purposes of use should be developed; to this end, the multimedia elements of comics should be used so that viewers can communicate with the content actively and interactively.

Research and Development of Interactive Exhibition Contents for 'Sound Light' Exhibition Space in Science Museum (과학관 '소리 빛' 전시공간, 체험형 인터랙션 전시 콘텐츠 연구 개발)

  • Kim, Tae-Wook;Park, Nam-Ki
    • Journal of the Korea Convergence Society / v.11 no.7 / pp.137-144 / 2020
  • Based on the basic concepts and roles of the science museum, we research and develop "Sound Light" interactive exhibition content and a hands-on exhibition space, aimed at exhibiting, teaching, and letting visitors experience scientific principles directly related to daily life, to be installed in the "Sound Light" exhibition space of the Gwangju National Science Museum. The scope of the research is to define the conditions and elements of a museum hands-on exhibition by examining prior research on, and the current state of, existing science museums' experiential content, and, on that basis, to research and develop experiential exhibition scenarios and content for children. The results are as follows. First, interactive hologram experience content is developed on the theme of light and sound. Second, a multi-faceted media facade, projection-mapped by multiple projectors, realizes a visually wide and spectacular screen composition and animation. Third, visitor-oriented exhibits and experiences let various colors and sounds move together and interact with visitors. Finally, interactive content is provided through hologram interfaces and hologram screens, encouraging many visitors to participate actively in viewing rather than simply receiving exhibition information, and promoting revisits. Through this series of studies, it was possible to research and develop content and an experiential exhibition space with the theme-park character that is the current trend in science museums.

3D Flight Path Creation using Sketch Input and Linear Spline Curves (스케치 입력과 선형 스플라인 곡선을 이용한 3D 항공경로 생성 방법)

  • Choi, Jung-Il;Park, Tae-Jin;Sohn, Ei-Sung;Jeon, Jae-Woong;Choy, Yoon-Chul
    • Journal of Korea Multimedia Society / v.13 no.9 / pp.1373-1381 / 2010
  • The flight maneuver diagrams currently used by pilots are based on 2D spatial presentation, so they are limited in displaying 3D flight information and hard to understand intuitively. Flight-animation authoring tools for these diagrams are complex to use and lack useful features such as non-linear editing of flight paths and real-time interactivity across multiple aircraft. This research focuses on a 3D flight-path generation method for an animation system for flight maneuver education. It combines an initial sketch input on the 2D diagram with the thrust of the aircraft to generate a 3D linear spline as close as possible to the real flight. Using the suggested linear-spline creation method, the flight path can be visualized, edited, and animated in real time during flight maneuver briefing and debriefing.
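
A minimal sketch of the two steps the abstract names: lifting a 2D sketched path into 3D using thrust, and evaluating a piecewise-linear spline along the result. The thrust-to-climb-rate model here is an assumed stand-in, not the paper's actual model.

```python
import numpy as np

def lift_to_3d(xy_points, thrust, climb_gain=0.5):
    """xy_points: (N, 2) sketch samples; thrust: (N,) throttle in [0, 1].
    Altitude is accumulated from thrust (hypothetical model)."""
    z = np.cumsum(climb_gain * (thrust - 0.5))       # climb when thrust > 0.5
    return np.column_stack([xy_points, z])

def linear_spline(points, t):
    """Evaluate the piecewise-linear spline through (N, 3) points at t in [0, 1]."""
    n = len(points) - 1
    s = np.clip(t * n, 0, n)
    i = min(int(np.floor(s)), n - 1)
    a = s - i
    return (1 - a) * points[i] + a * points[i + 1]

sketch = np.array([[0, 0], [1, 0], [2, 1], [3, 1]], dtype=float)
path = lift_to_3d(sketch, np.array([0.9, 0.8, 0.6, 0.5]))
print(linear_spline(path, 0.5))                      # mid-point of the fly-through
```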

Computing Fast Secondary Skin Deformation of a 3D Character using GPU (GPU를 이용한 3차원 캐릭터의 빠른 2차 피부 변형 계산)

  • Kim, Jong-Hyuk;Choi, Jung-Ju
    • Journal of the Korea Computer Graphics Society / v.18 no.2 / pp.55-62 / 2012
  • This paper presents a new method to represent the secondary deformation effect using simple mass-spring simulation on the vertex shader of the GPU. For each skin vertex of a 3D character, a zero-length spring connects it to a virtual vertex that is the one actually rendered. When a skin vertex changes its position and velocity according to the character's motion, the position of the corresponding virtual vertex is computed by mass-spring simulation in parallel on the GPU. The proposed method reproduces the secondary deformation effect very quickly, conveying the material properties of the character's skin during animation. Applied dynamically, it can represent the squash-and-stretch and follow-through effects frequently seen in traditional 2D animation with only a very small amount of additional computation. The method is applicable to the elastic skin deformation of a virtual character in interactive animation environments such as games.
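
A CPU sketch of the per-vertex zero-length spring described above. In the paper each vertex runs this update independently in the GPU vertex shader; NumPy vectorization plays the analogous role here, and all constants are illustrative.

```python
import numpy as np

def secondary_deform(virtual_pos, virtual_vel, skin_pos, dt,
                     stiffness=120.0, damping=6.0, mass=1.0):
    """One semi-implicit Euler step of a zero-length spring per vertex.

    virtual_pos, virtual_vel: (N, 3) state of the rendered virtual vertices;
    skin_pos: (N, 3) skinned anchor positions driven by the character motion.
    """
    force = -stiffness * (virtual_pos - skin_pos) - damping * virtual_vel
    virtual_vel = virtual_vel + (force / mass) * dt
    virtual_pos = virtual_pos + virtual_vel * dt
    return virtual_pos, virtual_vel

skin = np.random.rand(4, 3)                  # toy skinned vertices
pos, vel = skin + 0.1, np.zeros((4, 3))      # virtual vertices start displaced
for _ in range(10):                          # they lag behind, then settle
    pos, vel = secondary_deform(pos, vel, skin, dt=1 / 60)
```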

A Study on effective directive technique of 3D animation in Virtual Reality -Focus on Interactive short using 3D Animation making of Unreal Engine- (가상현실에서 효과적인 3차원 영상 연출을 위한 연구 -언리얼 엔진의 영상 제작을 이용한 인터렉티브 쇼트 중심으로-)

  • Lee, Jun-soo
    • Cartoon and Animation Studies / s.47 / pp.1-29 / 2017
  • 360-degree virtual reality has been available for a long time and has recently been actively promoted worldwide thanks to devices such as HMDs (head-mounted displays) and hardware for controlling and playing virtual reality imagery. Producing 360-degree VR requires a different mode of production than traditional video, and new considerations for the user have begun to appear. Since virtual reality imagery targets a platform that demands immersion, presence, and interaction, a suitable cinematography is necessary. In VR, users can freely explore the world created by the director and can concentrate on whatever interests them while the image plays. However, the director must devise and install mechanisms that keep the observer focused on the narrative progression and the images to be delivered. Among the various methods of conveying imagery, the director can use shot composition. This paper studies how to effectively apply directing techniques based on shot composition to 360-degree virtual reality. At present there is still no dominant killer content anywhere in the world; even so, the potential of virtual reality is recognized and diverse imagery is being produced. Production therefore follows traditional methods, and so does shot composition. In 360-degree virtual reality, however, the long take and the blocking techniques of the conventional third-person view serve as the main compositional devices, and the limits of shot composition become apparent. Moreover, while the viewer can interactively look around the 360-degree scene using HMD tracking, the composition and connection of shots remain absolutely dependent on the director, as in existing cinematography. In this study, I investigated whether the viewer can freely change the cinematography, such as the shot composition, at a time of the viewer's choosing, using the interactive nature of VR imagery. To do this, a 3D animation was created with the Unreal Engine game tool to construct an interactive image. Using Unreal Engine's visual scripting, called Blueprint, we built a device that branches on the true or false state of a condition with a trigger node, which produces a variety of shots. Through this, various directing techniques can be developed, related research is expected, and the work should help the development of 360-degree VR imagery.
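
A plain-Python analogue of the Blueprint logic the abstract describes: a trigger node's true/false branch selects between alternative shot compositions. The shot names and the proximity condition are illustrative assumptions, not taken from the paper.

```python
SHOTS = {"wide": "long-take master shot", "close": "close-up insert"}

class TriggerVolume:
    """Spherical trigger region, standing in for an Unreal trigger node."""
    def __init__(self, center, radius):
        self.center, self.radius = center, radius

    def contains(self, viewer_pos):
        return sum((a - b) ** 2
                   for a, b in zip(viewer_pos, self.center)) <= self.radius ** 2

def select_shot(trigger, viewer_pos):
    # Branch node: true -> cut to the close-up, false -> stay on the wide shot
    return SHOTS["close"] if trigger.contains(viewer_pos) else SHOTS["wide"]

trigger = TriggerVolume(center=(0.0, 0.0, 0.0), radius=2.0)
print(select_shot(trigger, (0.5, 0.0, 1.0)))   # viewer steps in -> close-up
print(select_shot(trigger, (5.0, 0.0, 0.0)))   # viewer far away  -> wide shot
```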

The Solution of Content Creation for User Side Using Animation Management System (애니메이션 공정관리시스템을 활용한 사용자 중심 콘텐츠 생성 방안)

  • Lim, Yang-Mi;Kim, Sung-Rea;Kim, Ho-Sung
    • Proceedings of the Korea Contents Association Conference / 2006.11a / pp.163-167 / 2006
  • Previous media delivered content information only from content providers, but current media enables interaction between content providers and general receivers. For this reason, the role of receivers has expanded within multimedia: receivers now both send and receive information as content users, no longer mere receivers. The constituents related to content are content providers, users, and user-created content. This paper introduces "the wonderland", an animation process management system for a high-level user group. Most of the images produced by the animation process are publicly accessible, so the public or the average user group can easily draw on these resources through "the wonderland". The general user thus gets an environment for re-editing a new movie of their own, making the most of the provided scene and cut units according to their own taste and ideas. Realizing this process management system can dramatically expand content, save time when editing digital animation or movies, manage numerous sources, organize the animation process, and handle the resulting output. As such, "the wonderland" should significantly help the creation of future content.

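A minimal sketch of the user-side re-editing idea: assembling a new sequence from provided scene and cut units. The clip metadata and function names are hypothetical, not "the wonderland" API.

```python
from dataclasses import dataclass

@dataclass
class Cut:
    scene: str       # scene the cut belongs to
    cut_id: int      # cut number within the scene
    frames: int      # length in frames

LIBRARY = [Cut("forest", 1, 48), Cut("forest", 2, 36), Cut("castle", 1, 60)]

def assemble(order):
    """Build a user-defined sequence from (scene, cut_id) pairs."""
    index = {(c.scene, c.cut_id): c for c in LIBRARY}
    timeline = [index[key] for key in order if key in index]
    return timeline, sum(c.frames for c in timeline)

timeline, total_frames = assemble([("castle", 1), ("forest", 2)])
print(total_frames)   # 96 frames in the re-edited movie
```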

Interactive Realtime Facial Animation with Motion Data (모션 데이터를 사용한 대화식 실시간 얼굴 애니메이션)

  • Kim, Sung-Ho (김성호)
    • Journal of the Korea Computer Industry Society / v.4 no.4 / pp.569-578 / 2003
  • This paper presents a method in which the user produces a real-time facial animation by navigating a space of facial expressions built from a great number of captured expressions. The core of the method is how to define the distance between facial expressions, how to use it to distribute them in a suitable intuitive space, and a user interface for generating real-time facial expression animation in this space. We created the search space from about 2,400 captured facial expression frames; when the user travels freely through the space, the facial expressions located on the path are displayed in sequence. To distribute the roughly 2,400 captured expressions visually in the space, we need the distance between frames. We use Floyd's algorithm to obtain the all-pairs shortest paths between frames and derive the manifold distance from them. The frames are then distributed in the intuitive space by applying multidimensional scaling to these manifold distances, embedding them in 2D while preserving the original inter-frame distances. The method has the large advantage that navigation for generating facial expression animation is free and unconstrained within the intuitive space, because there are always facial expression frames along the user's path. It is also very efficient: with the easy-to-use interface, the user can review and regenerate the real-time animation as desired.

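A compact sketch of the pipeline the abstract describes: a neighborhood graph over captured frames, Floyd's all-pairs shortest paths as the manifold distance, and classical multidimensional scaling down to 2D. The feature dimensions and neighborhood threshold are toy assumptions.

```python
import numpy as np

def floyd_warshall(d):
    """All-pairs shortest paths over a dense distance matrix (inf = no edge)."""
    for k in range(len(d)):                  # relax every path through vertex k
        d = np.minimum(d, d[:, k:k + 1] + d[k:k + 1, :])
    return d

def classical_mds(d, dim=2):
    """Embed a distance matrix into `dim` dimensions (classical MDS)."""
    n = len(d)
    j = np.eye(n) - np.ones((n, n)) / n      # double-centering matrix
    b = -0.5 * j @ (d ** 2) @ j
    vals, vecs = np.linalg.eigh(b)
    idx = np.argsort(vals)[::-1][:dim]       # keep the largest eigenvalues
    return vecs[:, idx] * np.sqrt(np.maximum(vals[idx], 0))

frames = np.random.rand(30, 15)              # toy stand-in: 30 frames, 15-D features
euclid = np.linalg.norm(frames[:, None] - frames[None, :], axis=-1)
graph = np.where(euclid < np.quantile(euclid, 0.2), euclid, np.inf)
np.fill_diagonal(graph, 0)
manifold = floyd_warshall(graph)
manifold[~np.isfinite(manifold)] = manifold[np.isfinite(manifold)].max()
coords_2d = classical_mds(manifold, dim=2)   # 2D layout of the expression space
```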

Character Motion Control by Using Limited Sensors and Animation Data (제한된 모션 센서와 애니메이션 데이터를 이용한 캐릭터 동작 제어)

  • Bae, Tae Sung;Lee, Eun Ji;Kim, Ha Eun;Park, Minji;Choi, Myung Geol
    • Journal of the Korea Computer Graphics Society / v.25 no.3 / pp.85-92 / 2019
  • A 3D virtual character playing a role in digital storytelling has a unique style in its appearance and motion. Because the style reflects the unique personality of the character, it is very important to preserve that style and keep it consistent. However, when the character's motion is directly controlled by the motion of a user wearing motion sensors, the unique style can be lost. We present a novel character motion control method that uses only a small amount of animation data, created solely for the character, to preserve the style of its motion. Instead of machine learning approaches that require a large amount of training data, we suggest a search-based method, which directly searches the animation data for the character pose most similar to the user's current pose. To show the usability of our method, we conducted experiments with a character model and animation data created by an expert designer for a virtual reality game. To demonstrate that our method preserves the original motion style of the character well, we compared our result with the result obtained using general human motion capture data. In addition, to show the scalability of our method, we present experimental results with different numbers of motion sensors.
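
A minimal sketch of the search-based control described above: among the character's animation poses, find the one whose tracked joints best match the current sensor readings. Joint indices, pose counts, and data are toy stand-ins for the designer-made animation data.

```python
import numpy as np

def nearest_pose(poses, sensor_joints, sensor_readings):
    """poses: (N, J, 3) animation-data poses; sensor_joints: indices of the
    few joints carrying sensors; sensor_readings: (len(sensor_joints), 3)."""
    tracked = poses[:, sensor_joints, :]                    # (N, S, 3)
    err = np.linalg.norm(tracked - sensor_readings, axis=-1).sum(axis=-1)
    return int(np.argmin(err))                              # best-matching frame

poses = np.random.rand(500, 20, 3)          # toy clip: 500 frames, 20 joints
sensor_joints = [0, 7, 11]                  # e.g., head and both hands
reading = poses[123, sensor_joints] + 0.01 * np.random.randn(3, 3)
print(nearest_pose(poses, sensor_joints, reading))          # likely prints 123
```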