• Title/Summary/Keyword: dynamic scene

Search Results: 146

A Study on Web-based Video Panoramic Virtual Reality for the Hoseo Cyber Shell Museum (비디오 파노라마 가상현실을 기반으로 하는 호서 사이버 패류 박물관의 연구)

  • Hong, Sung-Soo;Khan, Irfan;Kim, Chang-ki
    • Annual Conference of KIPS / 2012.11a / pp.1468-1471 / 2012
  • Recreating the experience of a particular place has long been a dream; panoramic virtual reality is a technology for building virtual environments in which the viewer can choose the viewing angle and path through a dynamic scene. In this paper we examine efficient algorithms for registering and stitching images captured from a video stream. Two approaches are studied. In the first, dynamic programming is used to locate suitable key points, the points are matched to merge adjacent images, and image blending is then applied for smooth color transitions. In the second, FAST and SURF detection find distinctive features in the images, a nearest-neighbor algorithm matches corresponding features, and a homography is estimated from the matched key points using RANSAC. The paper also covers automatically selecting (recognizing and comparing) the images to be stitched.
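
A rough sketch of the second approach in Python/OpenCV follows. ORB stands in here for the FAST/SURF combination, since SURF is only available in the non-free opencv-contrib build; the nearest-neighbor matching, ratio test, and RANSAC homography steps follow the abstract.

```python
import cv2
import numpy as np

def stitch_pair(img_left, img_right):
    # Detect keypoints and descriptors. ORB stands in for the paper's
    # FAST+SURF, which requires the non-free opencv-contrib build.
    orb = cv2.ORB_create(nfeatures=2000)
    kp1, des1 = orb.detectAndCompute(img_left, None)
    kp2, des2 = orb.detectAndCompute(img_right, None)

    # Nearest-neighbor matching with Lowe's ratio test.
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING)
    matches = matcher.knnMatch(des1, des2, k=2)
    good = [m for m, n in (p for p in matches if len(p) == 2)
            if m.distance < 0.75 * n.distance]

    # Estimate the homography from matched key points with RANSAC.
    src = np.float32([kp2[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)
    dst = np.float32([kp1[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)

    # Warp the right image into the left image's frame and paste it in.
    h, w = img_left.shape[:2]
    pano = cv2.warpPerspective(img_right, H, (w * 2, h))
    pano[:h, :w] = img_left
    return pano
```

A fuller implementation would replace the hard paste with the blending step the abstract mentions, e.g. feathering along the seam.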

A Study on the Dynamic Painterly Stroke Generation for 3D Animation (3차원 애니메이션을 위한 회화적 스트로크의 동적 관리 기법)

  • Lee, Hyo-Keun;Ryoo, Seung-Taek;Yoon, Kyung-Hyun
    • Journal of Korea Multimedia Society / v.8 no.4 / pp.554-568 / 2005
  • We propose a dynamic stroke generation algorithm that provides frame-to-frame coherence in 3D non-photorealistic animations. We use a 3D particle system to eliminate visual popping in the animated scene. Since the particles are located on the 3D object's surface, coherence is maintained when the object or the camera moves in the scene, and also while the camera zooms in or out. However, the brush strokes on the surface zoom along with the camera, and the resulting strokes, now too large or too small, cannot represent hand-crafted brush strokes. To remove this problem, we propose a stroke generation algorithm that dynamically maintains the number and size of brush strokes during camera zoom.
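
As a loose illustration of the zoom-compensation idea (not the authors' algorithm), the sketch below keeps on-screen stroke density roughly constant by splitting strokes that grow too large and thinning strokes that shrink too small; the names and thresholds are hypothetical.

```python
import random

TARGET_SIZE_PX = 12.0   # desired on-screen stroke size (assumed)

def update_strokes(strokes, zoom):
    """strokes: list of dicts with 'pos' (surface point) and 'size' (world units)."""
    kept = []
    for s in strokes:
        screen_size = s["size"] * zoom
        if screen_size > 2 * TARGET_SIZE_PX:
            # Stroke grew too big on screen: split into two smaller strokes
            # jittered around the original surface position.
            for _ in range(2):
                jitter = tuple(p + random.uniform(-0.01, 0.01) for p in s["pos"])
                kept.append({"pos": jitter, "size": s["size"] / 2})
        elif screen_size < 0.5 * TARGET_SIZE_PX and random.random() < 0.5:
            # Stroke shrank too small: probabilistically drop it to thin density.
            continue
        else:
            kept.append(s)
    return kept
```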


Haptic Rendering Technology for Touchable Video (만질 수 있는 비디오를 위한 햅틱 렌더링 기술)

  • Lee, Hwan-Mun;Kim, Ki-Kwon;Sung, Mee-Young
    • Journal of Korea Multimedia Society / v.13 no.5 / pp.691-701 / 2010
  • We propose a haptic rendering technology for touchable video. The technique allows users to feel the sense of touch while directly probing 2D objects in video scenes, or while manipulating 3D objects brought out from those scenes, using haptic devices. A server sends video and haptic data along with the information for the 3D model objects; clients receive the video and haptic data and render the 3D models. A video scene is divided into a grid of small cells, and each cell carries tactile information corresponding to a specific combination of four attributes: stiffness, damping, static friction, and dynamic friction. Users feel the sense of touch when they touch cells of a scene directly with a haptic device, and they can also examine objects by touching or manipulating the corresponding 3D objects brought out from the screen. The proposed technique lets viewers experience combined haptic-audio-visual effects directly on the video scenes of movies or home-shopping content.
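
The per-cell tactile grid lends itself to a simple data structure. The sketch below is an illustrative reading of the abstract, with assumed field names and a plain spring-damper contact model, not the paper's implementation.

```python
from dataclasses import dataclass

@dataclass
class HapticCell:
    stiffness: float        # N/m
    damping: float          # N*s/m
    static_friction: float
    dynamic_friction: float

class HapticGrid:
    def __init__(self, cols, rows, cell_px, default):
        self.cols, self.rows, self.cell_px = cols, rows, cell_px
        self.cells = [[default for _ in range(cols)] for _ in range(rows)]

    def cell_at(self, x_px, y_px):
        # Map a screen-space probe position to its grid cell.
        return self.cells[int(y_px // self.cell_px)][int(x_px // self.cell_px)]

    def contact_force(self, x_px, y_px, depth, velocity):
        # Spring-damper contact: F = k * depth - b * v, clamped at zero
        # so the surface never pulls the probe inward.
        c = self.cell_at(x_px, y_px)
        return max(0.0, c.stiffness * depth - c.damping * velocity)
```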

Secondary Action based Dynamic Jiggle-Bone Animation (이차 행동 기반의 다이나믹 지글 본 애니메이션)

  • Park, Sung-Jun;An, Deug-Yong;Oh, Seong-Suk
    • Journal of Korea Game Society / v.10 no.1 / pp.127-134 / 2010
  • Secondary animation for detailed objects such as accessories is actively studied and applied in modern game development. The jiggle-bone deformers in 3D graphics tools can animate such objects, but real-time modification is difficult and costly in developer time. Secondary animation can also be produced with a physics engine, but the cost of the animation process grows when many objects in a game scene must be rendered, so efficiency is low. This paper proposes a dynamic jiggle-bone animation algorithm that can be modified in real time and achieves an effect similar to a physics engine. To evaluate its performance, tests were conducted with varying numbers of bones and with a single scene animating many jiggle-bones; the results show the proposed algorithm to be comparatively efficient.
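
A jiggle bone is commonly implemented as a spring-damper that lets a bone tip lag behind and oscillate around its animated target. The sketch below shows that general idea under assumed parameters; it does not reproduce the paper's algorithm.

```python
import numpy as np

class JiggleBone:
    def __init__(self, stiffness=80.0, damping=8.0):
        self.k, self.b = stiffness, damping
        self.pos = np.zeros(3)   # simulated tip position
        self.vel = np.zeros(3)

    def update(self, target, dt):
        # target: where the parent animation says the tip should be.
        # Spring pulls toward the target, damping bleeds off oscillation.
        force = self.k * (target - self.pos) - self.b * self.vel
        self.vel += force * dt   # unit mass, semi-implicit Euler
        self.pos += self.vel * dt
        return self.pos
```

Calling update() every frame with the animated tip position yields lagging, oscillating secondary motion; stiffness and damping control how quickly it settles.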

A Constrained Learning Method based on Ontology of Bayesian Networks for Effective Recognition of Uncertain Scenes (불확실한 장면의 효과적인 인식을 위한 베이지안 네트워크의 온톨로지 기반 제한 학습방법)

  • Hwang, Keum-Sung;Cho, Sung-Bae
    • Journal of KIISE: Software and Applications / v.34 no.6 / pp.549-561 / 2007
  • Vision-based scene understanding infers and interprets the context of a scene from evidence obtained by analyzing images. Probabilistic approaches using Bayesian networks, which are well suited to modeling and inferring cause-and-effect relations, are actively researched. However, real situations are dynamic and uncertain, so it is difficult to gather sufficient meaningful evidence and to design the model by hand. In this paper, we propose a Bayesian network learning method that reduces computational complexity and enhances accuracy by searching for an efficient BN structure despite insufficient evidence and training data. The method represents domain knowledge as an ontology and builds an efficient hierarchical BN structure under constraint rules derived from the ontology. To evaluate the proposed method, we collected 90 images covering nine types of circumstances. The experimental results indicate that the proposed method performs well in uncertain environments despite sparse evidence, and that it takes less time to learn.
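
To make the constraint idea concrete, the sketch below assumes the ontology assigns each variable an abstraction level and only permits edges from one level to the next; this prunes the candidate edge set before any score-based structure search runs. The variables and levels are invented for illustration.

```python
from itertools import permutations

# Hypothetical ontology layering: high-level scene concepts at the top,
# raw image features at the bottom.
ontology_level = {
    "Place": 0, "Activity": 1,            # high-level concepts
    "Object": 2,                          # mid-level evidence
    "Color": 3, "Texture": 3, "Shape": 3  # low-level image features
}

def allowed_edges(variables):
    # Only edges from a level to the next lower level survive,
    # enforcing the hierarchical BN structure.
    return [(u, v) for u, v in permutations(variables, 2)
            if ontology_level[u] == ontology_level[v] - 1]

candidates = allowed_edges(list(ontology_level))
print(len(candidates), "edges survive the ontology constraints")
```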

Design and Implementation of An MPEG-4 Dynamic Service Framework (MPEG-4 동적서비스 프레임워크 설계 및 구현)

  • Lee, Kwang-Eui (이광의)
    • Journal of Korea Multimedia Society / v.5 no.5 / pp.488-493 / 2002
  • MPEG-4 movies are composed of several media objects organized in a hierarchical fashion, and these media objects are served to clients as elementary streams. To play a movie, client players compose the elementary streams according to meta-information called the scene graph, which is delivered as BIFS and OD elementary streams. By generating BIFS and OD streams dynamically, we can provide a service that differs from traditional file services; for example, we can insert weather or stock information at the bottom of the screen while an existing movie plays. In this paper, we propose a dynamic service framework and a dynamic server. The dynamic service framework is an object-oriented framework that dynamically generates BIFS and OD streams from external DB information. The dynamic server provides a GUI for server management and an interface for registering dynamic services. In this framework, a dynamic service exposes the same interface as a file service, so clients and other services treat it as if it were a file service.
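
The claim that a dynamic service "has the same interface as a file service" suggests a design like the following sketch, where both services implement one stream-producing interface; the class and method names are illustrative, not the paper's API, and the BIFS encoder is a placeholder.

```python
from abc import ABC, abstractmethod

class ElementaryStreamService(ABC):
    # The one interface both service kinds share: produce stream chunks.
    @abstractmethod
    def next_chunk(self) -> bytes: ...

class FileService(ElementaryStreamService):
    def __init__(self, path):
        self._f = open(path, "rb")
    def next_chunk(self) -> bytes:
        return self._f.read(4096)

class DynamicService(ElementaryStreamService):
    def __init__(self, db_query):
        self._query = db_query           # e.g. fetch latest stock quotes
    def next_chunk(self) -> bytes:
        data = self._query()             # external DB information
        return encode_bifs_update(data)  # hypothetical BIFS encoder

def encode_bifs_update(data) -> bytes:
    # Placeholder: a real implementation would emit a BIFS update frame.
    return repr(data).encode()
```

Because clients see only ElementaryStreamService, a dynamically generated stream is indistinguishable from one read off disk, which is the framework's central design point.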


Development of a CAE Middleware and a Visualization System for Supporting Interoperability of Continuous CAE Analysis Data (연속해석 데이터의 상호운용성을 지원하는 CAE 미들웨어와 가시화 시스템의 개발)

  • Song, In-Ho;Yang, Jeong-Sam;Jo, Hyun-Jei;Choi, Sang-Su
    • Korean Journal of Computational Design and Engineering / v.15 no.2 / pp.85-93 / 2010
  • This paper proposes a CAE data translation and visualization technique that can verify time-varying continuous analysis simulations in a virtual reality (VR) environment. In previous research, the use of CAE analysis data has been hampered by the lack of interactive simulation controls for visualizing continuous simulation data, and research on post-processing methods for real-time verification of CAE analysis data has been insufficient. We therefore propose a scene-graph-based visualization method and a post-processing method that support interoperability of continuous CAE analysis data. These methods can continuously visualize static analysis data independently of any timeline, as well as dynamic analysis data that varies along the timeline. The visualization system for continuous simulation data, which includes a CAE middleware that interfaces with various formats of CAE analysis data together with visualization and operational functions, enables users to verify simulation results with more realistic scenes. We also use the system for a performance evaluation of continuous simulation data visualization.
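
A timeline-driven scene-graph node might look like the sketch below, which shows one plausible way static data (a single frame) and dynamic data (per-timestep frames) can share the same playback loop; the names are assumptions, not the system's API.

```python
import bisect

class AnalysisNode:
    def __init__(self, times, frames):
        self.times = times    # sorted simulation timestamps
        self.frames = frames  # one mesh/field snapshot per timestamp

    def frame_at(self, t):
        if len(self.frames) == 1:         # static analysis data:
            return self.frames[0]         # shown regardless of the clock
        i = bisect.bisect_right(self.times, t) - 1
        return self.frames[max(0, i)]     # dynamic data follows the clock

def play(nodes, t_end, dt, draw):
    # Playback loop sketch: advance the clock, redraw each node's frame.
    t = 0.0
    while t <= t_end:
        for n in nodes:
            draw(n.frame_at(t))
        t += dt
```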

Motion-Based Background Subtraction without Geometric Computation in Dynamic Scenes

  • Kawamoto, Kazuhiko;Imiya, Atsushi;Hirota, Kaoru
    • Proceedings of the Korean Institute of Intelligent Systems Conference / 2003.09a / pp.559-562 / 2003
  • A motion-based background subtraction method that requires no geometric computation is proposed, under the assumption that the camera moves parallel to the ground plane with uniform velocity. The proposed method subtracts the background region from a given image by evaluating the difference between calculated and model flows. This approach is insensitive to small errors in the calculated optical flows. Furthermore, in order to handle significant errors, a strategy for incorporating a set of optical flows calculated over different frame intervals is presented. An experiment with two real image sequences, in which a static box or a moving toy car appears, evaluates the accuracy under varying thresholds using a receiver operating characteristic (ROC) curve. The ROC curves show that, in the best case, figure-ground segmentation is achieved at 17.8% false positive fraction (FPF) and 71.3% true positive fraction (TPF) for the static-object scene, and at 14.8% FPF and 72.4% TPF for the moving-object scene, even when the calculated optical flows contain significant errors.
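
Under the paper's camera assumption, the background model flow is approximately a constant image-plane vector, so foreground can be flagged wherever the measured flow deviates from it. The sketch below uses OpenCV's Farneback flow with an assumed model flow and threshold; the multi-interval error-handling strategy is omitted.

```python
import cv2
import numpy as np

MODEL_FLOW = np.array([4.0, 0.0])   # expected background flow, px/frame (assumed)
THRESHOLD = 2.0                     # residual magnitude cutoff, px (assumed)

def foreground_mask(prev_gray, curr_gray):
    # Dense optical flow between consecutive grayscale frames.
    flow = cv2.calcOpticalFlowFarneback(
        prev_gray, curr_gray, None,
        pyr_scale=0.5, levels=3, winsize=15,
        iterations=3, poly_n=5, poly_sigma=1.2, flags=0)
    # Pixels whose flow deviates from the model flow are foreground.
    residual = np.linalg.norm(flow - MODEL_FLOW, axis=2)
    return (residual > THRESHOLD).astype(np.uint8) * 255
```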


Motion Visualization of a Vehicle Driver Based on Virtual Reality (가상현실 기반에서 차량 운전자 거동의 가시화)

  • Jeong, Yun-Seok;Son, Kwon;Choi, Kyung-Hyun
    • Transactions of the Korean Society of Automotive Engineers / v.11 no.5 / pp.201-209 / 2003
  • Virtual human models are widely used to save time and expense in vehicle safety studies, and a human model is an essential tool for visualizing and simulating a vehicle driver in virtual environments. This research focuses on the creation and application of a human model for virtual reality. Published Korean anthropometric data are used to determine the basic human model dimensions. These data are fed to GEBOD, a human body data generation program, which computes body segment geometry, mass properties, joint locations, and mechanical properties. The human model was then constructed in MADYMO based on the GEBOD data. Frontal crash and bump-passing tests were simulated, and the computed driver motion data were transmitted into the virtual environment. The human model was organized into scene graphs and its motion was visualized with virtual reality techniques including OpenGL Performer. The model can be controlled by an arm master to test driver behavior in the virtual environment.

Fast Scene Understanding in Urban Environments for an Autonomous Vehicle equipped with 2D Laser Scanners (무인 자동차의 2차원 레이저 거리 센서를 이용한 도시 환경에서의 빠른 주변 환경 인식 방법)

  • Ahn, Seung-Uk;Choe, Yun-Geun;Chung, Myung-Jin
    • The Journal of Korea Robotics Society / v.7 no.2 / pp.92-100 / 2012
  • A map of a complex environment can be generated by a robot carrying sensors. However, a representation built directly from the integration of sensor data conveys only spatial occupancy; to execute high-level applications, robots need semantic knowledge of their environments. This research investigates the design of a system for recognizing objects in 3D point clouds of urban environments. The proposed system is decomposed into five steps: sequential LIDAR scanning, point classification, ground detection and elimination, segmentation, and object classification. The method can classify the various objects found in urban environments, such as cars, trees, buildings, and posts. Simple methods that minimize time-consuming processing are developed to guarantee real-time performance and to classify data on the fly as it is acquired. To evaluate the proposed methods, computation time and recognition rate are analyzed. Experimental results demonstrate that the proposed algorithm efficiently and quickly extracts semantic knowledge of a dynamic urban environment.
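
The five-step pipeline can be skeletonized as below; the ground threshold, grid-based segmentation, and bounding-box rules are toy stand-ins for the paper's methods.

```python
import numpy as np

def understand_scan(points):            # points: (N, 3) LIDAR hits
    # Steps 1-3. Point classification and ground elimination: treat
    # points near the sensor's ground plane as ground and drop them
    # (0.3 m is an assumed height threshold).
    non_ground = points[points[:, 2] > 0.3]

    # Step 4. Segmentation: cluster the remainder on a coarse 2D
    # occupancy grid (0.5 m cells; a real system would merge neighbors).
    keys = np.floor(non_ground[:, :2] / 0.5).astype(int)
    clusters = {}
    for key, p in zip(map(tuple, keys), non_ground):
        clusters.setdefault(key, []).append(p)

    # Step 5. Object classification by bounding-box size (toy rules).
    labels = []
    for pts in clusters.values():
        pts = np.array(pts)
        size = pts.max(axis=0) - pts.min(axis=0)
        if size[2] > 4.0:
            labels.append("building")
        elif size[0] > 1.5 and size[2] < 2.0:
            labels.append("car")
        else:
            labels.append("post/tree")
    return labels
```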