• Title/Summary/Keyword: Dynamic Scene


Block Erection Simulation in Shipbuilding Using the Open Dynamics Module and Graphics Module (범용 동역학 모듈과 가시화 모듈을 이용한 조선 블록 탑재 시뮬레이션)

  • Cha, Ju-Hwan;Roh, Myung-Il;Lee, Kyu-Yeul
    • Korean Journal of Computational Design and Engineering
    • /
    • v.14 no.2
    • /
    • pp.69-76
    • /
    • 2009
  • The development of a simulation system requires many sub-modules, such as a dynamics module, a visualization module, etc. If different freeware is used for each sub-module, it is hard to develop the simulation system by incorporating them, because each uses its own data structures. To solve this problem, a high-level data structure called the Dynamics Scene Graph Data structure (DSGD) is proposed, which wraps the data structures of two freeware libraries: the Open Dynamics Engine (ODE) for the dynamics module and Open Scene Graph (OSG) for the visualization module. Finally, to evaluate the applicability of the proposed data structure, it is applied to block erection simulation in shipbuilding. The result shows that it can be used for developing such a simulation system.
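The wrapping idea above can be sketched in miniature: one high-level record pairs a dynamics body with a visualization node, so a single call advances the physics and pushes the result to the scene graph. This is a minimal sketch of the concept only; the `Body` and `SceneNode` classes are toy stand-ins, not the real ODE or OSG APIs.

```python
# Minimal sketch of the DSGD idea: one record wrapping a dynamics body
# (ODE's role) and a scene-graph node (OSG's role) behind one interface.
# Body and SceneNode are hypothetical stand-ins, not the real libraries.

class Body:
    """Stand-in for an ODE rigid body: position advanced by the dynamics step."""
    def __init__(self, position=(0.0, 0.0, 0.0), velocity=(0.0, 0.0, 0.0)):
        self.position = list(position)
        self.velocity = list(velocity)

    def step(self, dt):
        for i in range(3):
            self.position[i] += self.velocity[i] * dt

class SceneNode:
    """Stand-in for an OSG transform node: holds the drawn position."""
    def __init__(self):
        self.translation = (0.0, 0.0, 0.0)

class DynamicsSceneNode:
    """High-level DSGD-style record pairing one body with one scene node."""
    def __init__(self, body, node):
        self.body = body
        self.node = node

    def step_and_sync(self, dt):
        self.body.step(dt)                                  # dynamics module
        self.node.translation = tuple(self.body.position)   # visualization module

# A crane lifting a block straight up at 0.5 m/s, simulated for 2 s:
block = DynamicsSceneNode(Body(velocity=(0.0, 0.0, 0.5)), SceneNode())
for _ in range(20):
    block.step_and_sync(0.1)
```

Because the wrapper owns both sides, client code never touches the two incompatible library data structures directly, which is the integration problem the abstract describes.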

VIDEO INPAINTING ALGORITHM FOR A DYNAMIC SCENE

  • Lee, Sang-Heon;Lee, Soon-Young;Heu, Jun-Hee;Lee, Sang-Uk
    • Proceedings of the Korean Society of Broadcast Engineers Conference
    • /
    • 2009.01a
    • /
    • pp.114-117
    • /
    • 2009
  • A new video inpainting algorithm is proposed for removing unwanted objects or source errors from video data. In the first step, block bundles are defined from the motion information of the video data to preserve temporal consistency. Next, the block bundles are arranged in a 3-dimensional graph constructed from spatial and temporal correlation. Finally, we pose the inpainting problem as a discrete global optimization and minimize the objective function to find the best temporal bundles for the grid points. Extensive simulation results demonstrate that the proposed algorithm yields visually pleasing video inpainting results even in a dynamic scene.
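The selection step can be illustrated with a much-simplified per-point version of the optimization: for one hole position, pick the candidate block bundle whose pixels best match the known pixels around the hole under an SSD cost. This is a toy stand-in for the paper's global objective, with made-up pixel values.

```python
# Simplified sketch of bundle selection for one grid point: choose the
# candidate block bundle with minimal SSD cost against the known border
# pixels. The paper minimizes this kind of cost globally over all points.

def ssd(a, b):
    """Sum of squared differences between two equal-length pixel lists."""
    return sum((x - y) ** 2 for x, y in zip(a, b))

def best_bundle(known_border, candidates):
    """Return the index of the candidate bundle with minimal SSD cost."""
    costs = [ssd(known_border, c) for c in candidates]
    return costs.index(min(costs))

# Known pixels around the hole, and three candidate bundles from elsewhere
# in the video (illustrative values):
border = [10, 12, 11, 13]
candidates = [[50, 52, 49, 51], [11, 12, 10, 13], [0, 0, 0, 0]]
choice = best_bundle(border, candidates)
```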


Deep Reference-based Dynamic Scene Deblurring

  • Cunzhe Liu;Zhen Hua;Jinjiang Li
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.18 no.3
    • /
    • pp.653-669
    • /
    • 2024
  • Dynamic scene deblurring is a complex computer vision problem owing to the difficulty of modeling it mathematically. In this paper, we present a novel approach to image deblurring that uses a sharp reference image to recover high-quality, high-frequency detail. To better exploit the clear reference image, we develop an encoder-decoder network with two novel modules that guide the network toward better image restoration. The proposed Reference Extraction and Aggregation Module effectively establishes correspondence between the blurry image and the reference image and extracts the most relevant features for blur removal, while the proposed Spatial Feature Fusion Module enables the encoder to perceive blur information at different spatial scales. Finally, the multi-scale feature maps from the encoder and the cascaded Reference Extraction and Aggregation Modules are integrated into the decoder for global fusion and representation. Extensive quantitative and qualitative experimental results on different benchmarks show the effectiveness of the proposed method.
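The correspondence idea can be sketched simply: match each blurry-image feature to its most similar reference feature (here by cosine similarity) and aggregate the matched features. The feature vectors below are toy stand-ins for learned encoder features; the real module operates on deep feature maps.

```python
# Hedged sketch of reference-based correspondence: for each blurry-image
# feature, pick the most similar reference feature by cosine similarity.
# Toy 2-D vectors stand in for learned encoder features.
import math

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def aggregate_reference(blurry_feats, ref_feats):
    """For each blurry feature, return its most similar reference feature."""
    out = []
    for f in blurry_feats:
        best = max(ref_feats, key=lambda r: cosine(f, r))
        out.append(best)
    return out

blurry = [[1.0, 0.1], [0.1, 1.0]]
reference = [[2.0, 0.0], [0.0, 3.0], [1.0, 1.0]]
matched = aggregate_reference(blurry, reference)
```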

3D Analysis of Scene and Light Environment Reconstruction for Image Synthesis (영상합성을 위한 3D 공간 해석 및 조명환경의 재구성)

  • Hwang, Yong-Ho;Hong, Hyun-Ki
    • Journal of Korea Game Society
    • /
    • v.6 no.2
    • /
    • pp.45-50
    • /
    • 2006
  • In order to generate a photo-realistic synthesized image, the light environment must be reconstructed through 3D analysis of the scene. This paper presents a novel method for identifying the positions and characteristics of the lights (both global and local) in a real image, which are used to illuminate synthetic objects. First, we generate a High Dynamic Range (HDR) radiance map from omni-directional images taken by a digital camera with a fisheye lens. Then, the positions of the camera and light sources in the scene are identified automatically from correspondences between images, without a priori camera calibration. Light sources are classified by whether they illuminate the whole scene, and the 3D illumination environment is then reconstructed. Experimental results showed that the proposed method, combined with distributed ray tracing, achieves photo-realistic image synthesis. Animators and lighting experts in the film and animation industries are expected to benefit from it.
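The first step, building an HDR radiance map from multiple exposures, can be sketched for a single pixel: assuming a linear camera response, each exposure votes for the radiance with weight given by a hat function that trusts mid-range pixel values over under- or over-exposed ones. This is a simplified illustration, not the paper's exact recovery procedure.

```python
# Minimal HDR radiance sketch for one pixel, assuming a linear camera
# response: average log(pixel / exposure_time) over exposures, weighted
# by a hat function favoring well-exposed values.
import math

def hat_weight(z, z_min=0.0, z_max=255.0):
    """Triangular weight: low near the under/over-exposed extremes."""
    mid = 0.5 * (z_min + z_max)
    return (z - z_min) if z <= mid else (z_max - z)

def radiance(pixel_values, exposure_times):
    """Weighted log-radiance average over exposures of one pixel."""
    num = 0.0
    den = 0.0
    for z, t in zip(pixel_values, exposure_times):
        w = hat_weight(z)
        num += w * (math.log(z + 1e-6) - math.log(t))
        den += w
    return math.exp(num / den)

# One pixel seen at exposure times 1/60 s and 1/15 s; both exposures
# imply the same underlying radiance (60*60 = 240*15 = 3600):
E = radiance([60.0, 240.0], [1.0 / 60.0, 1.0 / 15.0])
```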


A Study on Transfer form in Action Scene of Animation (애니메이션 액션장면의 이동형태에 관한 연구)

  • 오정석;윤호창;고상미
    • Proceedings of the Korea Contents Association Conference
    • /
    • 2003.11a
    • /
    • pp.460-467
    • /
    • 2003
  • According to research material from the Korean Society of Cartoon & Animation Studies, about 50% of surveyed consumers cited interest (entertainment value) as the basis of their preference for the animation genre. Animation has various factors that induce interest, and character action scenes can be counted among them. Focusing on the Japanese animations 'Princess Mononoke' and 'Neon Genesis Evangelion', this study analyzes the characteristic features and differences in the transfer (movement) forms of characters and objects shown across cutaways in action scenes, and examines whether the resulting forms of expression act as a dynamic element for actual consumers.


A Camera Information Detection Method for a Dynamic Scene (Dynamic scene에 대한 카메라 정보 추출 기법)

  • Ko, Jung-Hwan
    • Journal of the Institute of Electronics and Information Engineers
    • /
    • v.50 no.5
    • /
    • pp.275-280
    • /
    • 2013
  • In this paper, a new stereo object extraction algorithm using a block-based MSE (mean square error) algorithm and the configuration parameters of a stereo camera is proposed. By applying the SSD algorithm between the initial reference image and the next stereo input image, the location coordinates of a target object in the left and right images are acquired, and these values are used to control the pan/tilt system. Using the moving angle of the pan/tilt system and the configuration parameters of the stereo camera system, the mask window size for the target object is determined adaptively. The newly segmented target image serves as the reference image in the next stage and is updated automatically during target tracking by the same procedure. Meanwhile, the target object is tracked by continuously controlling the convergence and field of view (FOV) using the sequentially extracted location coordinates of the moving target.
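The block-matching step can be sketched directly: slide the reference block over the input frame and take the position with minimal SSD as the target's coordinates (which would then drive the pan/tilt control). Frame and block values below are illustrative.

```python
# Sketch of block-based SSD matching: exhaustively search the frame for
# the position where the reference block has minimal sum of squared
# differences. That (row, col) is the detected target location.

def ssd_match(frame, block):
    """Return (row, col) of the best SSD match of block inside frame."""
    bh, bw = len(block), len(block[0])
    fh, fw = len(frame), len(frame[0])
    best, best_pos = None, None
    for r in range(fh - bh + 1):
        for c in range(fw - bw + 1):
            cost = sum(
                (frame[r + i][c + j] - block[i][j]) ** 2
                for i in range(bh) for j in range(bw)
            )
            if best is None or cost < best:
                best, best_pos = cost, (r, c)
    return best_pos

frame = [
    [0, 0, 0, 0],
    [0, 9, 8, 0],
    [0, 7, 9, 0],
    [0, 0, 0, 0],
]
block = [[9, 8], [7, 9]]
pos = ssd_match(frame, block)
```

In the paper's setting this search would be run on both the left and right images, and the two coordinates fed to the pan/tilt and convergence control.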

Background Subtraction in Dynamic Environment based on Modified Adaptive GMM with TTD for Moving Object Detection

  • Niranjil, Kumar A.;Sureshkumar, C.
    • Journal of Electrical Engineering and Technology
    • /
    • v.10 no.1
    • /
    • pp.372-378
    • /
    • 2015
  • Background subtraction is the first processing stage in video surveillance. It is a general term for processes that aim to separate foreground objects from the background; the goal is to construct and maintain a statistical representation of the scene that the camera sees. The output of background subtraction serves as input to higher-level processes. Background subtraction in dynamic environments in video sequences is one such complex task, and it is an important research topic in image analysis and computer vision. This work addresses background modeling based on a modified adaptive Gaussian mixture model (GMM) with a three temporal differencing (TTD) method in dynamic environments. Results of background subtraction on several sequences in various testing environments show that the proposed method is efficient and robust for dynamic environments and achieves good accuracy.
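The temporal-differencing side can be sketched on its own: in a three-frame scheme, a pixel is marked moving only if it changes between frames 1-2 AND frames 2-3, which suppresses the ghosting that single-frame differencing leaves behind. The threshold value is a hypothetical tuning parameter; the full method combines this with the adaptive GMM background model.

```python
# Sketch of three temporal differencing (TTD): a pixel is foreground only
# if it differs by more than threshold T in BOTH consecutive frame pairs.

def ttd_mask(f1, f2, f3, T=10):
    """Binary motion mask from three consecutive grayscale frames."""
    h, w = len(f1), len(f1[0])
    return [
        [1 if abs(f2[r][c] - f1[r][c]) > T and abs(f3[r][c] - f2[r][c]) > T
         else 0
         for c in range(w)]
        for r in range(h)
    ]

# A 1x3 toy sequence: an object enters the middle pixel, then moves right.
f1 = [[0, 0, 0]]
f2 = [[0, 50, 0]]
f3 = [[0, 0, 50]]
mask = ttd_mask(f1, f2, f3)
```

Only the middle pixel changes in both pairs, so only it is flagged; the right pixel, which changed in just one pair, is not, illustrating the ghost suppression.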

Semantic Visual Place Recognition in Dynamic Urban Environment (동적 도시 환경에서 의미론적 시각적 장소 인식)

  • Arshad, Saba;Kim, Gon-Woo
    • The Journal of Korea Robotics Society
    • /
    • v.17 no.3
    • /
    • pp.334-338
    • /
    • 2022
  • In visual simultaneous localization and mapping (vSLAM), correct recognition of a place benefits relocalization and improves map accuracy. However, its performance is significantly affected by environmental conditions such as variation in light, viewpoint, and season, and the presence of dynamic objects. This research addresses the problem of feature occlusion caused by interference from dynamic objects, which degrades the performance of visual place recognition algorithms. To overcome this problem, this research analyzes the role of scene semantics in correct detection of a place in challenging environments and presents a semantics-aided visual place recognition method. Semantics, being invariant to viewpoint changes and dynamic environments, can improve the overall performance of place matching. The proposed method is evaluated on two benchmark datasets with dynamic environments and seasonal changes. Experimental results show improved performance of visual place recognition for vSLAM.
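One common way to aid place recognition with semantics, sketched below, is to drop features that land on pixels labeled with dynamic classes before matching, so moving objects cannot corrupt the place descriptor. The class names and feature format here are illustrative assumptions, not the paper's exact pipeline.

```python
# Sketch of semantics-aided filtering: keep only features whose pixel is
# labeled with a static class, discarding those on dynamic objects.
# DYNAMIC_CLASSES and the feature dict format are illustrative.

DYNAMIC_CLASSES = {"car", "person", "bus"}

def filter_static_features(features, semantic_map):
    """Keep features whose (row, col) lands on a static-class pixel."""
    return [
        f for f in features
        if semantic_map[f["row"]][f["col"]] not in DYNAMIC_CLASSES
    ]

semantic_map = [
    ["building", "car"],
    ["road", "person"],
]
features = [
    {"row": 0, "col": 0, "desc": "corner"},
    {"row": 0, "col": 1, "desc": "headlight"},
    {"row": 1, "col": 1, "desc": "elbow"},
]
static = filter_static_features(features, semantic_map)
```

The surviving features come only from structures likely to persist across visits, which is what makes the place descriptor robust to dynamic objects.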

Retinex-based Logarithm Transformation Method for Color Image Enhancement (컬러 이미지 화질 개선을 위한 Retinex 기반의 로그변환 기법)

  • Kim, Donghyung
    • Journal of the Korea Academia-Industrial cooperation Society
    • /
    • v.19 no.5
    • /
    • pp.9-16
    • /
    • 2018
  • Images with low illumination from the light source, or with dark regions due to shadows, can be improved in subjective quality by retinex-based image enhancement schemes. The retinex theory models how the human visual system recognizes the relative lightness of a scene, rather than its absolute brightness, and is realized in several methods: single-scale retinex, multi-scale retinex, and multi-scale retinex with color restoration (MSRCR). The proposed method is based on MSRCR, which includes a color restoration step, and consists of three phases. In the first phase, the existing MSRCR method is applied. In the second phase, the dynamic range of the MSRCR output is adjusted according to its histogram. In the last phase, the retinex output value is transformed into the display dynamic range using a logarithmic transformation function that considers human visual system characteristics. Experimental results show that the proposed algorithm effectively increases subjective image quality, not only in dark images but also in images containing both bright and dark areas. Especially for low-lightness images, the proposed algorithm showed greater performance improvement than conventional approaches.
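The single-scale retinex that the multi-scale variants build on can be sketched on a 1-D signal: the output is log(I) minus log of a local surround average (a box blur standing in for the usual Gaussian surround), so relative lightness survives while the overall illumination level cancels out. This is a simplified illustration, not the paper's full MSRCR pipeline.

```python
# Simplified single-scale retinex on a 1-D signal: output is
# log(intensity) - log(local surround). A box blur stands in for the
# Gaussian surround used in practice.
import math

def box_blur(signal, radius=1):
    """Local average with clamped borders."""
    n = len(signal)
    out = []
    for i in range(n):
        lo, hi = max(0, i - radius), min(n, i + radius + 1)
        out.append(sum(signal[lo:hi]) / (hi - lo))
    return out

def single_scale_retinex(signal):
    surround = box_blur(signal)
    return [math.log(v) - math.log(s) for v, s in zip(signal, surround)]

# A dim scene and the same scene twice as bright yield the same retinex
# output, illustrating that relative lightness (not brightness) is kept:
dim = [10.0, 20.0, 10.0]
bright = [20.0, 40.0, 20.0]
r_dim = single_scale_retinex(dim)
r_bright = single_scale_retinex(bright)
```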

MPEG-DASH Services for 3D Contents Based on DMB AF (DMB AF 기반 3D 콘텐츠의 MPEG-DASH 서비스)

  • Kim, Yong Han;Park, Minkyu
    • Journal of Broadcast Engineering
    • /
    • v.18 no.1
    • /
    • pp.115-121
    • /
    • 2013
  • Recently, an extension to the DMB AF (Digital Multimedia Broadcasting Application Format) standard has been proposed so that the extended DMB AF can include stereoscopic video and stereoscopic images in the interactive service data, i.e., MPEG-4 BIFS (Binary Format for Scene) data, in addition to the existing 2D video and 2D images for BIFS services. In this paper, we developed a service that streams 3D content in DMB AF using the MPEG-DASH (Dynamic Adaptive Streaming over HTTP) standard and validated it by implementing the client software.