• Title/Summary/Keyword: virtual digital camera image

Search Results: 43

A Study on Correcting Virtual Camera Tracking Data for Digital Compositing (디지털영상 합성을 위한 가상카메라의 트래킹 데이터 보정에 관한 연구)

  • Lee, Junsang; Lee, Imgeun
    • Journal of the Korea Society of Computer and Information / v.17 no.11 / pp.39-46 / 2012
  • Advances in computing have widened the expressive range available for depicting natural objects and scenes, and cutting-edge computer graphics can effectively create almost any image we can imagine. Although computer graphics plays an important role in film and video production, the domestic content production industry is not in a position to pursue production and research at the same time. In digital compositing, the match-moving stage, which composites a captured live-action sequence with computer-generated imagery, involves many complicated processes. Camera tracking is the most important issue in this stage: it comprises estimating the 3D trajectory and the optical parameters of the real camera. Because this estimation is based only on the captured sequence, it is prone to errors that make the process more difficult. In this paper we propose a method for correcting the tracking data. The proposed method alleviates unwanted camera shaking and object-bouncing artifacts in the composited scene.
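
The abstract does not reproduce the correction algorithm itself; as a rough illustration of the general idea (smoothing noisy per-frame tracking data so the composited footage stops shaking), here is a minimal sketch that applies a centered moving average to a camera trajectory. The function name, window size, and data layout are assumptions for illustration, not the authors' implementation.

```python
import numpy as np

def smooth_tracking_data(positions: np.ndarray, window: int = 9) -> np.ndarray:
    """Smooth a noisy per-frame camera trajectory with a centered moving average.

    positions: (N, 3) array of camera translations, one row per frame.
    window:    odd number of frames to average over.
    """
    half = window // 2
    # Pad by repeating the end frames so the clip length is preserved.
    padded = np.pad(positions, ((half, half), (0, 0)), mode="edge")
    kernel = np.ones(window) / window
    return np.column_stack(
        [np.convolve(padded[:, axis], kernel, mode="valid") for axis in range(3)]
    )

# Usage example: a jittery x-translation settles toward a straight path.
if __name__ == "__main__":
    frames = np.linspace(0.0, 1.0, 100)
    noisy = np.column_stack([frames + 0.02 * np.random.randn(100),
                             np.zeros(100), np.zeros(100)])
    print(smooth_tracking_data(noisy)[:5])
```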

A Real-time Plane Estimation in Virtual Reality Using a RGB-D Camera in Indoors (RGB-D 카메라를 이용한 실시간 가상 현실 평면 추정)

  • Yi, Chuho; Cho, Jungwon
    • Journal of Digital Convergence / v.14 no.11 / pp.319-324 / 2016
  • For robot and augmented reality applications that use a camera in indoor environments, plane estimation is a very important technology. An RGB-D camera can acquire three-dimensional measurements even on flat surfaces that carry no texture information; however, processing the full point-cloud data of each image requires an enormous amount of computation. Furthermore, the number of planes currently observed is not known in advance, and estimating each three-dimensional plane requires additional operations. In this paper, we propose a real-time method that automatically decides the number of planes and estimates the three-dimensional planes from the continuous data stream of an RGB-D camera. In experiments, the proposed method ran approximately 22 times faster than processing the entire point cloud.
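
The abstract does not give the estimator itself; as a hedged sketch of one standard way to fit planes to RGB-D point clouds, here is a minimal RANSAC plane fit. The thresholds and the fit-then-remove-inliers loop suggested in the closing comment are illustrative assumptions, not the authors' method.

```python
import numpy as np

def ransac_plane(points: np.ndarray, iters: int = 200, tol: float = 0.01):
    """Fit a single plane n·x + d = 0 to (N, 3) points with RANSAC.

    Returns (normal, d, inlier_mask); tol is the inlier distance in meters.
    """
    best_inliers = np.zeros(len(points), dtype=bool)
    best_plane = (np.array([0.0, 0.0, 1.0]), 0.0)
    rng = np.random.default_rng(0)
    for _ in range(iters):
        sample = points[rng.choice(len(points), 3, replace=False)]
        normal = np.cross(sample[1] - sample[0], sample[2] - sample[0])
        norm = np.linalg.norm(normal)
        if norm < 1e-9:              # degenerate (collinear) sample
            continue
        normal /= norm
        d = -normal @ sample[0]
        inliers = np.abs(points @ normal + d) < tol
        if inliers.sum() > best_inliers.sum():
            best_inliers, best_plane = inliers, (normal, d)
    return best_plane[0], best_plane[1], best_inliers

# Repeatedly fitting a plane and removing its inliers until too few points
# remain is one simple way to let the data decide how many planes exist.
```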

Image Mosaicking Using Feature Points Based on Color-invariant (칼라 불변 기반의 특징점을 이용한 영상 모자이킹)

  • Kwon, Oh-Seol; Lee, Dong-Chang; Lee, Cheol-Hee; Ha, Yeong-Ho
    • Journal of the Institute of Electronics Engineers of Korea SP / v.46 no.2 / pp.89-98 / 2009
  • In computer vision, image mosaicking is a common way to effectively extend the restricted field of view of a camera by combining a set of separate images into a single seamless image. Feature-point-based mosaicking has recently been a focus of research because the geometric transformation between consecutive images can be estimated simply, regardless of the distortions and intensity differences produced by camera motion. Yet, since most feature-point matching algorithms extract feature points from gray values, identifying corresponding points becomes difficult under changing illumination or between images of similar intensity. To address these problems, this paper proposes a feature-point-based mosaicking method that uses the color information of the images. The digital values acquired from a digital color camera are first converted into the values of a virtual camera with distinct narrow bands. Values that depend on surface reflectance and are invariant to the chromaticity of the illumination are then derived from the virtual camera values and defined as color-invariant values. The validity of these color invariants is verified with a Macbeth ColorChecker under simulated illuminations, and the proposed method is compared with the conventional SIFT algorithm. The matching accuracy of the feature points extracted with the proposed method is increased, while mosaicking based on color information is also achieved.
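
The paper's reflectance-based invariants are derived from a virtual narrow-band camera, which requires the camera's spectral characterization; that derivation is not reproduced here. As a loosely related, minimal sketch of an illumination-insensitive color representation under a narrow-band (von Kries) assumption, the log band ratios below shift only by a spatially constant offset when the illuminant changes, so their gradients are illumination-invariant. Names and details are illustrative.

```python
import numpy as np

def log_chromaticity(image: np.ndarray) -> np.ndarray:
    """Per-pixel log band ratios for an (H, W, 3) linear RGB image.

    Under a von Kries model, an illuminant change scales each channel by its
    own constant, which appears as a spatially constant additive offset in
    log(R/G) and log(B/G); gradients of these channels are therefore
    unaffected by the illuminant and can feed feature detection/matching.
    """
    eps = 1e-6
    r = image[..., 0] + eps
    g = image[..., 1] + eps
    b = image[..., 2] + eps
    return np.dstack([np.log(r / g), np.log(b / g)])
```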

Study on the Visual Characteristics and Subjectivity in the Live Action Based Virtual Reality (실사기반 가상현실 영상의 특징과 주체 구성에 대한 연구)

  • Jeon, Gyongran
    • Cartoon and Animation Studies / s.48 / pp.117-139 / 2017
  • The interactivity made possible by the digital media environment has been adopted into systems of human expression, integrating the dynamic character of digital technology with expressive structure and thereby transforming both the range of image expression and the paradigm of image reception. Virtual reality images are significant in that, beyond questions of verisimilitude such as how vividly they simulate reality, they change the one-way mechanism of image production and reception that runs from producer to video to audience. First, the virtual reality image is not one-sided but an interactive image composed by the user. Viewing a virtual reality image is not simply seeing what the camera shows; the viewer obtains a view comparable to that in the real world. The image that was once controlled through framing therefore comes to be actively configured by the user. This implies a change in the paradigm of image reception as well as a change in the existing form of the image itself. The narrative structure of the image, and the subjects formed in that process, also need to be discussed. In the virtual reality image, the user's gaze is a fusion of a gaze inside the image and a gaze outside it, because the user's position as the subject of the gaze is continuously constrained by discursive devices such as shot editing and narration. The significance of the virtual reality image lies not in aesthetic perfection but in being reconstructed around the user, actively reflecting the user's presence and engaging the user in the image.

A Study on the Production Efficiency of Movie Filming Environment Using 360° VR (360VR을 활용한 영화촬영 환경을 위한 제작 효율성 연구)

  • Lee, Young-suk; Kim, Jungwhan
    • Journal of Korea Multimedia Society / v.19 no.12 / pp.2036-2043 / 2016
  • 360° virtual reality (VR) live-action movies are filmed by attaching multiple cameras to a rig and shooting omnidirectionally. In particular, for a live-action film that requires a variety of scenes, the director of photography and the crew usually have to operate the rigged cameras directly around the scene and then edit the footage in post-production, so the entire process incurs considerable time and cost. However, high-quality omnidirectional footage could be acquired with fewer staff if the camera rig could be controlled remotely, allowing more flexible camera movement. This study therefore proposes a 360° VR filming system with a remote-controlled camera rig, with which producers can create movies that provide greater immersion.

Light 3D Modeling with mobile equipment (모바일 카메라를 이용한 경량 3D 모델링)

  • Ju, Seunghwan; Seo, Heesuk; Han, Sunghyu
    • Journal of Korea Society of Digital Industry and Information Management / v.12 no.4 / pp.107-114 / 2016
  • 3D-related technology has recently become a hot topic in IT: technologies such as 3D TV, Kinect, and 3D printers are becoming increasingly popular. In line with this trend, the goal of this study is to make 3D technology easily accessible to the general public. We have developed a web-based application that builds a 3D facial model from front and side photographs taken with a mobile phone. Two photographs (front and side) are captured with the mobile camera, and an Active Shape Model (ASM) together with skin binarization is used to extract the face from the front photograph and facial depth, such as the height of the nose, from the side photograph. Three-dimensional coordinates are generated from the face extracted from the front photograph and the depth obtained from the side photograph. Using these 3D coordinates as control points on a standard face model, Radial Basis Function (RBF) interpolation deforms the standard model into the subject's face. To texture the deformed model, the control points found in the front photograph are mapped to texture coordinates to generate a texture image. Finally, the deformed face model is covered with this texture image and the 3D model is displayed to the user.
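
The RBF deformation step can be sketched concretely. The following minimal example, under assumptions not taken from the paper (biharmonic kernel, regularization constant, array layout), moves a standard face mesh so that chosen control points land on the measured 3D coordinates.

```python
import numpy as np

def rbf_deform(vertices, src_ctrl, dst_ctrl, eps=1e-8):
    """Deform mesh vertices so control points src_ctrl move to dst_ctrl.

    vertices: (V, 3) standard face model vertices.
    src_ctrl: (C, 3) control points on the standard model.
    dst_ctrl: (C, 3) corresponding points measured from the photographs.
    Uses the biharmonic kernel phi(r) = r (a thin-plate-like RBF in 3D).
    """
    def phi(r):
        return r

    # Solve for per-control-point weights of the displacement field.
    d = np.linalg.norm(src_ctrl[:, None, :] - src_ctrl[None, :, :], axis=-1)
    A = phi(d) + eps * np.eye(len(src_ctrl))          # regularize for stability
    weights = np.linalg.solve(A, dst_ctrl - src_ctrl)  # (C, 3)

    # Evaluate the interpolated displacement at every mesh vertex.
    dv = np.linalg.norm(vertices[:, None, :] - src_ctrl[None, :, :], axis=-1)
    return vertices + phi(dv) @ weights
```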

Real-Time Individual Tracking of Multiple Moving Objects for Projection based Augmented Visualization (다중 동적객체의 실시간 독립추적을 통한 프로젝션 증강가시화)

  • Lee, June-Hyung; Kim, Ki-Hong
    • Journal of Digital Convergence / v.12 no.11 / pp.357-364 / 2014
  • When markers to be tracked move fast, AR content flickers as the images captured from the camera are updated. Conventional methods that employ image-based markers and SLAM algorithms for object tracking have the limitation that two or more objects cannot be tracked simultaneously and made to interact with each other in the same camera scene. In this paper, an improved SLAM-type algorithm for tracking dynamic objects is proposed and investigated to solve this problem. To this end, two virtual cameras are used for one physical camera, which allows the two tracked objects to interact with each other because they are perceived separately through the single physical camera. Mobile robots used as the dynamic objects are synchronized with virtual robots in the designed content, demonstrating the usefulness of applying individual tracking of multiple moving objects to projection-based augmented visualization.
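
The abstract does not detail how the two virtual cameras are derived from the single physical camera; one plausible reading is that each tracked object gets its own cropped view with adjusted intrinsics. The sketch below illustrates that reading only; the ROI scheme and names are assumptions, not the authors' algorithm.

```python
import numpy as np

def virtual_camera_intrinsics(K: np.ndarray, roi: tuple) -> np.ndarray:
    """Intrinsics of a 'virtual camera' that sees only a crop of the frame.

    K:   3x3 intrinsic matrix of the physical camera.
    roi: (x0, y0, width, height) crop in pixel coordinates.
    Cropping shifts the principal point; focal lengths are unchanged.
    """
    x0, y0, _, _ = roi
    K_virtual = K.copy()
    K_virtual[0, 2] -= x0   # shift cx
    K_virtual[1, 2] -= y0   # shift cy
    return K_virtual

def split_frame(frame: np.ndarray, rois):
    """Return one cropped view per ROI so each object can be tracked independently."""
    return [frame[y:y + h, x:x + w] for (x, y, w, h) in rois]
```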

Acquisition of HDR image using estimation of scenic dynamic range in images with various exposures (다중 노출 복수 영상에서 장면의 다이내믹 레인지 추정을 통한 HDR 영상 획득)

  • Park, Dae-Geun; Park, Kee-Hyon; Ha, Yeong-Ho
    • Journal of the Institute of Electronics Engineers of Korea SP / v.45 no.2 / pp.10-20 / 2008
  • Generally, acquiring an HDR image requires many images with different exposure times that cover the entire dynamic range of the scene; these images are then fused into one HDR image. This paper proposes an efficient method for HDR image acquisition with a small number of images. First, the scenic dynamic range is estimated from two images with different exposure times, one containing the upper limit and the other the lower limit of the scenic dynamic range. Independently of the scene, as the exposure time varies, the maximum gray levels of images containing the upper limit and the minimum gray levels of images containing the lower limit show similar characteristics. After modeling these characteristics, the scenic dynamic range is estimated from the model, and the estimate is used to select proper exposure times for HDR acquisition. Only three exposure times are selected, because the estimated scenic range can be covered by the camera's dynamic range at three different exposures. To evaluate the error of the resulting HDR image, experiments using virtual digital camera images were carried out. For several test images, the error of the HDR image produced by the proposed method was comparable to that of an HDR image fused from more than ten exposures.
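
The exposure-selection model is the paper's contribution and is not given in the abstract; the surrounding fusion step, however, is standard. As a minimal sketch, and assuming a linear sensor response, the following fuses a few selected exposures into a radiance map with a hat weighting.

```python
import numpy as np

def fuse_exposures(images, exposure_times):
    """Fuse differently exposed images into one HDR radiance map.

    images:         list of (H, W) arrays in [0, 1], assumed linear response.
    exposure_times: matching list of exposure times in seconds.
    Each pixel's radiance estimate E = Z / t is averaged with a hat weight
    that downweights under- and over-exposed values.
    """
    num = np.zeros_like(images[0], dtype=np.float64)
    den = np.zeros_like(images[0], dtype=np.float64)
    for z, t in zip(images, exposure_times):
        w = 1.0 - np.abs(2.0 * z - 1.0)   # hat weight, peaks at mid-gray
        num += w * (z / t)
        den += w
    return num / np.maximum(den, 1e-8)
```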

Implementation of Embedded System Based Simulator Controller Using Camera Motion Parameter Extractor (카메라 모션 벡터 추출기를 이용한 임베디드 기반 가상현실 시뮬레이터 제어기의 설계)

  • Lee Hee-Man; Park Sang-Jo
    • The Journal of the Korea Contents Association / v.6 no.4 / pp.98-108 / 2006
  • In the past, image processing systems were implemented as stand-alone units and their applications were limited to simple display. The scope of image processing systems has since broadened considerably, owing to the development of image processing IC chips. In this paper, we implement an image processing system that operates independently of a PC by converting analogue image signals into digital signals. The proposed system extracts motion parameters from the digitized image signals, generates the corresponding virtual movement, and uses it to drive the simulator.
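
The abstract does not specify how the motion parameters are extracted from the digitized frames; one common approach is block matching between consecutive frames. The sketch below estimates a single global displacement that could drive a simulator axis; the block size, search range, and names are assumptions, not the authors' design.

```python
import numpy as np

def global_motion(prev: np.ndarray, curr: np.ndarray, search: int = 8):
    """Estimate a global (dx, dy) between two grayscale frames by matching
    a central block of the previous frame within a small search window.

    Assumes frames are comfortably larger than the search range.
    """
    h, w = prev.shape
    bh, bw = h // 4, w // 4                        # central block size
    y0, x0 = (h - bh) // 2, (w - bw) // 2
    block = prev[y0:y0 + bh, x0:x0 + bw].astype(np.float64)

    best, best_err = (0, 0), np.inf
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            cand = curr[y0 + dy:y0 + dy + bh, x0 + dx:x0 + dx + bw]
            err = np.mean((cand.astype(np.float64) - block) ** 2)
            if err < best_err:
                best, best_err = (dx, dy), err
    return best   # displacement that would drive the simulator motion
```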


Study of Image Production using Steadicam Effects for 3D Camera (3D 카메라 기반 스테디캠 효과를 적용한 영상제작에 관한연구)

  • Lee, Junsang; Park, Sungdae; Lee, Imgeun
    • Journal of the Korea Institute of Information and Communication Engineering / v.18 no.12 / pp.3035-3041 / 2014
  • Steadicam effects are widely used in 3D animation production to obtain natural camera movement. The conventional approach is keyframe animation, which is a tedious and time-consuming process, and it is difficult to reproduce real-world camera movement naturally this way. In this paper we propose a novel method for producing steadicam effects with the virtual camera in 3D animation. We model a real-world camera inside the Maya production tools, taking gravity, mass, and elasticity into account. The model is implemented in Python and applied directly to the Maya platform as a filter module. The proposed method reduces production time, improves the production environment, and yields more natural, realistic footage that maximizes the visual effect.
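
The Maya filter module itself is not published in the abstract; as a minimal sketch of the physical idea it describes (a camera with mass following its keyframed path through a spring-damper, so gravity and elasticity shape the motion), the following filters a raw path outside of Maya. Parameter values and names are illustrative assumptions.

```python
import numpy as np

def steadicam_filter(target_path, mass=1.0, stiffness=40.0, damping=9.0,
                     gravity=(0.0, -9.8, 0.0), dt=1.0 / 24.0):
    """Smooth a keyframed camera path with a mass/spring/damper model.

    target_path: (N, 3) raw per-frame camera positions (e.g. from keyframes).
    Returns an (N, 3) filtered path that lags and settles like a steadicam rig.
    """
    pos = np.array(target_path[0], dtype=np.float64)
    vel = np.zeros(3)
    g = np.asarray(gravity, dtype=np.float64)
    out = []
    for target in target_path:
        # Spring pulls the camera toward the keyframed target; the damper and
        # gravity shape the response (the rig sags slightly at equilibrium).
        force = stiffness * (np.asarray(target) - pos) - damping * vel + mass * g
        vel += (force / mass) * dt
        pos += vel * dt
        out.append(pos.copy())
    return np.array(out)
```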