• Title/Summary/Keyword: virtual camera

Depth Map Using New Single Lens Stereo (단안렌즈 스테레오를 이용한 깊이 지도)

  • Changwun Ku;Junghee Jeon;Kim, Choongwon
    • Journal of the Korea Institute of Information and Communication Engineering
    • /
    • v.4 no.5
    • /
    • pp.1157-1163
    • /
    • 2000
  • In this paper, we present a novel and practical stereo vision system that uses only one camera and four mirrors placed in front of the camera. The equivalent of a stereo pair of images is formed as the left and right halves of a single CCD image by the four mirrors placed in front of the lens of a CCD camera. An arbitrary object point in 3D space is transformed into two virtual points by the four mirrors. As in a conventional stereo system, the displacement between the two conjugate image points of the two virtual points is directly related to the depth of the object point. This system has the following advantages over traditional two-camera stereo: identical system parameters, easy calibration, and easy acquisition of stereo data.

  • PDF
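
The depth relation the abstract relies on can be sketched as a short example; the focal length, virtual baseline, and disparity values below are hypothetical, not taken from the paper:

```python
def depth_from_disparity(focal_px, baseline_m, disparity_px):
    """Pinhole-stereo relation: Z = f * B / d.

    The four-mirror system forms an equivalent stereo pair on one CCD,
    so the same relation applies with the virtual baseline B between
    the two virtual viewpoints.
    """
    if disparity_px <= 0:
        raise ValueError("point at infinity or invalid match")
    return focal_px * baseline_m / disparity_px

# Hypothetical values: 800 px focal length, 10 cm virtual baseline,
# 16 px measured disparity -> 5.0 m depth.
z = depth_from_disparity(800.0, 0.10, 16.0)
```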

An Efficient Navigation of Volume Dataset Using z-Buffer (z-버퍼를 이용한 효율적인 볼륨 데이터 항행기법)

  • Kim, Hwa-Jin;Shin, Byeong-Seok
    • Journal of the Korea Computer Graphics Society
    • /
    • v.8 no.1
    • /
    • pp.29-35
    • /
    • 2002
  • In virtual endoscopy, it is important to produce high-quality perspective images in real time. However, it is even more important to devise a navigation method that lets a virtual camera move through human cavities such as the colon and bronchus without collision, and that lets the user control the camera intuitively. We propose an efficient navigation method that generates a 2D depth map while rendering the current frame, then determines the position and direction of the camera using that depth information. It offers collision-free navigation and allows us to control the camera as we want. It also requires no preprocessing step and no additional data structures.

  • PDF
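
The steering idea, reusing the current frame's depth map to move toward the deepest open region, might be sketched as follows; the toy depth map and the pick-the-deepest-pixel heuristic are illustrative assumptions, not the paper's exact rule:

```python
def next_view_direction(depth_map):
    """Return (row, col) of the deepest pixel in a 2D depth map.

    Steering toward the deepest region keeps the camera away from
    cavity walls, giving collision-free movement without extra data
    structures or a preprocessing step.
    """
    best, best_depth = None, -1.0
    for r, row in enumerate(depth_map):
        for c, d in enumerate(row):
            if d > best_depth:
                best_depth = d
                best = (r, c)
    return best

# Toy 3x3 depth map; the deepest pixel (0.9) is at row 1, col 2.
target = next_view_direction([[0.2, 0.3, 0.1],
                              [0.4, 0.5, 0.9],
                              [0.3, 0.2, 0.1]])
```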

Performance Evaluation of ARCore Anchors According to Camera Tracking

  • Shinhyup Lee;Leehwan Hwang;Seunghyun Lee;Taewook Kim;Soonchul Kwon
    • International Journal of Internet, Broadcasting and Communication
    • /
    • v.15 no.4
    • /
    • pp.215-222
    • /
    • 2023
  • Augmented reality (AR), which integrates virtual media into reality, is increasingly utilized across various industrial sectors, thanks to advancements in 3D graphics and mobile device technologies. The IT industry is thus carrying out active R&D on AR platforms. Google plays a significant role in the AR landscape, with a focus on ARCore services. An essential aspect of ARCore is the use of anchors, which serve as reference points that help maintain the position and orientation of virtual objects within the physical environment. However, if the accuracy of anchor positioning is suboptimal when running AR content, it can significantly diminish the user's immersive experience. In this study, we assess the performance of these anchors. To conduct the performance evaluation, virtual 3D objects matching the shape and size of real-world objects were positioned to overlap with their physical counterparts. Images of both real and virtual objects were captured from five distinct camera trajectories, and ARCore's performance was analyzed by examining the differences between these captured images.
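
The abstract does not specify its difference measure; a minimal per-pixel metric one could use for such a real-versus-virtual comparison, purely as an assumed example, is:

```python
def mean_abs_diff(img_a, img_b):
    """Mean absolute pixel difference between two equal-sized
    grayscale images (lists of rows). A lower score means the
    anchored virtual object stayed better aligned with its
    physical counterpart along the camera trajectory."""
    total, count = 0, 0
    for row_a, row_b in zip(img_a, img_b):
        for a, b in zip(row_a, row_b):
            total += abs(a - b)
            count += 1
    return total / count

real    = [[10, 10], [10, 10]]
virtual = [[10, 12], [ 8, 10]]
score = mean_abs_diff(real, virtual)  # (0 + 2 + 2 + 0) / 4 = 1.0
```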

Study of Image Production using Steadicam Effects for 3D Camera (3D 카메라 기반 스테디캠 효과를 적용한 영상제작에 관한연구)

  • Lee, Junsang;Park, Sungdae;Lee, Imgeun
    • Journal of the Korea Institute of Information and Communication Engineering
    • /
    • v.18 no.12
    • /
    • pp.3035-3041
    • /
    • 2014
  • The steadicam effect is widely used in 3D animation production for natural camera movement. The conventional method for steadicam effects uses keyframe animation, a tedious and time-consuming process with which it is difficult to simulate natural real-world camera movement. In this paper, we propose a novel method for representing steadicam effects on the virtual camera of a 3D animation. We modeled a real-world camera in the Maya production tool, considering gravity, mass, and elasticity. The model is implemented in Python and applied directly to the Maya platform as a filter module. The proposed method reduces production time and improves the production environment. It also produces more natural and realistic footage that maximizes visual effects.
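
A physically based virtual camera of this kind is typically driven by a mass-spring-damper model; the 1D sketch below (with hypothetical mass, stiffness, and damping values) illustrates the idea, not the paper's actual Maya filter module:

```python
def steadicam_smooth(targets, mass=1.0, stiffness=8.0, damping=4.0, dt=0.1):
    """Smooth a 1D camera track with a mass-spring-damper model,
    mimicking how a physical steadicam arm filters operator motion.
    Each step integrates F = k*(target - pos) - c*vel explicitly."""
    pos, vel = targets[0], 0.0
    out = []
    for target in targets:
        force = stiffness * (target - pos) - damping * vel
        vel += (force / mass) * dt
        pos += vel * dt
        out.append(pos)
    return out

# A step in the target path is followed gradually, not instantly,
# which is what removes the jerky look of raw keyframed motion.
path = steadicam_smooth([0.0, 1.0, 1.0, 1.0, 1.0])
```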

Synthesis of Multi-View Images Based on a Convergence Camera Model

  • Choi, Hyun-Jun
    • Journal of information and communication convergence engineering
    • /
    • v.9 no.2
    • /
    • pp.197-200
    • /
    • 2011
  • In this paper, we propose a multi-view stereoscopic image synthesis algorithm for a 3DTV system that uses depth information together with an RGB texture from a depth camera. The proposed algorithm synthesizes the multi-view images that a virtual convergence camera model could generate. Experimental results show that the proposed algorithm outperforms conventional methods.
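
View synthesis from depth generally works by shifting texture pixels according to their disparity; a deliberately simplified one-scanline sketch (the linear disparity values and hole handling are assumptions, not the paper's algorithm) is:

```python
def synthesize_view(texture, disparity, shift):
    """Naive depth-image-based rendering for one scanline: each
    pixel moves horizontally by shift * disparity, as seen from a
    displaced virtual camera. Disoccluded holes are left as None;
    a real system would inpaint or blend them."""
    width = len(texture)
    out = [None] * width
    for x in range(width):
        nx = x + int(round(shift * disparity[x]))
        if 0 <= nx < width:
            out[nx] = texture[x]
    return out

# Near pixels (disparity 1) shift right by one; 'd' falls off the edge,
# and its old position becomes a hole.
line = synthesize_view(['a', 'b', 'c', 'd'], [0, 0, 1, 1], 1)
```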

A Development of The Remote Robot Control System with Virtual Reality Interface System (가상현실과 결합된 로봇제어 시스템의 구현방법)

  • 김우경;김훈표;현웅근
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference
    • /
    • 2003.10a
    • /
    • pp.320-324
    • /
    • 2003
  • Recently, virtual reality has been applied in various fields of industry. In this paper, we control the motion of a real robot through interface manipulation in the virtual world. We created a virtual robot using a 3D graphics tool and reproduced an image resembling the real robot by applying textures with Direct3D components. Both the real robot and the virtual robot are controlled by a joystick. The developed system consists of a robot controller with a vision system and a host PC program. The robot and camera can each move with two degrees of freedom by independent remote control using a user-friendly joystick. The environment is recognized by the vision system and ultrasonic sensors. The visual image and command data are transmitted through 900 MHz and 447 MHz RF controllers, respectively. When the user sends robot control commands through the simulator, the transmitter/receiver operates at up to 500 meters outdoors at a rate of 4800 bps in half-duplex mode via the 447 MHz radio frequency module.

  • PDF

A Real-time Augmented Video System using Chroma-Pattern Tracking (색상패턴 추적을 이용한 실시간 증강영상 시스템)

  • 박성춘;남승진;오주현;박창섭
    • Journal of Broadcast Engineering
    • /
    • v.7 no.1
    • /
    • pp.2-9
    • /
    • 2002
  • Recently, VR (virtual reality) applications such as virtual studios and virtual characters have been widely used in TV programs, and AR (augmented reality) applications are also drawing increasing interest. This paper introduces a virtual screen system, a new AR application for broadcasting. The virtual screen system augments video in real time by tracking a chroma-patterned moving panel. We have recently developed such a system, 'K-vision'. It enables the user to hold and move a simple panel on which live video, pictures, or 3D graphics can appear. All the images seen on the panel change in the correct perspective, in real time, according to the movements of the camera and of the user holding the panel. To track the panel, we use computer vision techniques such as blob analysis and feature tracking. K-vision works with any type of camera, requiring no special add-ons, no sensor attachments to the panel, and no calibration procedures. We are using K-vision in TV programs such as election coverage, documentaries, and entertainment shows.
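
Blob analysis on a chroma pattern can be illustrated with a minimal color-threshold-and-centroid routine; the target color, tolerance, and tiny image below are hypothetical, and the real system would track many pattern blobs per frame:

```python
def blob_centroid(image, target, tol):
    """Mark pixels whose RGB value is within tol (per channel) of the
    target chroma color, then return the centroid (row, col) of the
    marked pixels, or None if nothing matches. The centroid of each
    pattern blob is what a tracker would feed to pose estimation."""
    rows = cols = count = 0
    tr, tg, tb = target
    for r, row in enumerate(image):
        for c, (pr, pg, pb) in enumerate(row):
            if abs(pr - tr) <= tol and abs(pg - tg) <= tol and abs(pb - tb) <= tol:
                rows += r
                cols += c
                count += 1
    if count == 0:
        return None
    return (rows / count, cols / count)

# A 2x2 image whose right column is chroma blue: centroid (0.5, 1.0).
blue = (0, 0, 255)
img = [[(255, 255, 255), blue],
       [(255, 255, 255), blue]]
center = blob_centroid(img, blue, tol=30)
```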

The Extraction of Camera Parameters using Projective Invariance for Virtual Studio (가상 스튜디오를 위한 카메라 파라메터의 추출)

  • Han, Seo-Won;Eom, Gyeong-Bae;Lee, Jun-Hwan
    • The Transactions of the Korea Information Processing Society
    • /
    • v.6 no.9
    • /
    • pp.2540-2547
    • /
    • 1999
  • The chroma-key method is one of the key technologies for realizing a virtual studio: the blue portions of an image captured in the studio are replaced with a computer-generated or real image. The replaced image must change according to the studio camera parameters in order to merge naturally with the non-blue portions of the captured image. This paper proposes a novel method to extract camera parameters by recognizing pentagonal patterns painted on a blue screen. We extract corresponding points between the blue screen and a captured image using the projective invariant features of a pentagon, then calculate camera parameters from those corresponding points using a modification of Tsai's method. Experimental results indicate that the proposed method is more accurate than the conventional method and can process about twelve video frames per second on a Pentium-MMX processor with a CPU clock of 166 MHz.

  • PDF
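
The classic projective invariant behind this kind of pattern matching is the cross-ratio, which is preserved under any projective transformation; the specific invariant the paper computes from the pentagon may differ, but the principle can be shown on collinear points:

```python
def cross_ratio(a, b, c, d):
    """Cross-ratio of four collinear points given by 1D coordinates:
    ((c-a)/(c-b)) / ((d-a)/(d-b)). Because it survives projection,
    features measured on the blue screen can be matched to their
    projections in the captured image."""
    return ((c - a) / (c - b)) / ((d - a) / (d - b))

# A projective map of the line, x -> (2x + 1) / (x + 3), leaves the
# cross-ratio unchanged.
pts = [0.0, 1.0, 2.0, 4.0]
mapped = [(2 * x + 1) / (x + 3) for x in pts]
assert abs(cross_ratio(*pts) - cross_ratio(*mapped)) < 1e-9
```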

Implementation of Real-time Virtual Touch Recognition System in Embedded System (임베디드 환경에서 실시간 가상 터치 인식 시스템의 구현)

  • Kwon, Soon-Kak;Lee, Dong-Seok
    • Journal of Korea Multimedia Society
    • /
    • v.19 no.10
    • /
    • pp.1759-1766
    • /
    • 2016
  • We can implement a virtual touch recognition system by mounting the virtual touch algorithm on an embedded device connected to a depth camera. Since computing performance is limited in embedded systems, real-time virtual touch recognition is difficult when the resolution of the depth image is large. To resolve this problem, this paper improves the binarization and labeling algorithms that occupy most of the processing time in virtual touch recognition: binarization and labeling are performed only in the necessary regions rather than over the whole picture. By applying the proposed algorithm, the system can recognize virtual touches in real time, at about 31 ms per frame, on a depth image with 640×480 resolution.
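
The ROI-restricted binarization and labeling can be sketched as follows; the flood-fill labeling, 4-connectivity, and sample depth values are illustrative assumptions:

```python
def label_touches(depth, roi, touch_max):
    """Binarize and label only inside the region of interest.

    depth: 2D list of depth values; roi: (r0, r1, c0, c1) half-open
    bounds; touch_max: depth below which a pixel counts as a touch.
    Returns the number of 4-connected touch blobs. Scanning only the
    ROI is the speed-up the paper describes: pixels outside it are
    never visited, neither for binarization nor for labeling.
    """
    r0, r1, c0, c1 = roi
    labels = {}
    next_label = 0
    for r in range(r0, r1):
        for c in range(c0, c1):
            if depth[r][c] >= touch_max or (r, c) in labels:
                continue
            stack = [(r, c)]          # flood-fill one blob in the ROI
            while stack:
                y, x = stack.pop()
                if not (r0 <= y < r1 and c0 <= x < c1):
                    continue
                if (y, x) in labels or depth[y][x] >= touch_max:
                    continue
                labels[(y, x)] = next_label
                stack += [(y + 1, x), (y - 1, x), (y, x + 1), (y, x - 1)]
            next_label += 1
    return next_label

# Two separate near-depth blobs inside the ROI -> two touches.
d = [[9, 9, 9, 9],
     [9, 1, 9, 1],
     [9, 1, 9, 9]]
n = label_touches(d, (0, 3, 0, 4), touch_max=5)
```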

Study on Distortion and Field of View of Contents in VR HMD

  • Son, Hojun;Jeon, Hyoung joon;Kwon, Soonchul
    • International journal of advanced smart convergence
    • /
    • v.6 no.1
    • /
    • pp.18-25
    • /
    • 2017
  • Recently, VR HMDs (virtual reality head-mounted displays) have been utilized for virtual training, entertainment, vision therapy, and optometry. In particular, virtual reality contents are increasingly used for vision therapy and optometry, so high-quality contents that approach natural everyday vision are required. It is therefore necessary to study content production according to the optical characteristics of the VR HMD. The purpose of this paper is to suggest a proper FOV (field of view) for contents according to the distortion rate. We produced virtual reality contents and obtained distorted images with a virtual camera; the distortion rate is calculated from these distorted images. It is shown that the optimal FOV of VR content with minimum distortion is 90~100°. The results of this study are expected to be applied to the production of high-quality contents.
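
A distortion rate over image radius can be computed with a single-coefficient radial model; both the model and the coefficient below are assumptions for illustration, since the abstract does not state the paper's formula:

```python
def distortion_rate(r, k1):
    """Percentage displacement of a point at normalized radius r
    under a one-coefficient radial distortion model:
    r_d = r * (1 + k1 * r^2), rate = (r_d - r) / r * 100.
    Negative values correspond to barrel distortion, which HMD
    optics introduce and content must pre-compensate."""
    r_d = r * (1.0 + k1 * r * r)
    return (r_d - r) / r * 100.0

# Hypothetical barrel coefficient for an HMD lens: a point at the
# image edge (r = 1.0) is displaced inward by 20 percent.
rate = distortion_rate(1.0, -0.2)
```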