• Title/Abstract/Keywords: Virtual camera

Search results: 479 items (processing time: 0.025 s)

Construction of Virtual Images for a Benchmark Test of 3D-PTV Algorithms for Flows

  • Hwang, Tae-Gyu;Doh, Deog-Hee;Hong, Seong-Dae;Kenneth D. Kihm
    • Journal of Advanced Marine Engineering and Technology
    • /
    • Vol. 28, No. 8
    • /
    • pp.1185-1194
    • /
    • 2004
  • Virtual images for PIV are produced for the construction of a benchmark test tool for PTV systems. Camera parameters obtained from an actual experiment are used to construct the virtual images, and LES (Large Eddy Simulation) data sets of a channel flow are used to generate them. Using the virtual images and the camera parameters, three-dimensional velocity vectors are obtained for a channel flow. The capabilities of a 3D-PTV algorithm are investigated by comparing the results obtained from the virtual images with those from an actual measurement of the channel flow.
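The projection step behind such virtual-image generation can be illustrated with a minimal pinhole-camera sketch (the parameter values below are hypothetical; the actual system uses calibrated parameters from the experiment and models additional effects such as lens distortion):

```python
def project_point(p_world, R, t, f, cx, cy):
    """Project a 3D particle position into pixel coordinates with a
    simple pinhole model (no lens distortion). R is a 3x3 rotation
    given as a list of rows, t a translation vector, f the focal
    length in pixels, (cx, cy) the image center."""
    # camera coordinates: p_cam = R @ p_world + t
    p_cam = [sum(R[i][j] * p_world[j] for j in range(3)) + t[i] for i in range(3)]
    u = f * p_cam[0] / p_cam[2] + cx
    v = f * p_cam[1] / p_cam[2] + cy
    return u, v

# identity pose, particle 1 m in front of the camera, 0.1 m to the right
R = [[1, 0, 0], [0, 1, 0], [0, 0, 1]]
u, v = project_point([0.1, 0.0, 1.0], R, [0, 0, 0], f=800, cx=320, cy=240)
print(round(u, 1), round(v, 1))  # particle appears 80 px right of the image center
```

Rendering every LES particle position through such a projection, for each calibrated camera, yields one synthetic particle image per virtual camera.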

Multiple Camera Calibration for Panoramic 3D Virtual Environment (파노라믹 3D가상 환경 생성을 위한 다수의 카메라 캘리브레이션)

  • 김세환;김기영;우운택
    • Journal of the Institute of Electronics Engineers of Korea CI
    • /
    • Vol. 41, No. 2
    • /
    • pp.137-148
    • /
    • 2004
  • In this paper, we propose a new camera calibration method for rotating multi-view cameras to generate an image-based panoramic 3D virtual environment. Since calibration accuracy worsens as the distance between the camera and the calibration pattern increases, conventional camera calibration algorithms are not suitable for panoramic 3D VE generation. To remedy this problem, the geometric relationship among all lenses of a multi-view camera is used for intra-camera calibration, and the geometric relationship among the multiple cameras is used for inter-camera calibration. First, camera parameters for all lenses of each multi-view camera are obtained by applying Tsai's algorithm. In intra-camera calibration, the extrinsic parameters are compensated by iteratively reducing the discrepancy between estimated and actual distances, where the estimated distances are calculated using the extrinsic parameters of every lens. Inter-camera calibration arranges the multiple cameras in a geometric relationship; it exploits the Iterative Closest Point (ICP) algorithm using back-projected 3D point clouds. Finally, by repeatedly applying intra/inter-camera calibration to all lenses of the rotating multi-view cameras, we can obtain improved extrinsic parameters at every rotated position for a middle-range distance. Consequently, the proposed method can be applied to the stitching of 3D point clouds for panoramic 3D VE generation, and it may also be adopted in various 3D AR applications.
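The intra-camera compensation idea, iteratively shrinking the gap between an estimated and a measured camera-to-pattern distance, can be sketched as follows (a deliberately simplified, hypothetical version that only rescales the translation; the paper's method adjusts the full extrinsic parameters of every lens):

```python
def compensate_extrinsics(t_est, d_actual, tol=1e-6, max_iter=100):
    """Iteratively rescale an estimated camera translation so that
    the camera-to-pattern distance it implies matches the measured
    distance. Hypothetical simplification: only the translation
    magnitude is adjusted, not rotation."""
    for _ in range(max_iter):
        d_est = sum(c * c for c in t_est) ** 0.5  # distance implied by t_est
        if abs(d_actual - d_est) < tol:
            break
        scale = d_actual / d_est
        t_est = [c * scale for c in t_est]  # shrink the discrepancy
    return t_est

# Tsai's algorithm underestimated the translation; true distance is 10 m
t = compensate_extrinsics([3.0, 0.0, 4.0], d_actual=10.0)
print([round(c, 3) for c in t])  # translation rescaled from length 5 to length 10
```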

Implementation of Virtual Reality Immersion System using Motion Vectors (모션벡터를 이용한 가상현실 체험 시스템의 구현)

  • 서정만;정순기
    • Journal of the Korea Society of Computer and Information
    • /
    • Vol. 8, No. 3
    • /
    • pp.87-93
    • /
    • 2003
  • The purpose of this research is to develop a virtual reality system that enables users to actually experience virtual reality through the human visual sense. The three-step search (TSS) algorithm was applied to trace motion in moving pictures. By applying TSS, multiple motion vectors could be calculated from the moving picture, and the camera's motion parameters were then obtained from the relationships between those motion vectors. To experience virtual reality by synchronizing the camera's acceleration with the simulator's movements, the relationship between the camera's acceleration values and the simulator's movements was analyzed, and the result was applied to neural network training. It was shown that the proposed virtual reality immersion system can dynamically control the movement of the moving picture and can operate the simulator quite similarly to the real movements in the moving picture.

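The TSS block matching mentioned above can be sketched as follows (a generic textbook formulation of the three-step search, not the paper's exact implementation):

```python
def sad(frame_a, frame_b, ax, ay, bx, by, bs):
    """Sum of absolute differences between two bs x bs blocks."""
    return sum(abs(frame_a[ay + i][ax + j] - frame_b[by + i][bx + j])
               for i in range(bs) for j in range(bs))

def three_step_search(cur, ref, x, y, bs=4, step=4):
    """Estimate the motion vector of the block at (x, y) in `cur` by
    TSS: probe the 8 neighbours at the current step size, move to the
    best candidate, halve the step, and repeat until step < 1."""
    mx, my = 0, 0
    h, w = len(ref), len(ref[0])
    while step >= 1:
        best = sad(cur, ref, x, y, x + mx, y + my, bs)
        bmx, bmy = mx, my
        for dy in (-step, 0, step):
            for dx in (-step, 0, step):
                nx, ny = x + mx + dx, y + my + dy
                if 0 <= nx <= w - bs and 0 <= ny <= h - bs:
                    cost = sad(cur, ref, x, y, nx, ny, bs)
                    if cost < best:
                        best, bmx, bmy = cost, mx + dx, my + dy
        mx, my = bmx, bmy
        step //= 2
    return mx, my

# synthetic frames: a distinct 4x4 pattern shifted by (4, 4) between frames
ref = [[0] * 16 for _ in range(16)]
cur = [[0] * 16 for _ in range(16)]
for i in range(4):
    for j in range(4):
        ref[8 + i][8 + j] = 10 + 4 * i + j
        cur[4 + i][4 + j] = 10 + 4 * i + j
print(three_step_search(cur, ref, 4, 4))  # recovers the (4, 4) motion vector
```

Collecting such vectors over many blocks gives the motion-vector field from which the camera's motion parameters are estimated.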

The effectiveness of HMD-based virtual environments through 3D camera for hotel room tour

  • Kim, Ki Han;Lee, Junsoo;Koo, Choongwan;Cha, Seung Hyun
    • International conference on construction engineering and project management
    • /
    • The 8th International Conference on Construction Engineering and Project Management
    • /
    • pp.117-121
    • /
    • 2020
  • Many hotel customers obtain information from hotel websites to find the best alternative. One of the crucial pieces of information for this choice is the spatial/visual information about hotel rooms. However, hotel websites provide only photographs showing representative room features, which may not be sufficient to give customers a full understanding of a hotel room. HMD-based 3D virtual environments (HVE) created with a 3D camera could improve customers' experience of hotel rooms by providing full virtual tours. However, to the best of our knowledge, whether an HVE can provide customers with a perception of spatial/visual information similar to that of a physical hotel room remains unproven. The present study therefore aims to verify how similar and reliable the information an HVE provides to hotel customers is, relative to a physical hotel room, in comparison with a hotel website with 2D photographs and with a display-based 3D virtual environment. For this purpose, this study conducted a comparative experiment investigating perception of the three environments. As a result, the study found that the HVE is more effective at providing spatial/visual information similar to that of an actual hotel room. In addition, the HVE increases customers' perception of the reliability of the information, the quality of the hotel room, and their intention to book.


Verification of Camera-Image-Based Target-Tracking Algorithm for Mobile Surveillance Robot Using Virtual Simulation (가상 시뮬레이션을 이용한 기동형 경계 로봇의 영상 기반 목표추적 알고리즘 검증)

  • Lee, Dong-Youm;Seo, Bong-Cheol;Kim, Sung-Soo;Park, Sung-Ho
    • Transactions of the Korean Society of Mechanical Engineers A
    • /
    • Vol. 36, No. 11
    • /
    • pp.1463-1471
    • /
    • 2012
  • In this study, a 3-axis camera system design is proposed for application to an existing 2-axis surveillance robot, together with a camera-image-based target-tracking algorithm for this robot. In the algorithm, the heading direction vector of the camera system in the mobile surveillance robot is obtained from the position error between the center of the viewfinder and the center of the object in the camera image. Using this heading direction vector, the desired pan and tilt angles for target tracking and the desired roll angle for stabilizing the camera image are obtained through inverse kinematics. The algorithm has been validated using a virtual simulation model based on MATLAB and ADAMS, by checking the robot's movement in response to the target motion and the virtual image error in the viewfinder.
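The mapping from the viewfinder pixel error to desired pan/tilt angles can be sketched as follows (a hypothetical simplification: the heading direction vector through the error pixel is (err_x, err_y, f) in camera coordinates; the paper's full inverse kinematics also produces a roll angle for image stabilization):

```python
import math

def desired_pan_tilt(err_x, err_y, f):
    """Turn the pixel offset between the viewfinder center and the
    target center into desired pan/tilt angles, given the focal
    length f in pixels. The heading direction vector through the
    target pixel is (err_x, err_y, f) in camera coordinates."""
    pan = math.atan2(err_x, f)                       # rotate about the vertical axis
    tilt = math.atan2(err_y, math.hypot(err_x, f))   # then elevate toward the target
    return pan, tilt

# target 100 px to the right of center, focal length 800 px
pan, tilt = desired_pan_tilt(100, 0, 800)
print(round(math.degrees(pan), 2), round(math.degrees(tilt), 2))  # 7.13 0.0
```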

A Virtual Environment for Optimal use of Video Analytic of IP Cameras and Feasibility Study (IP 카메라의 VIDEO ANALYTIC 최적 활용을 위한 가상환경 구축 및 유용성 분석 연구)

  • Ryu, Hong-Nam;Kim, Jong-Hun;Yoo, Gyeong-Mo;Hong, Ju-Yeong;Choi, Byoung-Wook
    • Journal of the Korean Institute of Illuminating and Electrical Installation Engineers
    • /
    • Vol. 29, No. 11
    • /
    • pp.96-101
    • /
    • 2015
  • In recent years, research on the optimal placement of CCTV (closed-circuit television) cameras via architectural modeling has been conducted. However, the application of the VA (video analytics) function of IP (Internet Protocol) cameras to analyzing surveillance coverage through actual human movement has not been studied. This paper compares two methods, using data captured from real-world cameras and data acquired from a virtual environment. Using real cameras, we developed a GUI (graphical user interface) so that logfiles stored hourly and daily through the VA functions can be used commercially, for example for the placement of products inside a shop. The virtual environment was constructed to emulate the real world, including the building structure and the camera with its specifications. Moreover, suitable placement of the camera is determined by recognizing obstacles and counting the number of people within the camera's range of view. This research aims to overcome the time and economic constraints of installing surveillance cameras in real-world environments and to study the feasibility of the virtual environment.
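A minimal version of the coverage test, counting people inside a camera's field of view on a 2D floor plan, might look like this (hypothetical; obstacle occlusion, which the paper's virtual environment models, is omitted here):

```python
import math

def in_camera_view(cam_pos, cam_heading_deg, fov_deg, max_range, point):
    """Check whether a point on a 2D floor plan falls inside a
    camera's angular field of view and detection range."""
    dx, dy = point[0] - cam_pos[0], point[1] - cam_pos[1]
    if math.hypot(dx, dy) > max_range:
        return False
    angle = math.degrees(math.atan2(dy, dx))
    # signed angular difference to the camera heading, wrapped to [-180, 180)
    diff = (angle - cam_heading_deg + 180) % 360 - 180
    return abs(diff) <= fov_deg / 2

# camera at the origin, facing +x, 90-degree FOV, 10 m range
people = [(3, 0), (0, 3), (12, 0)]
visible = sum(in_camera_view((0, 0), 0, 90, 10, p) for p in people)
print(visible)  # (0,3) is outside the FOV, (12,0) is out of range
```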

Optimal Camera Placement Learning of Multiple Cameras for 3D Environment Reconstruction (3차원 환경 복원을 위한 다수 카메라 최적 배치 학습 기법)

  • Kim, Ju-hwan;Jo, Dongsik
    • Smart Media Journal
    • /
    • Vol. 11, No. 9
    • /
    • pp.75-80
    • /
    • 2022
  • Recently, research and development on immersive virtual reality (VR) technology that provides a realistic experience has been widely conducted. To provide a realistic experience for VR participants, virtual environments should consist of highly realistic environments created using 3D reconstruction. In this paper, to acquire 3D information of a real space using multiple cameras during the reconstruction process, we propose a novel method of optimal camera placement that minimizes distortion of the 3D information and enables accurate reconstruction. Through our approach, real 3D information can be obtained with minimal error during environment reconstruction, and the created virtual environment can provide a more immersive experience.

Head tracking system using image processing (영상처리를 이용한 머리의 움직임 추적 시스템)

  • 박경수;임창주;반영환;장필식
    • Journal of the Ergonomics Society of Korea
    • /
    • Vol. 16, No. 3
    • /
    • pp.1-10
    • /
    • 1997
  • This paper is concerned with the development and evaluation of a camera calibration method for a real-time head tracking system. Tracking of head movements is important in the design of eye-controlled human/computer interfaces and in the area of virtual environments. We propose a video-based head tracking system. A camera was mounted on the subject's head, taking the front view containing eight 3-dimensional reference points (passive retro-reflecting markers) fixed at known positions on a computer monitor. The reference points were captured by an image processing board and used to calculate the 3-dimensional position and orientation of the camera. A suitable camera calibration method providing accurate extrinsic camera parameters is proposed. The method has three steps. In the first step, the image center is calibrated using the method of varying focal length. In the second step, the focal length and the scale factor are calibrated from the Direct Linear Transformation (DLT) matrix obtained from the known position and orientation of the camera. In the third step, the position and orientation of the camera are calculated from the DLT matrix, using the calibrated intrinsic camera parameters. Experimental results showed that the average error of the 3-dimensional camera position is about 0.53 cm, the angular errors of the camera orientation are less than 0.55°, and the data acquisition rate is about 10 Hz. The results of this study can be applied to the tracking of head movements for eye-controlled human/computer interfaces and virtual environments.

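The third calibration step, recovering the camera's world position from its extrinsic parameters, reduces to a small linear identity, sketched here (assuming the usual convention p_cam = R p_world + t):

```python
def camera_position(R, t):
    """Recover the camera's world position from extrinsic parameters:
    with p_cam = R p_world + t, the camera center C satisfies
    R C + t = 0, so C = -R^T t. R is a 3x3 rotation as a list of rows."""
    Rt = [[R[j][i] for j in range(3)] for i in range(3)]  # transpose of R
    return [-sum(Rt[i][j] * t[j] for j in range(3)) for i in range(3)]

# camera translated 2 m along the world z-axis, no rotation
C = camera_position([[1, 0, 0], [0, 1, 0], [0, 0, 1]], [0, 0, -2])
print(C)  # [0, 0, 2]
```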

Point Cloud Generation Method Based on Lidar and Stereo Camera for Creating Virtual Space (가상공간 생성을 위한 라이다와 스테레오 카메라 기반 포인트 클라우드 생성 방안)

  • Lim, Yo Han;Jeong, In Hyeok;Lee, San Sung;Hwang, Sung Soo
    • Journal of Korea Multimedia Society
    • /
    • Vol. 24, No. 11
    • /
    • pp.1518-1525
    • /
    • 2021
  • Due to the growth of the VR industry and the rise of the digital twin industry, the importance of creating 3D data identical to real space is increasing. However, the fact that this requires expert personnel and a huge amount of time is a problem. In this paper, we propose a system that generates point cloud data with the same shape and color as a real space, simply by scanning the space. The proposed system integrates 3D geometric information from a lidar and color information from a stereo camera into one point cloud. Since the number of 3D points generated by the lidar is not enough to express a real space with good quality, some of the pixels of the 2D image generated by the camera are mapped to the correct 3D coordinates to increase the number of points. Additionally, to minimize storage, overlapping points are filtered out so that only one point exists at the same 3D coordinates. Finally, the 6-DoF pose information generated from the lidar point cloud is replaced with the pose generated from the camera image, to position the points more accurately. Experimental results show that the proposed system easily and quickly generates point clouds very similar to the scanned space.
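The overlap-filtering step can be sketched as a voxel-based deduplication (the voxel size and the point tuple layout below are assumptions for illustration, not the paper's exact parameters):

```python
def deduplicate(points, voxel=0.05):
    """Keep at most one point per voxel cell so that overlapping
    lidar/stereo points at (nearly) the same 3D coordinates do not
    inflate the cloud. Each point is (x, y, z, *color)."""
    seen = {}
    for x, y, z, *color in points:
        key = (int(x // voxel), int(y // voxel), int(z // voxel))
        if key not in seen:           # first point in this cell wins
            seen[key] = (x, y, z, *color)
    return list(seen.values())

# two nearly coincident points (lidar + stereo) plus one distinct point
pts = [(0.01, 0.0, 1.0, 200), (0.02, 0.01, 1.0, 190), (0.5, 0.0, 1.0, 50)]
print(len(deduplicate(pts)))  # the first two points share a voxel cell
```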

Development of a Real-time Sensor-based Virtual Imaging System (센서기반 실시간 가상이미징 시스템의 구현)

  • 남승진;오주현;박성춘
    • Journal of Broadcast Engineering
    • /
    • Vol. 8, No. 1
    • /
    • pp.63-71
    • /
    • 2003
  • In sports programs, real-time virtual imaging systems have come into notice as a new technology that can compose information such as team logos, scores, and distances directly onto the playing ground, compensating for the shortcomings of a general character generator. To synchronize graphics with camera movements, two methods are generally used: one uses sensors attached to the camera's moving axes, and the other analyzes the camera video itself. The KBS Technical Research Institute developed the real-time sensor-based virtual imaging system 'VIVA', which uses four sensors on the pan, tilt, zoom, and focus axes and controls a virtual graphic camera in three-dimensional coordinates in real time. In this paper, we introduce the 'VIVA' system and its technology. For accurate camera tracking, we calculated the viewpoint movement caused by zooming, based on optical principal-point variation data, and we considered the field-of-view variation caused not only by zoom but also by focus. We developed the system in a three-dimensional graphics environment, so many useful three-dimensional graphics techniques such as keyframe animation can be used. VIVA was successfully used in both the Busan Asian Games and the 2002 presidential election. We confirmed that it can be used not only in the field but also in studio programs in which the camera is used at closer range.
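The zoom-sensor reading can be turned into a virtual-camera field of view roughly as follows (a minimal sketch: the sensor width is an assumed value for a 2/3-inch broadcast chip, and VIVA additionally corrects for focus-dependent FOV and principal-point shift):

```python
import math

def virtual_fov_deg(focal_mm, sensor_width_mm=9.59):
    """Horizontal field of view implied by the zoom sensor's focal
    length reading, used to drive the virtual graphics camera.
    Standard pinhole relation: FOV = 2 * atan(w / (2 f))."""
    return math.degrees(2 * math.atan(sensor_width_mm / (2 * focal_mm)))

# focal length equal to the sensor width gives a ~53-degree horizontal FOV
print(round(virtual_fov_deg(9.59), 1))
```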