Virtual Cameras


A Movement Instruction System Using Virtual Environment

  • Hatayama, Junichi;Murakoshi, Hideki;Yamaguchi, Toru
    • Proceedings of the Korean Institute of Intelligent Systems Conference
    • /
    • 2003.09a
    • /
    • pp.70-73
    • /
    • 2003
  • This paper proposes a movement instruction system that uses a virtual environment. The system consists of a monitor, cameras, and a PC. A learner is coached by a virtual instructor displayed in the virtual environment as 3D computer graphics on the monitor. The virtual instructor demonstrates sample movements and points out mistakes by recognizing the learner's movement from the pictures the cameras capture. To improve the robustness of the camera information, the system selects the optimum camera inputs based on the learner's movement. This selection is implemented by a fuzzy associative inference system, built from bi-directional associative memory and fuzzy rules, which is well suited to converting obscure information into clear information. We implement and evaluate the movement instruction system.

  • PDF
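The fuzzy camera-selection idea in the abstract above can be sketched roughly as follows. This is a minimal illustration, not the paper's implementation: the camera names, facing angles, and triangular membership function are all illustrative assumptions.

```python
# Hedged sketch of fuzzy rule-based camera selection: pick the camera whose
# viewing axis best matches the learner's movement direction, scored with a
# triangular fuzzy membership. All placements and parameters are assumptions.

def tri(x, a, b, c):
    """Triangular membership function peaking at b over the interval (a, c)."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

# Hypothetical camera placements: each camera faces a direction in degrees.
CAMERA_FACING = {"front": 0.0, "side": 90.0, "back": 180.0}

def select_camera(movement_dir_deg):
    """Fuzzy rule: 'IF movement is toward camera k's axis THEN k is optimal'.
    Membership is 1 when aligned and falls to 0 at a 90-degree mismatch."""
    scores = {}
    for name, facing in CAMERA_FACING.items():
        # Angular difference wrapped into [0, 180].
        diff = abs((movement_dir_deg - facing + 180.0) % 360.0 - 180.0)
        scores[name] = tri(diff, -90.0, 0.0, 90.0)
    return max(scores, key=scores.get), scores
```

A real system would fuse several movement features through the associative memory rather than a single direction angle.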

A Virtual Environment for Optimal use of Video Analytic of IP Cameras and Feasibility Study (IP 카메라의 VIDEO ANALYTIC 최적 활용을 위한 가상환경 구축 및 유용성 분석 연구)

  • Ryu, Hong-Nam;Kim, Jong-Hun;Yoo, Gyeong-Mo;Hong, Ju-Yeong;Choi, Byoung-Wook
    • Journal of the Korean Institute of Illuminating and Electrical Installation Engineers
    • /
    • v.29 no.11
    • /
    • pp.96-101
    • /
    • 2015
  • In recent years, research on the optimal placement of CCTV (Closed-Circuit Television) cameras via architectural modeling has been conducted. However, the use of the VA (Video Analytics) function of IP (Internet Protocol) cameras to analyze surveillance coverage through actual human movement has not been studied. This paper compares two methods: one using data captured from real-world cameras and one using data acquired from a virtual environment. For the real cameras, we developed a GUI (Graphical User Interface) whose VA functions store hourly and daily log files that can be used commercially, for example for product placement inside a shop. The virtual environment was constructed to emulate the real world, including the building structure and the camera with its specifications. Suitable camera placement is then determined by recognizing obstacles and counting the people within the camera's field of view. This research aims to overcome the time and economic constraints of installing surveillance cameras in real-world environments and to assess the feasibility of the virtual environment.

Stereoscopic PIV (스테레오 PIV)

  • Doh, D.H.;Lee, W.J.;Cho, G.R.;Pyun, Y.B.;Kim, D.H.
    • Proceedings of the KSME Conference
    • /
    • 2001.11b
    • /
    • pp.394-399
    • /
    • 2001
  • A new stereoscopic PIV system is introduced. It works with CCD cameras, stereoscopic photogrammetry, and a 3D-PTV principle. Virtual images are produced to construct a benchmark testing tool for PIV techniques. The two cameras are arranged in an angular configuration. Camera calibration and the pair-matching of the three-dimensional velocity vectors are based on the 3D-PTV technique.

  • PDF

Useful Image Back-projection Properties in Cameras under Planar and Vertical Motion (평면 및 수직 운동하는 카메라에서 유용한 영상 역투영 속성들)

  • Kim, Minhwan;Byun, Sungmin
    • Journal of Korea Multimedia Society
    • /
    • v.25 no.7
    • /
    • pp.912-921
    • /
    • 2022
  • Autonomous vehicles equipped with cameras, such as robots, forklifts, or cars, are frequently found at industrial sites and in daily life. These cameras undergo planar motion because the vehicles usually move on a plane; the cameras on forklifts sometimes also move vertically. Cameras under planar and vertical motion provide useful properties for the horizontal and vertical lines that appear frequently in our surroundings. In this paper, we suggest several useful back-projection properties that can be applied to images of horizontal or vertical lines captured by such a camera. The line images are back-projected onto a virtual plane that is parallel to the planar motion plane and keeps the same orientation in the camera coordinate system regardless of camera motion. The back-projected lines on the virtual plane provide useful information about the corresponding world lines, such as line direction, the angle between two horizontal lines, the length ratio of two horizontal lines, and vertical line direction. Through experiments with simple planar polygons, we found that the back-projection properties were useful for correctly estimating the direction and angle of horizontal and vertical lines.
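The back-projection step described in this abstract can be sketched as follows, assuming a pinhole camera with focal length f, principal point (cx, cy), a known pitch angle, and a virtual plane one unit below the camera. These are illustrative assumptions; the paper's actual camera model may differ.

```python
import math

# Hedged sketch of back-projecting an image point onto a virtual plane that is
# parallel to the planar-motion plane. Pinhole model, y-axis pointing down,
# pitch = downward tilt of the camera about its x-axis.

def back_project(u, v, f, cx, cy, pitch, h=1.0):
    """Return the (X, Z) coordinates on the virtual plane y = h (h units
    below the camera) of the world point imaged at pixel (u, v)."""
    # Ray through the pixel in camera coordinates.
    rx, ry, rz = (u - cx) / f, (v - cy) / f, 1.0
    # Rotate the ray into a gravity-aligned frame, undoing the camera pitch.
    c, s = math.cos(pitch), math.sin(pitch)
    lx, ly, lz = rx, ry * c + rz * s, -ry * s + rz * c
    # Intersect the ray with the virtual plane y = h.
    t = h / ly
    return lx * t, lz * t

def line_direction(p0, p1):
    """Direction angle (radians) of a back-projected horizontal line."""
    return math.atan2(p1[1] - p0[1], p1[0] - p0[0])
```

Because the virtual plane is parallel to the world's horizontal planes, angles and length ratios measured between back-projected horizontal lines match those of the world lines, which is the property the paper exploits.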

Development of a Stereoscopic PTV (스테레오 PTV법의 개발)

  • Doh Deog Hee;Lee Won Je;Cho Yong Beom;Pyeon Yong Beom
    • Journal of the Korean Society of Visualization
    • /
    • v.1 no.1
    • /
    • pp.92-97
    • /
    • 2003
  • A new stereoscopic PTV was developed using two CCD cameras and stereoscopic photogrammetry based on a 3D-PTV principle. Virtual images were produced for a benchmark test of the constructed stereoscopic PTV technique. The two cameras were arranged in an angular configuration. Camera calibration and the pair-matching of the three-dimensional velocity vectors were based on a Genetic Algorithm-based 3D-PTV technique. The constructed technique was tested on the standard images of the impinging jet proposed by VSJ, and the results showed good agreement with the original data.

  • PDF
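Once pair-matching has identified corresponding rays from the two cameras, the 3D position is recovered by triangulation. A generic midpoint-triangulation sketch follows; this is textbook geometry, not the paper's Genetic Algorithm-based matcher, and the ray values are illustrative.

```python
# Hedged sketch of stereo triangulation: given one ray from each calibrated
# camera (center + direction), return the midpoint of the shortest segment
# joining the two rays.

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def triangulate(c1, d1, c2, d2):
    """Midpoint of closest approach between rays c1 + t*d1 and c2 + s*d2."""
    w = [x - y for x, y in zip(c1, c2)]
    a, b, c = dot(d1, d1), dot(d1, d2), dot(d2, d2)
    d, e = dot(d1, w), dot(d2, w)
    denom = a * c - b * b          # zero only for parallel rays
    t = (b * e - c * d) / denom
    s = (a * e - b * d) / denom
    p1 = [x + t * y for x, y in zip(c1, d1)]
    p2 = [x + s * y for x, y in zip(c2, d2)]
    return [(x + y) / 2.0 for x, y in zip(p1, p2)]
```

With noise-free rays the two closest points coincide; with real measurements the midpoint averages out the residual skew between the rays.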

WALK-THROUGH VIEW FOR FTV WITH CIRCULAR CAMERA SETUP

  • Uemori, Takeshi;Yendo, Tomohiro;Tanimoto, Masayuki;Fujii, Toshiaki
    • Proceedings of the Korean Society of Broadcast Engineers Conference
    • /
    • 2009.01a
    • /
    • pp.727-731
    • /
    • 2009
  • In this paper, we propose a method to generate a free-viewpoint image using multi-viewpoint images taken by cameras arranged in a circle. We previously proposed a free-viewpoint image generation method based on the Ray-Space method; however, that method cannot generate a walk-through view seen from a virtual viewpoint placed among the objects. The method proposed here realizes such views. It first obtains the positions of the objects using the shape-from-silhouette method and then selects the appropriate cameras that acquired the rays needed for generating a virtual image. A free-viewpoint image can be generated by collecting the rays that pass through the focal point of a virtual camera. When a requested ray is not available, it must be interpolated from neighboring rays; therefore, we estimate the depth of the objects from the virtual camera and interpolate the ray information to generate the image. In experiments with virtual sequences captured at every 6 degrees, we set the virtual camera at a viewpoint of the user's choice and successfully generated the image from that viewpoint.

  • PDF

Authoring Personal Virtual Studio Using Tangible Augmented Reality (탠저블 증강현실을 활용한 개인용 가상스튜디오 저작)

  • Rhee, Gue-Won;Lee, Jae-Yeol;Nam, Ji-Seung;Hong, Sung-Hoon
    • Korean Journal of Computational Design and Engineering
    • /
    • v.13 no.2
    • /
    • pp.77-88
    • /
    • 2008
  • Nowadays, personal users create a variety of multimedia contents and share them with others through various devices over the Internet, since the concept of user-created content (UCC) has been widely accepted as a new paradigm in today's multimedia market and has broken the boundary between content providers and consumers. This paradigm shift has also introduced a new business model that makes it possible for users to create their own multimedia contents for commercial purposes. This paper proposes a tangible virtual studio that uses augmented reality to author multimedia contents easily and intuitively for personal broadcasting and personal content generation. It provides a set of tangible interfaces and devices such as visual markers, cameras, movable and rotatable arms carrying the cameras, and a miniaturized set. These offer an easy-to-use interface in an immersive environment and an easy switching mechanism between the tangible and virtual environments. The paper also discusses how to remove inconsistency between real and virtual objects during AR-enabled visualization with a context-adaptable tracking method. This method not only adjusts the locations of invisible markers by interpolating the locations of existing reference markers, but also removes the jumping effect of movable virtual objects when their references change from one marker to another.
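The marker-interpolation idea in this abstract can be sketched in a translation-only toy form. Assumptions to note: each visible reference marker is taken to store a known fixed offset to the hidden marker, and orientation is ignored, whereas the paper's method also handles pose.

```python
# Hedged, translation-only sketch of estimating an occluded marker's position
# from visible reference markers. Each reference marker carries a fixed,
# pre-measured offset from itself to the hidden marker (an assumption of this
# toy version; the paper's context-adaptable tracker also handles rotation).

def estimate_hidden(visible):
    """visible: list of (position, offset_to_hidden) pairs, each a 3-vector.
    Returns the average of the per-reference estimates of the hidden marker."""
    if not visible:
        raise ValueError("need at least one visible reference marker")
    estimates = [[p + o for p, o in zip(pos, off)] for pos, off in visible]
    n = len(estimates)
    return [sum(e[i] for e in estimates) / n for i in range(3)]
```

Averaging over all visible references also damps the jump that would occur if the estimate switched abruptly from one reference marker to another.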

A study on comparison between 3D computer graphics cameras and actual cameras (3D컴퓨터그래픽스 가상현실 애니메이션 카메라와 실제카메라의 비교 연구 - Maya, Softimage 3D, XSI 소프트웨어와 실제 정사진과 동사진 카메라를 중심으로)

  • Kang, Chong-Jin
    • Cartoon and Animation Studies
    • /
    • s.6
    • /
    • pp.193-220
    • /
    • 2002
  • The world created by computers, with its great expanse and complex, varied forms of expression, provides not simply a place for communication but also a new civilization and a new creative world. Among these developments, 3D computer graphics, 3D animation, and virtual reality technology have been elevated into a new culture and a new genre of art by joining graphic design with computer engineering. In this study, I examine the possibilities, limits, and differences of expression in virtual-reality computer graphics animation by comparing the camera actions and angles of actual still and film cameras with the virtual cameras of 3D computer graphics software - Maya, XSI, and Softimage3D.

  • PDF

Real-Time Panoramic Video Streaming Technique with Multiple Virtual Cameras (다중 가상 카메라의 실시간 파노라마 비디오 스트리밍 기법)

  • Ok, Sooyol;Lee, Suk-Hwan
    • Journal of Korea Multimedia Society
    • /
    • v.24 no.4
    • /
    • pp.538-549
    • /
    • 2021
  • In this paper, we introduce a technique for real-time 360-degree panoramic video streaming with multiple virtual cameras. The proposed technique consists of generating 360-degree panoramic video data via ORB feature-point detection, texture transformation, panoramic video data compression, and RTSP-based video stream transmission. In particular, the generation of the 360-degree panoramic video data and the texture transformation are accelerated with CUDA for complex processing steps such as camera calibration, stitching, blending, and encoding. Our experiments evaluated the frame rate of the transmitted 360-degree panoramic video. The results verified that our technique achieves at least 30 fps at 4K output resolution, which indicates that it can both generate and transmit 360-degree panoramic video data in real time.
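One core step in producing a 360-degree panorama is mapping a viewing direction to a pixel of the equirectangular output. The sketch below is a plain CPU illustration of that mapping (the paper performs this kind of work on CUDA), and the 4096x2048 output resolution is an assumed example.

```python
import math

# Hedged sketch of the direction-to-pixel mapping used when writing into an
# equirectangular 360-degree panorama. Coordinate convention (assumed here):
# x right, y up, z forward; longitude 0 maps to the panorama center.

def dir_to_equirect(d, width=4096, height=2048):
    """Map a 3D viewing direction to (u, v) pixel coordinates in an
    equirectangular panorama of the given size."""
    x, y, z = d
    norm = math.sqrt(x * x + y * y + z * z)
    lon = math.atan2(x, z)            # longitude in (-pi, pi]
    lat = math.asin(y / norm)         # latitude in [-pi/2, pi/2]
    u = (lon / (2.0 * math.pi) + 0.5) * width
    v = (0.5 - lat / math.pi) * height
    return u, v
```

In a full pipeline this mapping runs per output pixel (in the inverse direction) to sample from whichever calibrated camera covers that direction, which is why GPU acceleration matters at 4K.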

Registration of Video Avatar by Comparing Real and Synthetic Images (실제와 합성영상의 비교에 의한 비디오 아바타의 정합)

  • Park Moon-Ho;Ko Hee-Dong;Byun Hye-Ran
    • Journal of KIISE:Computer Systems and Theory
    • /
    • v.33 no.8
    • /
    • pp.477-485
    • /
    • 2006
  • In this paper, a video avatar, made from live video streams captured of a real participant, is used to represent a virtual participant. Using a video avatar increases the sense of reality for participants, but correct registration is also an important issue. We configured the real and virtual cameras to have the same characteristics in order to register the video avatar. Registration between the video avatar captured from the real environment and the virtual environment was resolved by comparing real and synthetic images, which is possible because of the similarity between the real and virtual cameras. The degree of misregistration is represented as an energy, and this energy is then minimized to produce seamless registration. Experimental results show that the proposed method can be used effectively for registering video avatars.
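The energy-minimization idea in this abstract reads, in sketch form, like the following 1-D toy: a grid search over a hypothetical offset parameter, where the energy is the squared difference between a "real" signal and a "synthetic" one rendered at each candidate. The paper minimizes over real camera parameters with images, not 1-D impulses.

```python
# Hedged 1-D toy of registration by energy minimization: the energy is the
# sum of squared differences between a real signal and a synthetic one
# rendered at a candidate offset; the candidate with minimum energy wins.
# render() is a hypothetical stand-in for the paper's synthetic-image renderer.

def energy(real, synth):
    """Sum of squared differences between two equal-length signals."""
    return sum((a - b) ** 2 for a, b in zip(real, synth))

def render(offset, length=10):
    """Hypothetical renderer: a unit impulse at the given offset."""
    return [1.0 if i == offset else 0.0 for i in range(length)]

def register(real, candidates):
    """Return the candidate offset whose rendering minimizes the energy."""
    return min(candidates, key=lambda o: energy(real, render(o)))
```

In the image setting the same pattern holds: render the avatar with candidate camera parameters, compare against the captured frame, and keep the parameters with the lowest energy.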