• Title/Abstract/Keyword: Virtual Cameras


A Movement Instruction System Using Virtual Environment

  • Hatayama, Junichi;Murakoshi, Hideki;Yamaguchi, Toru
    • 한국지능시스템학회:학술대회논문집 / 한국퍼지및지능시스템학회 2003년도 ISIS 2003 / pp.70-73 / 2003
  • This paper proposes a movement instruction system using a virtual environment. The system consists of a monitor, cameras, and a PC. A learner is coached by a virtual instructor displayed in the virtual environment as 3-dimensional computer graphics on the monitor. The virtual instructor demonstrates sample movements and points out mistakes by recognizing the learner's movement from the pictures the cameras capture. To improve the robustness of the information from the cameras, the system selects the optimum camera inputs based on the learner's movement. This selection is implemented by a fuzzy associative inference system, which is built from bi-directional associative memory and fuzzy rules and is well suited to converting obscure information into clear information. We implement and evaluate the movement instruction system.

  • PDF
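
The fuzzy associative inference above is built on a bidirectional associative memory (BAM). As an illustration only, here is a minimal sketch of Kosko-style BAM training and recall on hypothetical bipolar pattern pairs; the actual pattern coding for camera selection is not given in the abstract:

```python
import numpy as np

def bam_train(pairs):
    """Build a Kosko-style BAM weight matrix from bipolar pattern pairs."""
    return sum(np.outer(x, y) for x, y in pairs)

def bam_recall(W, x, steps=10):
    """Iterate x -> y -> x through the weight matrix until the pair stabilizes."""
    for _ in range(steps):
        y = np.sign(x @ W)
        x_new = np.sign(W @ y)
        if np.array_equal(x_new, x):
            break
        x = x_new
    return x, y

# Two hypothetical pairs: movement-feature vector -> camera-selection vector.
pairs = [
    (np.array([1, -1, 1, -1]), np.array([1, -1, -1])),
    (np.array([-1, 1, -1, 1]), np.array([-1, 1, 1])),
]
W = bam_train(pairs)
_, y = bam_recall(W, np.array([1, -1, 1, -1]))  # recalls the stored selection
```

Recall bounces between the two layers until a stored pair is reached, which is what makes BAM tolerant of noisy ("obscure") inputs.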

IP 카메라의 VIDEO ANALYTIC 최적 활용을 위한 가상환경 구축 및 유용성 분석 연구 (A Virtual Environment for Optimal use of Video Analytic of IP Cameras and Feasibility Study)

  • 류홍남;김종훈;류경모;홍주영;최병욱
    • 조명전기설비학회논문지 / 제29권11호 / pp.96-101 / 2015
  • In recent years, research on the optimal placement of CCTV (Closed-Circuit Television) cameras via architectural modeling has been conducted. However, the application of the VA (Video Analytics) function of IP (Internet Protocol) cameras to analyzing surveillance coverage through actual human movement has not been studied. This paper compares two methods, one using data captured from real-world cameras and one using data acquired from a virtual environment. For the real cameras, we develop a GUI (Graphical User Interface) whose hourly and daily VA logfiles can be used commercially for the placement of products inside a shop. The virtual environment was constructed to emulate the real world, including the building structure and the camera with its specifications. Moreover, suitable placement of the camera is determined by recognizing obstacles and counting the number of people within the camera's range of view. This research aims to overcome the time and economic constraints of actually installing surveillance cameras in real-world environments and to assess the feasibility of the virtual environment.
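
Judging a camera placement hinges on which points fall inside its field of view. A toy 2-D coverage test, with hypothetical positions and angles rather than the paper's simulator, could look like:

```python
import math

def in_fov(cam_pos, cam_dir_deg, fov_deg, max_range, point):
    """Return True if a 2-D point lies inside a camera's viewing wedge."""
    dx, dy = point[0] - cam_pos[0], point[1] - cam_pos[1]
    dist = math.hypot(dx, dy)
    if dist > max_range:
        return False  # beyond the camera's useful range
    angle = math.degrees(math.atan2(dy, dx))
    # Wrap the angular difference into [-180, 180) before comparing.
    diff = (angle - cam_dir_deg + 180) % 360 - 180
    return abs(diff) <= fov_deg / 2

# Camera at the origin looking along +x with a 90-degree field of view.
print(in_fov((0, 0), 0, 90, 10, (5, 2)))   # point inside the wedge
print(in_fov((0, 0), 0, 90, 10, (-5, 0)))  # point behind the camera
```

Counting how many people-positions pass this test per candidate placement gives a simple coverage score of the kind the virtual environment is meant to estimate.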

스테레오 PIV (Stereoscopic PIV)

  • 도덕희;이원제;조경래;편용범;김동혁
    • 대한기계학회:학술대회논문집 / 대한기계학회 2001년도 추계학술대회논문집B / pp.394-399 / 2001
  • A new stereoscopic PIV is introduced. The system combines CCD cameras, stereoscopic photogrammetry, and a 3D-PTV principle. Virtual images are produced to construct a benchmark testing tool for PIV techniques. The arrangement of the two cameras is based on angular positioning. The calibration of the cameras and the pair-matching of the three-dimensional velocity vectors are based on the 3D-PTV technique.

  • PDF
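
At the core of stereoscopic photogrammetry is triangulating a 3-D point from two calibrated views. A simplified linear (DLT) triangulation, with assumed projection matrices rather than the paper's calibration, can be sketched as:

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Linear (DLT) triangulation of one point from two 3x4 camera matrices."""
    # Each image observation contributes two homogeneous linear constraints.
    A = np.array([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]            # null vector = homogeneous 3-D point
    return X[:3] / X[3]

# Two hypothetical cameras: identity pose and a 1-unit baseline along x.
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])
X = np.array([0.2, 0.1, 5.0, 1.0])        # ground-truth point
x1 = (P1 @ X)[:2] / (P1 @ X)[2]           # projection in camera 1
x2 = (P2 @ X)[:2] / (P2 @ X)[2]           # projection in camera 2
print(triangulate(P1, P2, x1, x2))        # recovers approx (0.2, 0.1, 5.0)
```

Pair-matching in 3D-PTV then amounts to finding the particle correspondences whose triangulations are geometrically consistent across frames.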

평면 및 수직 운동하는 카메라에서 유용한 영상 역투영 속성들 (Useful Image Back-projection Properties in Cameras under Planar and Vertical Motion)

  • 김민환;변성민
    • 한국멀티미디어학회논문지 / 제25권7호 / pp.912-921 / 2022
  • Autonomous vehicles equipped with cameras, such as robots, forklifts, or cars, are frequently found in industrial sites and daily life. Those cameras undergo planar motion because the vehicles usually move on a plane; the cameras on forklifts sometimes also move vertically. Cameras under planar and vertical motion provide useful properties for horizontal and vertical lines, which can be found easily and frequently in our surroundings. In this paper, some useful back-projection properties are presented that apply to images of horizontal or vertical lines captured by a camera under planar and vertical motion. The line images are back-projected onto a virtual plane that is parallel to the plane of motion and keeps the same orientation in the camera coordinate system regardless of camera motion. The back-projected lines on the virtual plane provide useful information about the corresponding world lines, such as line direction, the angle between two horizontal lines, the length ratio of two horizontal lines, and vertical line direction. Through experiments with simple planar polygons, we found that the back-projection properties were useful for correctly estimating the direction and angle of horizontal and vertical lines.
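
The back-projection in question intersects each image ray with a virtual plane parallel to the motion plane. A minimal pinhole-camera sketch, with hypothetical intrinsics and a y-down camera convention (all numbers are illustrative, not the paper's setup):

```python
import numpy as np

def back_project(K, pixel, plane_y):
    """Intersect a pixel's viewing ray with the horizontal plane y = plane_y.

    Camera at the origin, optical axis along +z, y axis pointing down."""
    ray = np.linalg.inv(K) @ np.array([pixel[0], pixel[1], 1.0])
    t = plane_y / ray[1]          # scale so the ray reaches the plane
    return t * ray                # 3-D point on the virtual plane

K = np.array([[800.0, 0, 320], [0, 800.0, 240], [0, 0, 1]])  # assumed intrinsics
p = back_project(K, (400, 320), plane_y=1.5)  # a floor point seen below center
```

Because the virtual plane keeps a fixed orientation in the camera frame, directions and length ratios of horizontal world lines are preserved among the back-projected points.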

스테레오 PTV법의 개발 (Development of a Stereoscopic PTV)

  • 도덕희;이원제;조용범;편용범
    • 한국가시화정보학회지 / 제1권1호 / pp.92-97 / 2003
  • A new stereoscopic PTV was developed using two CCD cameras and stereoscopic photogrammetry based on a 3D-PTV principle. Virtual images were produced for the benchmark test of the constructed stereoscopic PTV technique. The arrangement of the two cameras was based on angular positioning. The calibration of the cameras and the pair-matching of the three-dimensional velocity vectors were based on the Genetic Algorithm-based 3D-PTV technique. The constructed technique was tested on the standard images of the impinging jet proposed by VSJ. The results obtained by the constructed system showed good agreement with the original data.

  • PDF

WALK-THROUGH VIEW FOR FTV WITH CIRCULAR CAMERA SETUP

  • Uemori, Takeshi;Yendo, Tomohiro;Tanimoto, Masayuki;Fujii, Toshiaki
    • 한국방송∙미디어공학회:학술대회논문집 / 한국방송공학회 2009년도 IWAIT / pp.727-731 / 2009
  • In this paper, we propose a method to generate a free-viewpoint image from multi-viewpoint images taken by cameras arranged in a circle. Previously, we proposed a method to generate a free-viewpoint image based on the Ray-Space method; however, that method cannot generate a walk-through view seen from a virtual viewpoint placed among the objects. The method proposed here realizes the generation of such views. Our method first obtains the positions of objects using the shape-from-silhouette method, then selects the appropriate cameras that acquired the rays needed to generate a virtual image. A free-viewpoint image can be generated by collecting rays that pass through the focal point of a virtual camera. When a requested ray is not available, it must be interpolated from neighboring rays; therefore, we estimate the depth of the objects from the virtual camera and interpolate the ray information to generate the image. In experiments with virtual sequences captured at every 6 degrees, we set the virtual camera at the user's choice and successfully generated the image from that viewpoint.

  • PDF
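
The first stage, shape from silhouette, keeps only the volume cells whose projections fall inside every camera's silhouette. A deliberately tiny 2-D version with two orthographic views and hypothetical silhouettes conveys the carving idea:

```python
import numpy as np

def carve(sil_x, sil_y):
    """2-D shape from silhouette with two orthographic views.

    sil_x[i] is True if row i is occupied when viewed along the x axis;
    sil_y[j] likewise for column j viewed along the y axis. A cell of the
    visual hull survives only if both views see something there."""
    return np.outer(sil_x, sil_y)

sil_x = np.array([False, True, True, False])   # hypothetical silhouette, view 1
sil_y = np.array([False, False, True, True])   # hypothetical silhouette, view 2
hull = carve(sil_x, sil_y)   # visual hull occupies rows 1-2, columns 2-3
```

The real system does this in 3-D with perspective cameras, but the principle is the same: the visual hull is the intersection of the silhouette cones, and it tells the renderer roughly where objects sit so the right rays can be selected.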

탠저블 증강현실을 활용한 개인용 가상스튜디오 저작 (Authoring Personal Virtual Studio Using Tangible Augmented Reality)

  • 이규원;이재열;남지승;홍성훈
    • 한국CDE학회논문집 / 제13권2호 / pp.77-88 / 2008
  • Nowadays personal users create a variety of multimedia contents and share them with others through various devices over the Internet, since the concept of user-created content (UCC) has been widely accepted as a new paradigm in today's multimedia market and has broken down the boundary between content providers and consumers. This paradigm shift has also introduced a new business model that makes it possible for users to create their own multimedia contents for commercial purposes. This paper proposes a tangible virtual studio using augmented reality to author multimedia contents easily and intuitively for personal broadcasting and personal content generation. It provides a set of tangible interfaces and devices such as visual markers, cameras, movable and rotatable arms carrying the cameras, and a miniaturized set. These offer an easy-to-use interface in an immersive environment and an easy switching mechanism between the tangible environment and the virtual environment. This paper also discusses how to remove inconsistency between real objects and virtual objects during AR-enabled visualization with a context-adaptable tracking method. The context-adaptable tracking method not only adjusts the locations of invisible markers by interpolating the locations of existing reference markers, but also removes the jumping effect of movable virtual objects when their reference changes from one marker to another.
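
Estimating an occluded marker from visible reference markers is, at its simplest, a weighted interpolation of known positions. A toy sketch with a hypothetical marker layout and weights (the paper's actual tracking uses full poses, not just 2-D points):

```python
import numpy as np

def interpolate_marker(ref_positions, weights):
    """Estimate a hidden marker as a weighted average of visible references."""
    w = np.asarray(weights, dtype=float)
    w /= w.sum()                               # normalize so weights sum to 1
    return w @ np.asarray(ref_positions, dtype=float)

refs = [(0.0, 0.0), (4.0, 0.0), (0.0, 4.0)]    # visible reference markers
hidden = interpolate_marker(refs, [1, 1, 2])   # weights from the assumed layout
```

Blending the estimate smoothly as the active reference changes is what suppresses the jumping effect the abstract mentions.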

3D컴퓨터그래픽스 가상현실 애니메이션 카메라와 실제카메라의 비교 연구 - Maya, Softimage 3D, XSI 소프트웨어와 실제 정사진과 동사진 카메라를 중심으로 (A study on comparison between 3D computer graphics cameras and actual cameras)

  • 강종진
    • 만화애니메이션 연구 / 통권6호 / pp.193-220 / 2002
  • The world made by computers, with its great expanse and complex, varied expression, provides not simply a place for communication but a new civilization and a new creative world. Among its fields, 3D computer graphics, 3D animation, and virtual-reality technology were sublimated into a new culture and a new genre of art by joining graphic design with computer engineering. In this study, I diagnose the possibilities, limits, and differences of expression in virtual-reality computer graphics animation by comparing the camera actions and angles of actual still and film cameras with those of the virtual cameras in 3D computer graphics software - Maya, XSI, and Softimage3D.

  • PDF

다중 가상 카메라의 실시간 파노라마 비디오 스트리밍 기법 (Real-Time Panoramic Video Streaming Technique with Multiple Virtual Cameras)

  • 옥수열;이석환
    • 한국멀티미디어학회논문지 / 제24권4호 / pp.538-549 / 2021
  • In this paper, we introduce a technique for real-time 360-degree panoramic video streaming with multiple virtual cameras. The proposed technique consists of generating 360-degree panoramic video data using ORB feature-point detection, texture transformation, panoramic video data compression, and RTSP-based video streaming. In particular, the generation of the 360-degree panoramic video data and the texture transformation are accelerated with CUDA for complex processing such as camera calibration, stitching, blending, and encoding. Our experiment evaluated the frames per second (fps) of the transmitted 360-degree panoramic video. The results verified that our technique achieves at least 30 fps at 4K output resolution, which indicates that it can both generate and transmit 360-degree panoramic video data in real time.
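
The blending step of panorama stitching can be sketched as linear feathering across the overlap region. This toy version blends two grayscale strips on the CPU; the paper's pipeline does the equivalent per-pixel work in CUDA on warped camera textures:

```python
import numpy as np

def feather_blend(left, right, overlap):
    """Blend two horizontally overlapping grayscale strips with a linear ramp."""
    w = np.linspace(1.0, 0.0, overlap)            # weight for the left image
    blended = w * left[:, -overlap:] + (1 - w) * right[:, :overlap]
    return np.hstack([left[:, :-overlap], blended, right[:, overlap:]])

left = np.full((2, 6), 100.0)    # hypothetical warped frame from camera A
right = np.full((2, 6), 200.0)   # hypothetical warped frame from camera B
pano = feather_blend(left, right, overlap=4)
```

The ramp hides the seam by fading one image out as the other fades in, which is why blending sits between stitching and encoding in the pipeline.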

실제와 합성영상의 비교에 의한 비디오 아바타의 정합 (Registration of Video Avatar by Comparing Real and Synthetic Images)

  • 박문호;고희동;변혜란
    • 한국정보과학회논문지:시스템및이론 / 제33권8호 / pp.477-485 / 2006
  • In this paper, a video avatar that supplies images of a real participant to the virtual environment in real time is used to represent the participant. A video avatar can raise the fidelity with which the participant is represented, but accurate registration becomes an important issue. For the registration of the video avatar, the characteristics of the camera used in the real environment and of the virtual camera used to generate the virtual environment were adjusted to be identical. Based on the similarity of the adjusted real and virtual cameras, the video avatar acquired in the real environment was registered with the virtual environment by comparing real and synthetic images. In the registration process, the degree of misregistration is expressed as an energy, and a method that minimizes this energy is used. Experiments confirmed that the proposed method can be effectively applied to registering a video avatar in a virtual environment.
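
The registration energy penalizes the mismatch between real and synthetic images. A toy 1-D analogue that grid-searches the offset minimizing the squared difference between two hypothetical signals illustrates the minimization (the paper works on 2-D images and richer pose parameters):

```python
import numpy as np

def register_offset(real, synth, max_shift):
    """Find the integer shift of synth that minimizes squared error vs real."""
    best_shift, best_energy = 0, float("inf")
    for s in range(-max_shift, max_shift + 1):
        shifted = np.roll(synth, s)
        energy = np.sum((real - shifted) ** 2)   # registration energy
        if energy < best_energy:
            best_shift, best_energy = s, energy
    return best_shift

real = np.array([0, 0, 1, 2, 1, 0, 0], dtype=float)
synth = np.roll(real, -2)                 # synthetic view misaligned by 2
shift = register_offset(real, synth, 3)   # search recovers the offset, 2
```

Expressing misregistration as a scalar energy turns alignment into an optimization problem, so any minimizer, from grid search to gradient descent, can drive the avatar into place.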