• Title/Abstract/Keyword: Virtual Cameras

Search results: 111 (processing time: 0.027 s)

Image Based 3D Reconstruction of Texture-less Objects for VR Contents

  • Hafeez, Jahanzeb;Lee, Seunghyun;Kwon, Soonchul;Hamacher, Alaric
    • International Journal of Advanced Smart Convergence / Vol. 6, No. 1 / pp.9-17 / 2017
  • Recent developments in virtual and augmented reality have increased the demand for content in many different fields. One of the fastest ways to create content for VR is 3D modeling of real objects. In this paper we propose a system to reconstruct three-dimensional models of real objects from a set of two-dimensional images, under the assumption that the subject does not have distinct features. We explicitly consider an object that consists of one or more surfaces and radiates constant energy isotropically. We design a low-cost, portable multi-camera rig system capable of capturing images from all cameras simultaneously. To evaluate the performance of the proposed system, a comparison is made between the reconstructed 3D model and a CAD model. A simple algorithm is also proposed to acquire the original texture or color of the subject. Using the best pattern found in the experiments, a 3D model of the PyeongChang Olympic mascot "Soohorang" is created for use as VR content.

A Button-Type User Interface for Stereo Video See-Through

  • 최영주;서용덕
    • Journal of the Korea Computer Graphics Society / Vol. 13, No. 2 / pp.47-54 / 2007
  • This paper proposes a user interface that lets ordinary users easily and conveniently control a computer system, or several processes, in an environment that uses a see-through device displaying video from two cameras. To this end, AR technology is applied to composite virtual buttons onto the displayed video; the position of the hand visible on screen is tracked to determine whether a button has been selected by a finger, and the result is indicated by changing the button's color according to its state. The user can perform the related task simply by watching the screen and moving a finger in the air to select a button.

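The selection mechanism described above — compositing virtual buttons onto the camera view and checking whether the tracked fingertip lies inside one — can be sketched as a screen-space hit test. This is an illustrative reconstruction, not the authors' code; the `Button` type, the two colors, and the fingertip position in screen coordinates are all assumptions.

```python
from dataclasses import dataclass

IDLE = (128, 128, 128)   # idle gray
HOVER = (0, 200, 0)      # highlighted green when touched

@dataclass
class Button:
    x: int
    y: int
    w: int
    h: int
    label: str
    color: tuple = IDLE

def update_buttons(buttons, fingertip):
    """Recolor the button under the fingertip (if any) and return its label."""
    fx, fy = fingertip
    selected = None
    for b in buttons:
        inside = b.x <= fx < b.x + b.w and b.y <= fy < b.y + b.h
        b.color = HOVER if inside else IDLE
        if inside:
            selected = b.label
    return selected
```

In a full pipeline this test would run once per frame, after the hand tracker reports the fingertip position and before the buttons are composited over the stereo video.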

A Study on Shamanistic Expression Method of Performances Using VR Technology: Body Ownership and Gaze

  • Kim, Tae-Eun
    • International Journal of Advanced Smart Convergence / Vol. 7, No. 2 / pp.135-142 / 2018
  • Virtual reality (VR) technology is being used ever more frequently in industry, entertainment, and performance as AR and MR technologies develop. The performing arts also actively use 360° VR technology for the freedom it gives in staging sets and auditoriums. However, systems in which the performers themselves wear VR devices on stage, rather than remaining in the standpoint of bystanders while the audience wears VR head-mounted displays (HMDs) to watch the stage, have rarely been studied. This study investigated the technical feasibility of expression methods that let performers appear on stage wearing VR devices. Since VR, with its closed HMD structure, can maximize the sense of immersion, unlike augmented reality (AR), it was judged suitable for a study centered on mental interactions within the human inner self. To implement shamanistic expression methods involving phantoms of the body and soul, a motion-capture technology linked with VR display devices and real-time cameras was realized on the stage. In this process we could observe the importance of the body ownership experienced by the performers (participants), their reactions upon losing it, and the mental phenomenon of desiring to possess the objects of their gaze. This technology also shows high potential for further development, because its technical openness allows members of the audience to step onto the stage themselves and become performers.

Design and Implementation of a Virtual Robot Education System

  • 웅홍우;소원호
    • Journal of the Institute of Electronics Engineers of Korea CI / Vol. 48, No. 1 / pp.108-115 / 2011
  • This paper designs and implements a Virtual Robot Education System (VRES) for programming education using the LEGO Mindstorms NXT robot. Through the proposed system, a learner edits and compiles source code, downloads it to the robot, and runs the resulting executable. To let learners observe the robot, the system includes a web camera that provides a monitoring service, so students can verify in detail the behavior of the robot running their program and debug it as needed. In addition, a simple, user-friendly programming language and its compiler are designed; with these tools, learners can create and test NXT robot programs more easily than with Java. Using the direct-control mode provided by the system, an instructor can control and manage the robot for a class topic. The proposed system thus supports students learning robot programming over the Internet with a web browser, during regular classes or after school.

Approximating 3D General Sweep Boundary Using Graphics Hardware

  • 안재우;김명수;홍성제
    • Journal of the Korea Computer Graphics Society / Vol. 8, No. 3 / pp.1-7 / 2002
  • This paper presents a practical technique for approximating the boundary surface of the volume swept out by three-dimensional objects using the depth buffer. Objects may change their geometry and orientation while sweeping. The sweep volume is approximated as a union of volume elements, each rendered inside an appropriate viewing frustum of a virtual camera and mapped into a screen viewport with the depth buffer. From the depth of each pixel in the screen space of each rendering, the corresponding point in the original world space can be computed. Appropriately connecting these points yields polygonal faces that form surface patches approximately covering some portion of the sweep volume. Each view frustum contributes one or more surface patches in this way, and these possibly overlapping patches approximately enclose the whole sweep volume. The patches may be further processed to yield non-overlapping polygonal surfaces approximating the boundary of the original 3D sweep volume.

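The per-pixel back-projection step — recovering a world-space point from each depth-buffer value — can be sketched for a pinhole camera model. This is a minimal illustration, assuming known intrinsics (`fx`, `fy`, `cx`, `cy`) and depths already converted to camera-space z; each of the paper's virtual cameras would additionally apply its own inverse view transform to reach world space.

```python
import numpy as np

def unproject_depth(depth, fx, fy, cx, cy):
    """Back-project an H x W depth map (camera-space z per pixel)
    to an H x W x 3 array of 3D points in camera coordinates."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    x = (u - cx) * depth / fx   # pinhole model: x = (u - cx) * z / fx
    y = (v - cy) * depth / fy
    return np.stack([x, y, depth], axis=-1)
```

Connecting the points of adjacent pixels (u, v), (u+1, v), (u, v+1), (u+1, v+1) into quads then yields the polygonal surface patches the abstract describes.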

Analysis of Stability on Single-leg Standing by Wearing a Head Mounted Display

  • Woo, Byung Hoon
    • Korean Journal of Sport Biomechanics / Vol. 27, No. 2 / pp.149-155 / 2017
  • Objective: The purpose of this study was to investigate the effects of three visual conditions (eyes open, eyes closed, and wearing a head-mounted display [HMD]) on single-leg standing through kinematic and kinetic analysis. Method: Twelve college students (age: 24.5 ± 2.6 years, height: 175.0 ± 6.4 cm, weight: 69.2 ± 5.1 kg) participated in this study. The study adopted three-dimensional analysis with six cameras and ground reaction force measurement with one force plate. The analysis variables were the coefficient of variation (CV) of the center of body mass, head movement, ground reaction force, and center of pressure, which were analyzed using one-way analysis of variance with repeated measures according to visual condition. Results: In most cases, the CV was significantly higher in the order of the HMD, eyes-closed, and eyes-open conditions. Conclusion: Our results indicated that body sway was largest in the HMD condition, where the risk of falling was high owing to low stability.
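The coefficient of variation used above as the sway measure is simply the standard deviation normalized by the mean; a minimal sketch (expressing it as a percentage and using the population standard deviation are assumptions, not details from the paper):

```python
import statistics

def coefficient_of_variation(samples):
    """CV (%) = 100 * std / mean; a larger CV indicates larger sway."""
    mean = statistics.mean(samples)
    return 100.0 * statistics.pstdev(samples) / mean
```

Applied to, say, a time series of center-of-pressure positions, a higher CV under the HMD condition would reflect the larger body sway the study reports.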

Novel View Generation Using Affine Coordinates

  • Sengupta, Kuntal;Ohya, Jun
    • Korean Institute of Broadcast and Media Engineers: Conference Proceedings / 1997 Proceedings of the International Workshop on New Video Media Technology / pp.125-130 / 1997
  • In this paper we present an algorithm to generate new views of a scene, starting with images from weakly calibrated cameras. Errors in 3D scene reconstruction usually get reflected in the quality of the newly generated scene, so we seek a direct method for reprojection. In this paper, we use the knowledge of dense point matches and their affine coordinate values to estimate the corresponding affine coordinate values in the new scene. We borrow ideas from the object recognition literature and extend them significantly to solve the problem of reprojection. Unlike the epipolar line intersection algorithms for reprojection, which require at least eight matched points across three images, we need only five matched points. The theory of reprojection is used with hardware-based rendering to achieve fast rendering. We demonstrate our results of novel view generation from stereo pairs for arbitrary locations of the virtual camera.

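The core idea — computing a point's affine coordinates with respect to a set of basis points and reusing those coordinates in the new view — can be sketched in 2D. This is a simplified illustration under an affine-camera assumption with a three-point basis; the paper's actual method uses five matched points across weakly calibrated images, which this sketch does not reproduce.

```python
import numpy as np

def affine_coords(p, basis):
    """Coefficients a with p = sum a_i * basis_i and sum a_i = 1."""
    # Stack the basis points as columns and add the affine constraint row.
    A = np.vstack([np.asarray(basis, float).T, np.ones(len(basis))])
    return np.linalg.solve(A, np.append(np.asarray(p, float), 1.0))

def reproject(coeffs, new_basis):
    """Apply the same affine coordinates to the basis seen in the new view."""
    return np.asarray(new_basis, float).T @ coeffs
```

Under an affine camera the coefficients are view-invariant, so once the basis points are located in the virtual view, every matched point can be transferred directly without reconstructing 3D structure.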

Multimodal Interaction on Automultiscopic Content with Mobile Surface Haptics

  • Kim, Jin Ryong;Shin, Seunghyup;Choi, Seungho;Yoo, Yeonwoo
    • ETRI Journal / Vol. 38, No. 6 / pp.1085-1094 / 2016
  • In this work, we present interactive automultiscopic content with mobile surface haptics for multimodal interaction. Our system consists of a 40-view automultiscopic display and a tablet supporting surface haptics in an immersive room. Animated graphics are projected onto the walls of the room. The 40-view automultiscopic display is placed at the center of the front wall. The haptic tablet is installed at the mobile station to enable the user to interact with the tablet. The 40-view real-time rendering and multiplexing technology is applied by establishing virtual cameras in the convergence layout. Surface haptics rendering is synchronized with three-dimensional (3D) objects on the display for real-time haptic interaction. We conduct an experiment to evaluate user experiences of the proposed system. The results demonstrate that the system's multimodal interaction provides positive user experiences of immersion, control, user interface intuitiveness, and 3D effects.
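The convergence layout of the virtual cameras mentioned above can be sketched as positions spaced along a baseline, each aimed at a common convergence point (a toe-in arrangement). The parameter names and the symmetric spacing here are illustrative assumptions, not the system's actual configuration.

```python
import math

def converged_cameras(n, baseline, conv_dist):
    """Return (position, unit view direction) for n >= 2 virtual cameras
    spread over `baseline`, all aimed at the point (0, 0, conv_dist)."""
    cams = []
    for i in range(n):
        x = (i / (n - 1) - 0.5) * baseline   # evenly spaced, centered on 0
        dx, dz = -x, conv_dist               # vector toward convergence point
        norm = math.hypot(dx, dz)
        cams.append(((x, 0.0, 0.0), (dx / norm, 0.0, dz / norm)))
    return cams
```

Rendering the scene once per camera and interleaving the n images per the display's view map is the multiplexing step the abstract refers to.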

An Image Transform Method for Real-Time Streaming of 360VR Video Captured with Six Cameras

  • 서봉석;정은영;김남태;장정엽;유동호;김동호
    • Korean Institute of Broadcast and Media Engineers: Conference Proceedings / 2016 Fall Conference / pp.51-52 / 2016
  • Recently, for composing and transmitting 360 and VR (virtual reality) video, Facebook announced a "Transform" method that transmits images in cube and pyramid layouts instead of the conventional equirectangular (Mercator-style) projection. Based on this transform technique, for 360VR video captured with six cameras, this paper proposes a 360VR image transform method that exploits the cube layout of the "Transform" and is more efficient and lighter than before, making it suitable for real-time streaming.

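The cube-layout transform rests on mapping each cube-face pixel to a viewing direction and sampling the panoramic source there. A minimal sketch of that direction mapping follows; the face naming and axis conventions are assumptions for illustration, and Facebook's actual layout may differ.

```python
import math

def face_pixel_to_latlon(face, u, v):
    """Map (u, v) in [-1, 1] on a unit-cube face to (lat, lon) in radians,
    i.e. the equirectangular coordinates to sample for that pixel."""
    if face == 'front':
        x, y, z = u, v, 1.0
    elif face == 'back':
        x, y, z = -u, v, -1.0
    elif face == 'right':
        x, y, z = 1.0, v, -u
    elif face == 'left':
        x, y, z = -1.0, v, u
    elif face == 'up':
        x, y, z = u, 1.0, -v
    else:  # 'down'
        x, y, z = u, -1.0, v
    r = math.sqrt(x * x + y * y + z * z)
    return math.asin(y / r), math.atan2(x, z)
```

Because the cube faces sample directions far more evenly than an equirectangular image (which oversamples the poles), the resulting stream carries fewer redundant pixels, which is the efficiency gain the paper builds on.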

Object Surveillance and Unusual-Behavior Judgment Using Network Cameras

  • 김진규;주영훈
    • The Transactions of the Korean Institute of Electrical Engineers / Vol. 61, No. 1 / pp.125-129 / 2012
  • In this paper, we propose an intelligent method to surveil moving objects and to judge unusual behavior by using network cameras. To surveil moving objects, the Scale Invariant Feature Transform (SIFT) algorithm is used to characterize the feature information of objects. To judge unusual behaviors, a virtual human skeleton is used to extract the feature points of a human in input images. In this procedure, Principal Component Analysis (PCA) improves the accuracy of the feature vector, and a fuzzy classifier provides the decision rule for judging unusual behaviors. Finally, the experimental results show the effectiveness and feasibility of the proposed method.
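The PCA step that compacts the skeleton feature vectors before fuzzy classification can be sketched with an SVD. This is a minimal illustration, not the paper's implementation; the feature extraction and the fuzzy classifier itself are omitted.

```python
import numpy as np

def pca_reduce(X, k):
    """Project the rows of X (n_samples x n_features) onto the
    top-k principal components of the centered data."""
    Xc = X - X.mean(axis=0)                       # center each feature
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:k].T                          # (n_samples, k)
```

The reduced vectors would then feed the fuzzy classifier, with the lower dimensionality making its rule base easier to design and evaluate.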