• Title/Summary/Keyword: Virtual camera

Search Results: 478

Real-Time Virtual-View Image Synthesis Algorithm Using Kinect Camera (키넥트 카메라를 이용한 실시간 가상 시점 영상 생성 기법)

  • Lee, Gyu-Cheol;Yoo, Jisang
    • The Journal of Korean Institute of Communications and Information Sciences / v.38C no.5 / pp.409-419 / 2013
  • Kinect, released by Microsoft in November 2010, is a motion-sensing camera for the Xbox 360 that provides depth and color images. However, because it uses an infrared pattern, the Kinect also generates holes and noise around object boundaries in the captured images, and a boundary-flickering phenomenon occurs. We therefore propose a real-time virtual-view synthesis algorithm that produces a high-quality virtual view by solving these problems. In the proposed algorithm, holes around boundaries are filled using a joint bilateral filter. The color image is converted into an intensity image, and flickering pixels are found by analyzing the variation of the intensity and depth images. Boundary flickering is then reduced by replacing the values of flickering pixels with the maximum pixel value of the previous depth image, and virtual views are generated by 3D warping. Holes outside the occlusion region are filled with the center pixel value of the most reliable block, after the final block reliability is computed with a block-based gradient search. Experimental results show that the proposed algorithm generates virtual-view images in real time.
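The joint-bilateral hole filling described above can be sketched roughly as follows — an illustrative reconstruction, not the authors' code; `sigma_s` and `sigma_r` are assumed filter parameters. Each hole (zero-depth) pixel is replaced by a weighted average of valid neighbors, with weights from spatial distance and from similarity in the guiding intensity image:

```python
import numpy as np

def joint_bilateral_fill(depth, intensity, radius=2, sigma_s=2.0, sigma_r=10.0):
    """Fill zero-valued (hole) depth pixels with a weighted average of valid
    neighbors; weights combine spatial proximity and intensity similarity."""
    h, w = depth.shape
    out = depth.astype(float).copy()
    for y in range(h):
        for x in range(w):
            if depth[y, x] != 0:
                continue  # only holes are filled
            num = den = 0.0
            for dy in range(-radius, radius + 1):
                for dx in range(-radius, radius + 1):
                    ny, nx = y + dy, x + dx
                    if 0 <= ny < h and 0 <= nx < w and depth[ny, nx] != 0:
                        ws = np.exp(-(dy * dy + dx * dx) / (2 * sigma_s ** 2))
                        diff = float(intensity[ny, nx]) - float(intensity[y, x])
                        wr = np.exp(-(diff * diff) / (2 * sigma_r ** 2))
                        num += ws * wr * depth[ny, nx]
                        den += ws * wr
            if den > 0:
                out[y, x] = num / den
    return out
```

In a uniform region this reduces to a plain average of the valid neighbors; near an intensity edge, the range weight keeps the fill from bleeding depth across the boundary.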

Design and Development of Virtual Reality Exergame using Smart mat and Camera Sensor (스마트매트와 카메라 센서를 이용한 가상현실 체험형 운동게임 시스템 설계 및 구현)

  • Seo, Duck Hee;Park, Kyung Shin;Kim, Dong Keun
    • Journal of the Korea Institute of Information and Communication Engineering / v.20 no.12 / pp.2297-2304 / 2016
  • In this study, we designed and developed a virtual-reality exergame that uses a smart mat and a camera sensor for exercise in indoor environments. To detect the gestures of the user's upper body, a Kinect-based gesture recognition algorithm that uses the angles between the user's joints was adopted, and a smart-mat system with LED equipment and a Bluetooth communication module was developed to capture the user's stepping data during exercises that require both gestures and steps. Finally, the integrated virtual-reality exergame system was implemented with the Unity 3D engine and various virtual avatar characters, together with entertainment game content such as gesture guidelines and a scoring function. The designed system should therefore be useful for elderly people who need to improve cognitive ability and balance, and for general users who want to exercise in indoor settings such as homes or wellness centers.
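The joint-angle feature underlying such gesture recognition can be sketched as follows (a minimal illustration of the idea, not the paper's implementation): the angle at a joint is computed from the vectors to its two neighboring joints.

```python
import math

def joint_angle(a, b, c):
    """Angle at joint b (in degrees) formed by segments b->a and b->c,
    from 2D or 3D joint coordinates such as Kinect skeleton positions."""
    v1 = [ai - bi for ai, bi in zip(a, b)]
    v2 = [ci - bi for ci, bi in zip(c, b)]
    dot = sum(x * y for x, y in zip(v1, v2))
    n1 = math.sqrt(sum(x * x for x in v1))
    n2 = math.sqrt(sum(x * x for x in v2))
    return math.degrees(math.acos(dot / (n1 * n2)))
```

For example, the elbow angle would be computed from the shoulder, elbow, and wrist joints, then matched against the angles of a target gesture.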

Tangible Interaction : Application for A New Interface Method for Mobile Device -Focused on development of virtual keyboard using camera input - (체감형 인터랙션 : 모바일 기기의 새로운 인터페이스 방법으로서의 활용 -카메라 인식에 의한 가상 키보드입력 방식의 개발을 중심으로 -)

  • 변재형;김명석
    • Archives of design research / v.17 no.3 / pp.441-448 / 2004
  • Mobile devices such as mobile phones and PDAs are considered main interface tools in a ubiquitous computing environment. To search for information on a mobile device, the user should be able to input text as well as control a cursor for navigation, so an efficient text-input interface within the limited dimensions of mobile devices is needed. This study suggests a new approach to mobile interaction: a camera-based virtual keyboard for text input on mobile devices. We developed a prototype using a PC camera and a small LCD display. The user moves the prototype in the air to control the cursor over a keyboard layout on the screen and inputs text by pressing a button. In evaluation, the new interaction method was competitive with the mobile phone keypad in text-input efficiency. It can also be operated with one hand and makes smaller devices possible by eliminating the keyboard. The method can be applied to text input for mobile devices that require especially small dimensions, and can be adapted into a selection and navigation method for wireless Internet content on small-screen devices.
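The motion-to-cursor mapping at the core of such an interface can be sketched as follows. This is a hypothetical illustration: the `motion` vector stands in for a camera-derived motion estimate (e.g., from optical flow), and `gain` is an assumed tuning parameter.

```python
def update_cursor(cursor, motion, width, height, gain=1.0):
    """Map an estimated device motion (dx, dy) to a new cursor position,
    clamped to the bounds of the on-screen keyboard layout."""
    x = min(max(cursor[0] + gain * motion[0], 0), width - 1)
    y = min(max(cursor[1] + gain * motion[1], 0), height - 1)
    return (x, y)
```

A key press while the cursor rests on a key cell would then commit that character, completing the text-input loop.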


A Study on the 3D Video Generation Technique using Multi-view and Depth Camera (다시점 카메라 및 depth 카메라를 이용한 3 차원 비디오 생성 기술 연구)

  • Um, Gi-Mun;Chang, Eun-Young;Hur, Nam-Ho;Lee, Soo-In
    • Proceedings of the IEEK Conference / 2005.11a / pp.549-552 / 2005
  • This paper presents a 3D video content generation technique and system that uses multi-view images and a depth map. The proposed system takes three-view video and depth inputs from a three-view video camera and a depth camera. Each camera is calibrated using Tsai's calibration method, and its parameters are used to rectify the multi-view images for multi-view stereo matching. Depth and disparity maps for the center view are obtained from both the depth camera and the multi-view stereo matching technique, and the two maps are fused into a more reliable depth map. The fused depth map is used not only to insert a virtual object into the scene by depth keying, but also to synthesize virtual-viewpoint images. Preliminary test results are given to show the functionality of the proposed technique.
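One simple way to fuse a depth-camera map with a stereo-matching map is a confidence-weighted average — a plausible fusion rule for illustration only, since the abstract does not give the exact formula the authors use:

```python
import numpy as np

def fuse_depth(depth_cam, depth_stereo, conf_stereo):
    """Blend a depth-camera map with a stereo-matching depth map.
    conf_stereo in [0, 1] expresses per-pixel trust in the stereo estimate;
    0 keeps the depth-camera value, 1 keeps the stereo value."""
    return conf_stereo * depth_stereo + (1.0 - conf_stereo) * depth_cam
```

In practice `conf_stereo` could be derived from the stereo matching cost, so textureless regions fall back to the depth camera while well-textured regions favor the higher-resolution stereo result.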


Test of Vision Stabilizer for Unmanned Vehicle Using Virtual Environment and 6 Axis Motion Simulator (가상 환경 및 6축 모션 시뮬레이터를 이용한 무인차량 영상 안정화 장치 시험)

  • Kim, Sunwoo;Ki, Sun-Ock;Kim, Sung-Soo
    • Transactions of the Korean Society of Mechanical Engineers A / v.39 no.2 / pp.227-233 / 2015
  • In this study, an indoor test environment was developed for studying the vision stabilizer of an unmanned vehicle, using a virtual environment and a 6-axis motion simulator. The real driving environment was replaced by a virtual environment based on the Aberdeen Proving Ground bump test course for military tanks, and the vehicle motion was reproduced by the 6-axis motion simulator. The virtual driving course was displayed in front of the vision stabilizer, which was mounted on top of the motion simulator. The performance of the stabilizer was evaluated by checking the camera image and the pitch and roll angles of the stabilizer captured by the camera's IMU sensor.

3D Stereoscopic Navigation of Buildings Considering Visual Perception (시각적 인지를 고려한 건축물의 3D 스테레오 내비게이션)

  • Shin, Il-Kyu;Yoon, Yeo-Jin;Choi, Jin-Won;Choi, Soo-Mi
    • Journal of the Korea Computer Graphics Society / v.18 no.2 / pp.63-72 / 2012
  • As BIM (Building Information Modeling) becomes widely used in the construction process, the need to explore building models realistically is also growing. In this paper, we present a 3D stereoscopic navigation method for virtual buildings that considers visual perception. We first identify factors that may cause visual discomfort while navigating stereoscopic building models, and then develop a method that automatically adjusts the range of virtual camera separation. In addition, we measure each user's JND (Just Noticeable Difference) in depth to adjust the virtual camera separation and movement. The presented method can be used in various architectural applications by creating user-customized 3D stereoscopic content.
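The camera-separation adjustment can be sketched with the standard relation between separation, depth, and screen disparity — a simplified model for illustration, not the paper's actual control law; `screen_disparity_limit` stands in for a comfort threshold that could be derived from a user's measured JND:

```python
def clamp_separation(separation, nearest_depth, focal, screen_disparity_limit):
    """Clamp the virtual camera separation so the screen disparity of the
    nearest scene point stays below a viewer-comfort limit, using the
    simple pinhole relation disparity = separation * focal / depth."""
    max_sep = screen_disparity_limit * nearest_depth / focal
    return min(separation, max_sep)
```

As the user navigates closer to geometry, `nearest_depth` shrinks and the allowed separation drops, which is the qualitative behavior a discomfort-avoiding stereo camera rig needs.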

Tele-operated Control of an Autonomous Mobile Robot Using a Virtual Force-reflection

  • Tack, Han-Ho;Kim, Chang-Geun;Kang, Shin-Chul
    • International Journal of Fuzzy Logic and Intelligent Systems / v.3 no.2 / pp.244-250 / 2003
  • In this paper, the relationship between a slave robot and an uncertain remote environment is modeled as an impedance that generates a virtual force fed back to the operator. For the control of a tele-operated mobile robot equipped with a camera, the robot takes pictures of the remote environment and sends the visual information back to the operator over the Internet. Because of limited communication bandwidth and the camera's narrow view angle, the operator cannot see the environment clearly, especially shadowed and curved areas. To overcome this problem, a virtual force is generated from both the distance between an obstacle and the robot and the obstacle's approach velocity. This virtual force is transferred back over the Internet to the master, a force-generating two-degree-of-freedom joystick, which enables the human operator to estimate the position of obstacles in the remote environment. By holding this master, the operator can feel a spatial sense of the remote environment despite the limited visual information. This force reflection significantly improves the performance of the tele-operated mobile robot.
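A distance-and-velocity virtual force of the kind described can be sketched as follows. This is an illustrative impedance-style rule, not the paper's exact model; the gains `k_dist` and `k_vel` and the activation range `d_max` are assumed tuning parameters.

```python
def virtual_force(distance, approach_velocity, k_dist=1.0, k_vel=0.5, d_max=5.0):
    """Virtual repulsive force felt at the master joystick: grows as the
    obstacle gets closer and as the approach velocity increases; zero
    beyond the activation range d_max. Receding motion adds no force."""
    if distance >= d_max:
        return 0.0
    return k_dist * (d_max - distance) / d_max + k_vel * max(approach_velocity, 0.0)
```

The force rises smoothly from zero at the range boundary to `k_dist` at contact, so the operator feels obstacles "push back" harder the closer and faster the robot approaches them.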

Dynamic Manipulation of a Virtual Object in Marker-less AR system Based on Both Human Hands

  • Chun, Jun-Chul;Lee, Byung-Sung
    • KSII Transactions on Internet and Information Systems (TIIS) / v.4 no.4 / pp.618-632 / 2010
  • This paper presents a novel approach to controlling augmented reality (AR) objects robustly in a marker-less AR system through fingertip tracking and hand-pattern recognition. One promising way to build a marker-less AR system is to use parts of the human body, such as the hand or face, in place of traditional fiducial markers. This paper introduces a real-time method to manipulate overlaid virtual objects dynamically in a marker-less AR system using both hands and a single camera. The left bare hand serves as the virtual marker, while the right hand is used as a hand mouse. To build the marker-less system, we utilize a skin-color model for hand-shape detection and curvature-based fingertip detection on the input video image. From the detected fingertips, the camera pose is estimated so that virtual objects can be overlaid on the hand coordinate system. To manipulate the rendered virtual objects dynamically, a vision-based hand-control interface is developed that exploits fingertip tracking for object movement and pattern matching for initiating hand commands. The experiments show that the proposed system can control the objects dynamically and conveniently.
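The skin-color detection step can be sketched with a simple rule-based classifier on RGB values — a rough stand-in for illustration; real systems, likely including this one, would use a trained color model in a chrominance space such as YCbCr or HSV:

```python
import numpy as np

def skin_mask(rgb):
    """Very rough skin-color classifier on an RGB image (H x W x 3, uint8).
    Returns a boolean mask; the thresholds are a common heuristic, not
    the authors' model."""
    r = rgb[..., 0].astype(int)
    g = rgb[..., 1].astype(int)
    b = rgb[..., 2].astype(int)
    return (r > 95) & (g > 40) & (b > 20) & (r > g) & (r > b) \
        & ((r - np.minimum(g, b)) > 15)
```

The resulting mask would then be cleaned up (e.g., by morphological filtering) before extracting the hand contour and its curvature extrema as fingertip candidates.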

Virtual Control of Optical Axis of the 3DTV Camera for Reducing Visual Fatigue in Stereoscopic 3DTV

  • Park, Jong-Il;Um, Gi-Mun;Ahn, Chung-Hyun;Ahn, Chie-Teuk
    • ETRI Journal / v.26 no.6 / pp.597-604 / 2004
  • In stereoscopic television, there is a trade-off between visual comfort and 3-dimensional (3D) impact with respect to the baseline-stretch of a 3DTV camera. The baseline-stretch must be adjusted to an appropriate distance depending on the contents of a scene to obtain subjectively optimal image quality. However, a small baseline-stretch is very hard to achieve with commercially available broadcast-quality cameras, whose lens and CCD modules are large. To overcome this limitation, we freely control the baseline-stretch of a stereoscopic camera by synthesizing virtual views at the desired interval between the two cameras. The proposed technique is based on stereo matching and view synthesis. We first obtain a dense disparity map using hierarchical stereo matching with edge-adaptive multiple shifted windows, and then synthesize the virtual views from the disparity map. Simulation results with various stereoscopic images demonstrate the effectiveness of the proposed technique.
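The view-synthesis step can be sketched with disparity-scaled forward warping — a minimal sketch of the idea, not the paper's implementation, which would also handle occlusions and hole filling:

```python
import numpy as np

def synthesize_view(left, disparity, alpha):
    """Forward-warp the left image to an intermediate viewpoint: a pixel
    with disparity d shifts by alpha * d (alpha = 0 gives the left view,
    alpha = 1 the right view). Unfilled target pixels remain zero (holes)."""
    h, w = left.shape[:2]
    out = np.zeros_like(left)
    for y in range(h):
        for x in range(w):
            xv = int(round(x - alpha * disparity[y, x]))
            if 0 <= xv < w:
                out[y, xv] = left[y, x]
    return out
```

Choosing `alpha` is exactly the "virtual baseline" control the abstract describes: any effective camera interval between the two physical cameras can be rendered without moving them.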


Introducing Depth Camera for Spatial Interaction in Augmented Reality (증강현실 기반의 공간 상호작용을 위한 깊이 카메라 적용)

  • Yun, Kyung-Dahm;Woo, Woon-Tack
    • Proceedings of the HCI Society of Korea Conference / 2009.02a / pp.62-67 / 2009
  • Many interaction methods for augmented reality have attempted to reduce the difficulty of tracking interaction subjects, either by allowing only a limited set of three-dimensional inputs or by relying on auxiliary devices such as data gloves and paddles with fiducial markers. We propose Spatial Interaction (SPINT), a non-contact passive method that observes the occupancy state of the spaces around target virtual objects to interpret user input. A depth-sensing camera is introduced to construct virtual space sensors, which are then used to manipulate the augmented space for interaction. The proposed method requires no wearable tracking device and allows versatile interaction types. The depth-perception anomaly caused by incorrect occlusion between real and virtual objects is also minimized for more precise interaction. Exhibits of dynamic content such as the Miniature AR System (MINARS) could benefit from this fluid 3D user interface.
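A virtual space sensor of the kind described can be sketched as an occupancy test over depth-camera points — an illustrative reduction of the idea, with the axis-aligned box and `min_count` threshold as assumed simplifications:

```python
def sensor_occupied(points, box_min, box_max, min_count=1):
    """Report whether at least min_count 3D points (e.g., back-projected
    from a depth image) fall inside the axis-aligned box placed around
    a virtual object, i.e., whether this virtual space sensor fires."""
    inside = sum(
        1 for p in points
        if all(lo <= c <= hi for c, lo, hi in zip(p, box_min, box_max))
    )
    return inside >= min_count
```

Placing several such sensors around a virtual object turns the surrounding empty space into a touch-free input surface: a hand entering a sensor region triggers the associated manipulation.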
