• Title/Summary/Keyword: 3D Depth


Depth-Conversion in Integral Imaging Three-Dimensional Display by Means of Elemental Image Recombination (3차원 영상 재생을 위한 집적결상법에서 기본영상 재조합을 통한 재생영상의 깊이 변환)

  • Ser, Jang-Il;Shin, Seung-Ho
    • Korean Journal of Optics and Photonics, v.18 no.1, pp.24-30, 2007
  • We have studied depth conversion of a reconstructed image by recombining the elemental images in an integral imaging system for 3D display. With this recombination, depth conversion to a pseudoscopic, orthoscopic, real, or virtual image, as well as to an arbitrary depth, is possible without distortion under proper conditions. The conditions on the recombinations required for depth conversion are derived theoretically, and reconstructed images generated from the converted elemental images are presented.

Object Detection with LiDAR Point Cloud and RGBD Synthesis Using GNN

  • Jung, Tae-Won;Jeong, Chi-Seo;Lee, Jong-Yong;Jung, Kye-Dong
    • International journal of advanced smart convergence, v.9 no.3, pp.192-198, 2020
  • The 3D point cloud is a key technology for object detection in virtual and augmented reality. To apply object detection in various areas, 3D information and even color information must be obtainable more easily. In general, a 3D point cloud is acquired with an expensive scanner device; however, 3D and characteristic information such as RGB and depth can be obtained easily on a mobile device. A GNN (Graph Neural Network) can be used for object detection based on these characteristics. In this paper, we generate RGB and RGBD inputs by extracting basic and characteristic information from the KITTI dataset, which is widely used for 3D point-cloud object detection. We build an RGB-GNN from intensity (i), the most widely used LiDAR characteristic, together with the color information obtainable from mobile devices, and we compare and analyze its object-detection accuracy against an RGBD-GNN, which additionally characterizes depth information.

A Novel Method for Hand Posture Recognition Based on Depth Information Descriptor

  • Xu, Wenkai;Lee, Eung-Joo
    • KSII Transactions on Internet and Information Systems (TIIS), v.9 no.2, pp.763-774, 2015
  • Hand posture recognition has had a wide range of applications in human-computer interaction and computer vision for many years. The problem is difficult mainly because of the high dexterity of the hand and the self-occlusions created by the camera's limited view or by illumination variations. To remedy these problems, this paper proposes a hand posture recognition method that uses a 3-D point cloud to explicitly exploit the 3-D information in depth maps. First, the hand region is segmented by a set of depth thresholds. Next, the hand image is normalized so that the extracted feature descriptors are scale- and rotation-invariant. By robustly coding and pooling 3-D facets, the proposed descriptor can effectively represent various hand postures. An SVM with a Gaussian kernel function is then used for posture recognition. Experimental results on a posture dataset captured by a Kinect sensor (postures 1 to 10) demonstrate the effectiveness of the proposed approach; the average recognition rate of our method is over 96%.
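The first step the abstract describes, segmenting the hand region with depth thresholds, can be sketched as follows. This is a minimal illustration, not the paper's implementation; the threshold values and the toy depth map are invented.

```python
# Hedged sketch: segment a hand region from a depth map using a pair of
# depth thresholds, as the abstract describes. All values are illustrative.

def segment_hand(depth_map, near=400, far=700):
    """Return a binary mask: 1 where depth (mm) falls inside [near, far]."""
    return [[1 if near <= d <= far else 0 for d in row] for row in depth_map]

# Toy 3x3 depth map (mm): only the centre pixel lies in the hand's depth band.
depth = [[900, 900, 900],
         [900, 550, 900],
         [900, 900, 900]]
mask = segment_hand(depth)
```

In practice the thresholds would be chosen adaptively (for example, around the nearest connected blob), since a fixed band fails when the hand moves in depth.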

Super-multiview windshield display for driving assistance

  • Urano, Yohei;Kashiwada, Shinji;Ando, Hiroshi;Nakamura, Koji;Takaki, Yasuhiro
    • Journal of Information Display, v.12 no.1, pp.43-46, 2011
  • A three-dimensional windshield display (3D-WSD) can present driving information at the same depth as objects in the outside scene. Here, a super-multiview 3D-WSD is proposed, because the super-multiview display technique provides smooth motion parallax. Motion parallax is the only physiological cue for perceiving the depth of a 3D image displayed at a far distance, where depth cannot be perceived through vergence and binocular parallax. A prototype system with 36 views was constructed, and the discontinuity of its motion parallax and the accuracy of depth perception were evaluated.

Depth-map coding using the block-based decision of the bitplane to be encoded (블록기반 부호화할 비트평면 결정을 이용한 깊이정보 맵 부호화)

  • Kim, Kyung-Yong;Park, Gwang-Hoon
    • Journal of Broadcast Engineering, v.15 no.2, pp.232-235, 2010
  • This paper proposes an efficient depth-map coding method. The existing adaptive block-based depth-map coding method decides the number of bit planes to be encoded according to the quantization parameters in order to obtain the desired bit rates. The proposed method instead makes a block-based decision about which bit planes to encode, freeing the coder from this constraint of the quantization parameters. Simulation results show that, compared with the adaptive block-based depth-map coding method, the proposed method achieves average BD-rate savings of 3.5% and average BD-PSNR gains of 0.25 dB.
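The bit-plane idea underlying this abstract can be sketched briefly: keeping only the k most significant bit planes of an 8-bit depth block yields a coarser block that needs fewer bits to encode. How k is chosen per block is the paper's contribution and is not modelled here; the block values are invented.

```python
# Illustrative sketch of per-block bit-plane truncation for 8-bit depth values.

def keep_bitplanes(block, k):
    """Zero out the (8 - k) least-significant bit planes of each depth value."""
    mask = (0xFF << (8 - k)) & 0xFF
    return [[v & mask for v in row] for row in block]

block = [[200, 130], [64, 7]]
coarse = keep_bitplanes(block, k=3)   # keep only the 3 MSB planes
```

Dropping low-order planes is tolerable for depth maps because small depth errors typically cause only small rendering artifacts, whereas the high-order planes carry the object boundaries.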

3D Map Generation System for Indoor Autonomous Navigation (실내 자율 주행을 위한 3D Map 생성 시스템)

  • Moon, SungTae;Han, Sang-Hyuck;Eom, Wesub;Kim, Youn-Kyu
    • Aerospace Engineering and Technology, v.11 no.2, pp.140-148, 2012
  • Autonomous navigation requires a map, pose tracking, and shortest-path finding. Because there is no GPS signal in indoor environments, the current position must be recognized within the 3D map, for example by image processing. In this paper, we explain 3D map creation using a depth camera such as Kinect, and pose tracking in the 3D map using 2D images taken from a camera. In addition, a mechanism for avoiding obstacles is discussed.

Virtual Viewpoint Image Synthesis Algorithm using Multi-view Geometry (다시점 카메라 모델의 기하학적 특성을 이용한 가상시점 영상 생성 기법)

  • Kim, Tae-June;Chang, Eun-Young;Hur, Nam-Ho;Kim, Jin-Woong;Yoo, Ji-Sang
    • The Journal of Korean Institute of Communications and Information Sciences, v.34 no.12C, pp.1154-1166, 2009
  • In this paper, we propose algorithms for generating high-quality virtual intermediate views on or off the baseline. The proposed algorithm uses depth information together with a 3-D warping technique to generate the virtual views: the real 3-D coordinates of the scene are calculated from the depth information and the geometric characteristics of the cameras, and the calculated 3-D points are projected onto the 2-D image plane of an arbitrary camera position, yielding the 2-D virtual-view image. Experiments show that virtual views generated on the baseline by the proposed algorithm improve PSNR by at least 0.5 dB, and that occluded regions are covered more efficiently in the virtual views generated off the baseline.
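The 3-D warping step the abstract outlines, back-projecting a pixel with its depth, moving the camera centre, and re-projecting, can be sketched for a single pixel under a pinhole model. The intrinsics and baseline below are illustrative, not the paper's calibration, and real multi-view warping also handles rotation, occlusion, and hole filling.

```python
# Minimal pinhole-model sketch of depth-based 3-D warping to a virtual view
# translated along the x baseline. All camera parameters are invented.

def warp_pixel(u, v, z, fx, fy, cx, cy, baseline_x):
    # Back-project the pixel to a 3-D point in the reference camera frame.
    X = (u - cx) * z / fx
    Y = (v - cy) * z / fy
    # Express the point in the virtual camera frame (pure x translation).
    Xv = X - baseline_x
    # Re-project onto the virtual image plane.
    return (fx * Xv / z + cx, fy * Y / z + cy)

# A pixel at depth 1000 shifts left by fx * baseline / z = 525*100/1000 = 52.5 px.
u2, v2 = warp_pixel(u=400, v=240, z=1000.0, fx=525.0, fy=525.0,
                    cx=320.0, cy=240.0, baseline_x=100.0)
```

The inverse-depth disparity fx·b/z is what makes near objects shift more than far ones, which is also why depth errors show up most strongly at close range.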

Performance Evaluation of Stealth Chamber as a Novel Reference Chamber for Measuring Percentage Depth Dose and Profile of VitalBeam Linear Accelerator (VitalBeam 선형가속기의 심부선량백분율과 측방선량분포 측정을 위한 새로운 기준 전리함으로서 스텔스 전리함의 성능 평가)

  • Kim, Yon-Lae;Chung, Jin-Beom;Kang, Seong-Hee;Kang, Sang-Won;Kim, Kyeong-Hyeon;Jung, Jae-Yong;Shin, Young-Joo;Suh, Tae-Suk;Lee, Jeong-Woo
    • Journal of radiological science and technology, v.41 no.3, pp.201-207, 2018
  • The purpose of this study is to evaluate the performance of a "stealth chamber" as a novel reference chamber for measuring the percentage depth dose (PDD) and profiles of 6, 8, and 10 MV photon beams. PDD curves and dose profiles for fields ranging from 3×3 to 25×25 cm² were acquired using the stealth chamber and a CC13 chamber as the reference chamber. All measurements were performed on a Varian VitalBeam linear accelerator. To assess the performance of the stealth chamber, the PDD curves and profiles measured with it were compared with data measured using the CC13 chamber. For the PDDs measured with both chambers, the dosimetric parameters d_max (depth of maximum dose), D50 (PDD at 50 mm depth), and D100 (PDD at 100 mm depth) were analyzed. Moreover, root-mean-square error (RMSE) values for the profiles at d_max and at 100 mm depth were evaluated. The PDDs and profiles measured with the stealth chamber and with the CC13 reference chamber were almost identical. For the PDDs, the evaluated dosimetric parameters showed small differences (<1%) for all energies and field sizes, with d_max differing by less than 2 mm. In addition, the RMSEs for the profiles at d_max and at 100 mm depth were similar for both chambers. This study confirms that using the stealth chamber to measure beam-commissioning data is feasible as a reference chamber for fields ranging from 3×3 to 20×20 cm². Furthermore, it has an advantage for measuring small fields (less than 3×3 cm²), although these were not examined in this study.
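The quantities compared in this study follow directly from the PDD definition, PDD(d) = 100 · D(d) / D(d_max). A minimal sketch, with invented depth-dose samples standing in for the measured ionisation readings:

```python
# Sketch of the percentage-depth-dose quantities the abstract analyzes.
# The depth-dose samples below are invented, not measured beam data.

def pdd(depths, doses):
    """Normalize a depth-dose curve to its maximum, in percent."""
    dose_at_dmax = max(doses)
    return {d: 100.0 * dose / dose_at_dmax for d, dose in zip(depths, doses)}

depths = [0, 15, 50, 100]         # depth in water, mm
doses  = [0.5, 1.0, 0.86, 0.67]   # relative ionisation readings (illustrative)
curve = pdd(depths, doses)
# D50 and D100 in the abstract are the PDD values at 50 mm and 100 mm depth.
```

For a 6 MV beam, d_max typically sits near 15 mm, which is why the build-up sample peaks there in this toy curve.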

A Novel Selective Frame Discard Method for 3D Video over IP Networks

  • Chung, Young-Uk
    • KSII Transactions on Internet and Information Systems (TIIS), v.4 no.6, pp.1209-1221, 2010
  • Three-dimensional (3D) video is expected to be an important application for broadcast and IP streaming services. One of the main limitations on transmitting 3D video over IP networks is the bandwidth mismatch caused by the large size of 3D data, which leads to fatal decoding errors and mosaic-like damage. This paper presents a novel selective frame-discard method to address this problem. The main idea of the proposed method is to discard the two-dimensional (2D) video frame and the corresponding depth-map frame symmetrically; the frames to be discarded are selected with additional consideration of the playback deadline, the network bandwidth, and the inter-frame dependency relationships within a group of pictures (GOP). This enables efficient utilization of the network bandwidth and a high-quality 3D IPTV service. Simulation results demonstrate that the proposed method enhances the media quality of 3D video streaming even under bad network conditions.
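The symmetric-discard idea can be sketched as a small scheduling routine: when a GOP exceeds the bandwidth budget, drop 2D/depth frame pairs together, preferring B-frames (which no other frame depends on) before P-frames and never dropping the I-frame. This is a simplified illustration of the dependency ordering only; the frame sizes and budget are invented, and the paper's method additionally weighs playback deadlines.

```python
# Hedged sketch of dependency-aware symmetric frame discard for 3D video.

def select_discards(gop, budget):
    """gop: list of (frame_type, size_2d, size_depth); returns indices to drop."""
    total = sum(s2 + sd for _, s2, sd in gop)
    drops = []
    # B-frames first (no dependants), then P-frames; never drop the I-frame.
    for priority in ("B", "P"):
        for i, (ftype, s2, sd) in enumerate(gop):
            if total <= budget:
                return drops
            if ftype == priority:
                drops.append(i)
                total -= s2 + sd   # 2D and depth frames are discarded together
    return drops

gop = [("I", 50, 20), ("B", 10, 4), ("B", 10, 4), ("P", 30, 12)]
drops = select_discards(gop, budget=115)
```

Dropping the pair symmetrically keeps the 2D texture and its depth map aligned at the decoder, so the renderer never has to synthesize a view from a texture whose depth frame is missing.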

Detecting Complex 3D Human Motions with Body Model Low-Rank Representation for Real-Time Smart Activity Monitoring System

  • Jalal, Ahmad;Kamal, Shaharyar;Kim, Dong-Seong
    • KSII Transactions on Internet and Information Systems (TIIS), v.12 no.3, pp.1189-1204, 2018
  • Detecting and capturing 3D human structures from intensity-based image sequences is an inherently challenging problem that has attracted the attention of several researchers, especially in real-time activity recognition (Real-AR). Such Real-AR systems have been significantly enhanced by depth sensors, which provide richer information than the RGB video sensors used in conventional systems. This study proposes a depth-based, routine-logging Real-AR system that identifies daily human activity routines and turns the surroundings into an intelligent living space. The system comprises data collection with a depth camera, feature extraction based on joint information, and training/recognition of each activity. In addition, the recognition mechanism locates and pinpoints the learned activities and induces routine logs. Evaluation on depth datasets (a self-annotated dataset and MSRAction3D) demonstrates that the proposed system achieves better recognition rates and is more robust than state-of-the-art methods. The proposed Real-AR system should be feasibly accessible and usable over long periods in behavior-monitoring applications, humanoid-robot systems, and e-medical therapy systems.