• Title/Abstract/Keywords: Depth image error


Region-Based Error Concealment of Depth Map in Multiview Video (영역 구분을 통한 다시점 영상의 깊이맵 손상 복구 기법)

  • Kim, Wooyeun; Shin, Jitae; Oh, Byung Tae
    • The Journal of Korean Institute of Communications and Information Sciences / v.40 no.12 / pp.2530-2538 / 2015
  • Each pixel of a depth image stores a depth value, so different objects located near one another have similar pixel values. At the same time, depth-image pixels can differ sharply from their neighbors at object boundaries, whereas color-image pixels tend to vary smoothly. Accordingly, a distorted depth image in multiview video plus depth (MVD) requires error concealment methods that account for these characteristics when transmission errors occur. In this paper, we propose classifying the regions of a depth image according to edge direction and then applying an adaptive error concealment method to each region. The recovered depth images are used together with the multiview video data to synthesize intermediate-viewpoint video, and the synthesized views are evaluated with objective quality metrics to demonstrate the performance of the proposed method.
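
To make the idea concrete, a toy version of edge-direction-aware concealment might classify each lost block by the dominant edge direction of its surroundings and interpolate along that direction, with smooth regions falling back to plain averaging. This is only a generic sketch of the kind of region-adaptive concealment the abstract describes, not the authors' method; the block size, thresholds, and two-direction classification are assumptions.

```python
import numpy as np

def dominant_direction(neighborhood):
    """Classify a block's surroundings as 'smooth', 'horizontal', or 'vertical'
    from gradient energy (a stand-in for finer edge-direction classes)."""
    gy, gx = np.gradient(neighborhood.astype(np.float32))
    ex, ey = np.abs(gx).mean(), np.abs(gy).mean()
    if max(ex, ey) < 1.0:
        return "smooth"
    return "vertical" if ex > ey else "horizontal"   # strong x-gradient => vertical edge

def conceal_block(depth, y, x, b):
    """Fill a lost b x b block at (y, x) from its neighbors, adapting to the region type."""
    top, bottom = depth[y - 1, x:x + b], depth[y + b, x:x + b]
    left, right = depth[y:y + b, x - 1], depth[y:y + b, x + b]
    region = dominant_direction(depth[y - b:y + 2 * b, x - b:x + 2 * b])
    block = np.empty((b, b), dtype=np.float32)
    if region == "vertical":          # interpolate along the edge: column-wise
        for j in range(b):
            block[:, j] = np.linspace(top[j], bottom[j], b + 2)[1:-1]
    elif region == "horizontal":      # interpolate row-wise
        for i in range(b):
            block[i, :] = np.linspace(left[i], right[i], b + 2)[1:-1]
    else:                             # smooth region: average of the four borders
        block[:] = np.mean([top.mean(), bottom.mean(), left.mean(), right.mean()])
    return block
```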

Depth error calibration of maladjusted stereo cameras for translation of instrumented image information in dynamic objects (동영상 정보의 계측정보 전송을 위한 비선형 스테레오 카메라의 오차 보정)

  • Kim, Jong-Man; Kim, Yeong-Min; Hwang, Jong-Sun; Lim, Byung-Hyun
    • Proceedings of the Korean Institute of Electrical and Electronic Material Engineers Conference / 2003.05b / pp.109-114 / 2003
  • A depth error correction effect for maladjusted stereo cameras using a calibrated pixel distance parameter is presented. Camera calibration is a necessary procedure for stereo vision-based depth computation: the intrinsic and extrinsic parameters must be obtained experimentally to determine the relation between image and world coordinates. One difficulty lies in aligning the cameras for parallel installation, that is, placing the two CCD arrays in a single plane; no effective method for such alignment has been presented before, so some depth error caused by non-parallel installation is inevitable. If the pixel distance parameter, one of the intrinsic parameters, is calibrated with known points, this error can be partially compensated. The compensation effect achieved with the calibrated pixel distance parameter is demonstrated with various experimental results.
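
For reference, depth in a parallel stereo rig follows Z = f·B/d, where the disparity d in physical units depends on the pixel distance (pixel pitch) parameter. The sketch below is a minimal illustration of how re-estimating the pixel pitch from points with known depth can reduce a systematic depth bias; it is not the paper's calibration procedure, and all function names and numerical values are assumptions.

```python
import numpy as np

def depth_from_disparity(disparity_px, focal_mm, baseline_mm, pixel_pitch_mm):
    """Parallel-stereo depth: Z = f * B / (d_px * pixel_pitch)."""
    return focal_mm * baseline_mm / (disparity_px * pixel_pitch_mm)

def calibrate_pixel_pitch(disparities_px, true_depths_mm, focal_mm, baseline_mm):
    """Estimate the pixel pitch from reference points with known depth.

    Since Z = f*B / (d*p), each reference point gives p = f*B / (d*Z);
    the per-point estimates are averaged.
    """
    d = np.asarray(disparities_px, dtype=float)
    z = np.asarray(true_depths_mm, dtype=float)
    return float(np.mean(focal_mm * baseline_mm / (d * z)))

# Hypothetical rig: nominal pitch 0.010 mm, actual effective pitch ~0.0104 mm
f, B = 8.0, 60.0                             # focal length and baseline in mm
d = np.array([40.0, 25.0, 16.0])             # measured disparities in pixels
Z_true = np.array([1153.8, 1846.2, 2884.6])  # reference depths in mm

p_cal = calibrate_pixel_pitch(d, Z_true, f, B)
print(depth_from_disparity(d, f, B, 0.010))  # biased depths with the nominal pitch
print(depth_from_disparity(d, f, B, p_cal))  # compensated depths
```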


The accuracy of the depth perception of 3-dimensional images (이안식 입체영상에서 심도지각의 정확성에 관한 연구)

  • Cho, Am
    • Journal of the Ergonomics Society of Korea / v.13 no.1 / pp.37-46 / 1994
  • The error magnitude and discrimination range in the perception of depth from 3-dimensional images by the human visual system provide basic data for the utilization and application of binocular 3-dimensional image systems. This paper focuses on the accuracy of the depth perceived from 3-dimensional images by the human visual system. The experiments yielded the following results: (1) The depth perceived from binocular 3-dimensional images, expressed on a distance scale, was found to be imprecise and to have a large variance. (2) When using a binocular 3-dimensional image system, presenting images that appear to protrude from the screen rather than recede behind it seems more appropriate with respect to error and variance. (3) A binocular 3-dimensional image system can be applied effectively to displaying unreal space, for example a room layout in design, given these depth-perception characteristics.


The accuracy of the depth perception of 3-dimensional images (이안식 입체영상에서 심도지각의 정확성에 관한 연구)

  • Cho, Am
    • Proceedings of the ESK Conference / 1994.04a / pp.19-31 / 1994
  • The error magnitude and discrimination range in the perception of depth from 3-dimensional images by the human visual system provide basic data for the utilization and application of binocular 3-dimensional image systems. This paper focuses on the accuracy of the depth perceived from 3-dimensional images by the human visual system. The experiments yielded the following results: (1) The depth perceived from binocular 3-dimensional images, expressed on a distance scale, was found to be imprecise and to have a large variance. (2) When using a binocular 3-dimensional image system, presenting images that appear to protrude from the screen rather than recede behind it seems more appropriate with respect to error and variance. (3) A binocular 3-dimensional image system can be applied effectively to displaying unreal space, for example a room layout in design, given these depth-perception characteristics.


A Landmark Based Localization System using a Kinect Sensor (키넥트 센서를 이용한 인공표식 기반의 위치결정 시스템)

  • Park, Kwiwoo; Chae, JeongGeun; Moon, Sang-Ho; Park, Chansik
    • The Transactions of The Korean Institute of Electrical Engineers / v.63 no.1 / pp.99-107 / 2014
  • In this paper, a landmark-based localization system using a Kinect sensor is proposed and evaluated with an implemented system for precise, autonomous navigation of low-cost robots. The proposed method finds the landmark positions on the image plane and the corresponding depth values using the color and depth images. Coordinate transformations defined with the depth value convert each image-plane position into a position in the body frame, and the range between a landmark and the Kinect sensor is the norm of that landmark position in the body frame. The Kinect sensor position is then computed by trilateration from the ranges and the known landmark positions. In addition, a new matching method based on the pinhole model is proposed to reduce the mismatch between the depth and color images, and a height error compensation method using the relationship between the body frame and real-world coordinates is proposed to reduce the effect of imperfect leveling. An error analysis is also given to show how the focal length, principal point, and depth value affect the range. Experiments with 2D bar codes on the implemented system show that positions with less than 3 cm of error are obtained in an enclosed space (3,500 mm × 3,000 mm × 2,500 mm).
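
The two computational steps outlined above, back-projecting a landmark pixel with its depth through the pinhole model and then trilaterating the sensor position from the resulting ranges, can be sketched roughly as follows. This is a generic illustration under assumed intrinsics and a hypothetical landmark map, not the authors' implementation.

```python
import numpy as np

def backproject(u, v, depth, fx, fy, cx, cy):
    """Pinhole back-projection of pixel (u, v) with its depth into the sensor frame."""
    x = (u - cx) * depth / fx
    y = (v - cy) * depth / fy
    return np.array([x, y, depth])

def trilaterate(landmarks, ranges):
    """Least-squares trilateration: solve ||p - L_i|| = r_i for the sensor position p.

    Subtracting the equation of the first landmark from the others
    linearizes the system into A p = b.
    """
    L = np.asarray(landmarks, dtype=float)
    r = np.asarray(ranges, dtype=float)
    A = 2.0 * (L[1:] - L[0])
    b = (r[0] ** 2 - r[1:] ** 2) + np.sum(L[1:] ** 2, axis=1) - np.sum(L[0] ** 2)
    p, *_ = np.linalg.lstsq(A, b, rcond=None)
    return p

# Hypothetical landmark map (mm) and ranges computed from a known sensor position
landmarks = [[0, 0, 0], [3500, 0, 0], [0, 3000, 0], [3500, 3000, 2500]]
sensor_true = np.array([1200.0, 900.0, 400.0])
ranges = [np.linalg.norm(sensor_true - np.array(l)) for l in landmarks]
print(trilaterate(landmarks, ranges))   # ~ [1200, 900, 400]
```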

Optimized Multiple Description Lattice Vector Quantization Coding for 3D Depth Image

  • Zhang, Huiwen; Bai, Huihui; Liu, Meiqin; Zhao, Yao
    • KSII Transactions on Internet and Information Systems (TIIS) / v.9 no.3 / pp.1140-1154 / 2015
  • Multiple description (MD) coding is a promising alternative for the robust transmission of information over error-prone channels, and lattice vector quantization (LVQ) is an important technique for designing an MD image coder. Unlike a traditional 2D texture image, however, a 3D depth image has special characteristics that should be taken into account for efficient compression. In this paper, an optimized MDLVQ scheme is proposed in view of these characteristics. First, owing to the sparsity of depth images, image blocks are classified into edge blocks and smooth blocks, which are encoded in different modes. Furthermore, the LVQ step size is adapted per block according to the boundary content of the edge blocks. Experimental results validate the effectiveness of the proposed scheme, which shows better rate-distortion performance than conventional MDLVQ.
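
The block-level adaptation described above can be illustrated crudely: classify each block as edge or smooth from its gradient activity and choose a quantizer step size accordingly. The sketch below substitutes a uniform scalar quantizer for the lattice vector quantizer and uses assumed thresholds and step sizes; it is not the paper's MDLVQ design.

```python
import numpy as np

def classify_blocks(depth, block=8, edge_thresh=4.0):
    """Label each block 'edge' or 'smooth' from its mean absolute gradient."""
    gy, gx = np.gradient(depth.astype(np.float32))
    activity = np.abs(gx) + np.abs(gy)
    labels = {}
    h, w = depth.shape
    for y in range(0, h - block + 1, block):
        for x in range(0, w - block + 1, block):
            mean_act = activity[y:y + block, x:x + block].mean()
            labels[(y, x)] = "edge" if mean_act > edge_thresh else "smooth"
    return labels

def quantize_block(block_px, step):
    """Uniform quantization as a stand-in for the per-block lattice quantizer."""
    return np.round(block_px / step) * step

def encode(depth, block=8, step_smooth=16.0, step_edge=4.0):
    """Smooth blocks tolerate a coarser step; edge blocks get a finer one."""
    out = depth.astype(np.float32).copy()
    for (y, x), label in classify_blocks(depth, block).items():
        step = step_edge if label == "edge" else step_smooth
        out[y:y + block, x:x + block] = quantize_block(out[y:y + block, x:x + block], step)
    return out
```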

SuperDepthTransfer: Depth Extraction from Image Using Instance-Based Learning with Superpixels

  • Zhu, Yuesheng; Jiang, Yifeng; Huang, Zhuandi; Luo, Guibo
    • KSII Transactions on Internet and Information Systems (TIIS) / v.11 no.10 / pp.4968-4986 / 2017
  • In this paper, we primarily address the difficulty of automatically generating a plausible depth map from a single image of an unstructured environment. The aim is to extrapolate a depth map with a more correct, rich, and distinct depth ordering that is both quantitatively accurate and visually pleasing. Our technique, fundamentally based on the preexisting DepthTransfer algorithm, transfers depth information at the level of superpixels, within a framework that replaces the pixel basis with instance-based learning. A key superpixel feature that enhances matching precision is the posterior incorporation of predicted semantic labels into the depth extraction procedure. Finally, a modified cross bilateral filter is used to refine the final depth field. Experiments conducted on the Make3D Range Image Dataset for training and evaluation demonstrate that this depth estimation method outperforms state-of-the-art methods on the correlation coefficient, mean log10 error, and root mean squared error metrics, and achieves comparable performance on the average relative error metric, in both efficacy and computational efficiency. The approach can be used to automatically convert 2D images to stereo for 3D visualization, producing anaglyph images that are more realistic and immersive.
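
The superpixel-level transfer idea can be illustrated with a toy pipeline: segment the input into superpixels, assign each superpixel a depth pooled from a candidate depth map (standing in for the retrieved instance-based candidates), and refine the result with a cross/joint bilateral filter guided by the color image. This is a drastically simplified sketch, not the SuperDepthTransfer algorithm; the use of scikit-image SLIC and OpenCV's ximgproc joint bilateral filter, and all parameter values, are assumptions.

```python
import numpy as np
import cv2                                   # jointBilateralFilter needs opencv-contrib
from skimage.segmentation import slic

def superpixel_depth(rgb, candidate_depth, n_segments=400):
    """Pool a candidate depth map over superpixels of the input image."""
    segments = slic(rgb, n_segments=n_segments, compactness=10)
    depth = np.zeros_like(candidate_depth, dtype=np.float32)
    for label in np.unique(segments):
        mask = segments == label
        depth[mask] = np.median(candidate_depth[mask])    # one depth per superpixel
    return depth

def refine_with_cross_bilateral(rgb, depth, d=9, sigma_color=25.0, sigma_space=9.0):
    """Edge-aware refinement of the depth field guided by the color image."""
    return cv2.ximgproc.jointBilateralFilter(
        rgb.astype(np.float32), depth.astype(np.float32), d, sigma_color, sigma_space)
```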

Conversion Method of 3D Point Cloud to Depth Image and Its Hardware Implementation (3차원 점군데이터의 깊이 영상 변환 방법 및 하드웨어 구현)

  • Jang, Kyounghoon; Jo, Gippeum; Kim, Geun-Jun; Kang, Bongsoon
    • Journal of the Korea Institute of Information and Communication Engineering / v.18 no.10 / pp.2443-2450 / 2014
  • In a motion recognition system based on depth images, the depth image is converted to real-world 3D point cloud data so that algorithms can be applied efficiently, and the output is then converted back to projective coordinates to form a depth image. During this coordinate conversion, however, rounding errors and data loss caused by the applied algorithm occur. In this paper, we propose an efficient method, and its hardware implementation, for converting 3D point cloud data to a depth image without rounding error or data loss when the image size changes. The proposed system was developed with OpenCV as a Windows program and tested in real time using a Kinect. It was also designed in Verilog-HDL and verified on a Xilinx Zynq-7000 FPGA board.
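
For reference, the forward and inverse conversions between projective coordinates (pixel plus depth) and real-world 3D points follow the pinhole model. The sketch below uses assumed Kinect-like intrinsics and a naive back-projection whose rounding step illustrates the error source the abstract mentions; it is not the paper's lossless, hardware-oriented method.

```python
import numpy as np

# Assumed Kinect-like intrinsics (focal lengths and principal point in pixels)
FX, FY, CX, CY = 594.2, 591.0, 320.0, 240.0

def depth_to_points(depth_mm):
    """Projective (u, v, Z) -> real-world (X, Y, Z) for every pixel."""
    h, w = depth_mm.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth_mm.astype(np.float32)
    x = (u - CX) * z / FX
    y = (v - CY) * z / FY
    return np.stack([x, y, z], axis=-1)

def points_to_depth(points, h, w):
    """Real-world (X, Y, Z) -> depth image by projecting back onto the image plane."""
    x, y, z = points[..., 0], points[..., 1], points[..., 2]
    u = np.round(x * FX / np.maximum(z, 1e-6) + CX).astype(int)
    v = np.round(y * FY / np.maximum(z, 1e-6) + CY).astype(int)
    depth = np.zeros((h, w), dtype=np.float32)
    valid = (z > 0) & (u >= 0) & (u < w) & (v >= 0) & (v < h)
    depth[v[valid], u[valid]] = z[valid]    # rounding here is where naive conversion loses data
    return depth
```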

Implementation of Paper Keyboard Piano with a Kinect (키넥트를 이용한 종이건반 피아노 구현 연구)

  • Lee, Jung-Chul; Kim, Min-Seong
    • Journal of the Korea Society of Computer and Information / v.17 no.12 / pp.219-228 / 2012
  • In this paper, we propose a paper keyboard piano based on finger movement detection with 3D image data from a Kinect. The keyboard pattern and the keyboard depth are extracted from the color and depth images to detect touch events on the paper keyboard and to identify the touched key. Hand-region detection errors are unavoidable when simply comparing the input depth image against the background depth image, and such errors are critical for key-touch detection, so skin color is used to minimize them. Fingertips are then detected using contour detection with an area limit and a convex hull, and the key-touch decision is made from the keyboard pattern information at the fingertip position. Experimental results showed that the proposed method can detect key touches with high accuracy. The paper keyboard piano can serve as an easy, convenient interface for beginners learning to play the piano with PC-based learning software.
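
A rough sketch of the segmentation chain described above, combining a background-depth difference, a skin-color mask, and convex-hull fingertip candidates, is given below. It is a generic OpenCV illustration, not the authors' implementation; the depth threshold, skin-color bounds, and area limit are assumptions.

```python
import cv2
import numpy as np

def fingertip_candidates(depth, background_depth, bgr, depth_delta=15, min_area=1500):
    """Hand segmentation and fingertip candidates from depth difference, skin color,
    and the convex hull of large contours."""
    # 1) Foreground from the depth difference against the captured background
    fg = (background_depth.astype(np.int32) - depth.astype(np.int32)) > depth_delta

    # 2) Skin-color mask in YCrCb to suppress depth-difference noise
    ycrcb = cv2.cvtColor(bgr, cv2.COLOR_BGR2YCrCb)
    skin = cv2.inRange(ycrcb, (0, 133, 77), (255, 173, 127)) > 0

    hand = (fg & skin).astype(np.uint8) * 255
    contours, _ = cv2.findContours(hand, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)

    tips = []
    for c in contours:
        if cv2.contourArea(c) < min_area:            # area limit rejects small blobs
            continue
        hull = cv2.convexHull(c)
        tips.extend([tuple(p[0]) for p in hull])     # hull points as fingertip candidates
    return tips
```

The touched key would then be looked up from the keyboard pattern at each candidate position once a touch is confirmed from the depth at that pixel.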

A Study On Positioning Of Mouse Cursor Using Kinect Depth Camera (Kinect Depth 카메라를이용한 마우스 커서의 위치 선정에 관한 연구)

  • Goo, Bong-Hoe; Lee, Seung-Ho
    • Journal of IKEEE / v.18 no.4 / pp.478-484 / 2014
  • In this paper, we propose a new algorithm for positioning a mouse cursor from the fingertip direction observed with a Kinect depth camera. When the fingertip points toward the screen, the algorithm uses the palm center obtained from a distance transform; otherwise, it uses the fingertip point. After image preprocessing, the palm center is calculated from the distance transform result. When the finger points toward the camera, the distance between the fingertip and the palm center becomes small, so positioning accuracy can be improved by using the palm center instead. After the arm is removed from the image, the fingertip is obtained as the pixel farthest from the palm center. To evaluate positioning accuracy, five reference points were selected and the error between the reference points and the cursor positions was measured over 500 trials; the results confirmed an average error rate of less than 11%.
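
The palm-center and fingertip extraction steps described above can be sketched with OpenCV as follows. This is a generic illustration of the distance-transform idea, not the paper's algorithm; the binary hand mask is assumed to be given, and the switching threshold is an assumption.

```python
import cv2
import numpy as np

def palm_center_and_fingertip(hand_mask):
    """Palm center from the distance-transform peak; fingertip as the hand pixel
    farthest from that center. hand_mask is a binary (0/255) hand silhouette."""
    dist = cv2.distanceTransform(hand_mask, cv2.DIST_L2, 5)
    _, _, _, palm = cv2.minMaxLoc(dist)          # (x, y) of the deepest interior point

    ys, xs = np.nonzero(hand_mask)
    d2 = (xs - palm[0]) ** 2 + (ys - palm[1]) ** 2
    i = int(np.argmax(d2))
    fingertip = (int(xs[i]), int(ys[i]))
    return palm, fingertip

def choose_cursor_point(palm, fingertip, toward_camera_thresh=40):
    """Use the palm center when the finger points toward the camera (fingertip and
    palm nearly coincide in the image); otherwise use the fingertip."""
    dx, dy = fingertip[0] - palm[0], fingertip[1] - palm[1]
    if (dx * dx + dy * dy) ** 0.5 < toward_camera_thresh:
        return palm
    return fingertip
```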