• Title/Summary/Keyword: global depth map


Multi-view Synthesis Algorithm for the Better Efficiency of Codec (부복호화기 효율을 고려한 다시점 영상 합성 기법)

  • Choi, In-kyu;Cheong, Won-sik;Lee, Gwangsoon;Yoo, Jisang
    • Journal of the Korea Institute of Information and Communication Engineering / v.20 no.2 / pp.375-384 / 2016
  • In this paper, we propose a new method that, given a stereo image pair, satellite views, and their corresponding depth maps as input, converts these data into a format suitable for compression and then synthesizes intermediate views from that format. At the transmitter, the depth maps are merged into a global depth map, and the satellite views are converted into residual images whose hole regions correspond to out-of-frame and occlusion areas. These images are subsampled to reduce the amount of data and, together with the stereo images of the main view, are encoded with an HEVC codec and transmitted. At the receiver, intermediate views between the stereo images, and between the stereo images and the satellite views, are synthesized using the decoded global depth map, residual images, and stereo images. Through experiments, we confirm that the intermediate views synthesized from the proposed format are of good quality, both subjectively and objectively, compared with intermediate views synthesized from the MVD format at the same total bit-rate.
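Intermediate-view synthesis from a depth map rests on depth-image-based warping. A minimal numpy sketch of that step follows; the focal length, baseline, and the -1 hole marker are illustrative assumptions (in the paper's pipeline, holes would be filled from the residual images):

```python
import numpy as np

def warp_view(image, depth, baseline=0.1, focal=500.0):
    """Forward-warp a view horizontally using per-pixel depth.

    Disparity is focal * baseline / depth; each pixel maps to a shifted
    column, and columns that receive no pixel stay holes (-1).
    """
    h, w = depth.shape
    warped = -np.ones_like(image)                     # -1 marks holes
    disparity = (focal * baseline / depth).round().astype(int)
    for y in range(h):
        for x in range(w):
            nx = x - disparity[y, x]
            if 0 <= nx < w:
                warped[y, nx] = image[y, x]
    return warped
```

A real synthesizer would also blend contributions from both stereo views and inpaint the remaining holes.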

Object Recognition-based Global Localization for Mobile Robots (이동로봇의 물체인식 기반 전역적 자기위치 추정)

  • Park, Soon-Yong;Park, Mignon;Park, Sung-Kee
    • The Journal of Korea Robotics Society / v.3 no.1 / pp.33-41 / 2008
  • Based on object recognition technology, we present a new global localization method for robot navigation. To do this, we model an indoor environment using the following visual cues from a stereo camera: view-based image features for object recognition, and their 3D positions for object pose estimation. We also use the depth information along the horizontal centerline of the image, through which the optical axis passes, which is similar to the data of a 2D laser range finder. Therefore, we can build a hybrid local node for a topological map that is composed of an indoor metric map and an object location map. Based on such modeling, we suggest a coarse-to-fine strategy for estimating the global pose of a mobile robot: the coarse pose is obtained by means of object recognition and SVD-based least-squares fitting, and its refined pose is then estimated with a particle filtering algorithm. With real experiments, we show that the proposed method can be an effective vision-based global localization algorithm.
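The SVD-based least-squares fitting used for the coarse pose is a standard rigid-alignment computation (often called the Kabsch solution). The sketch below assumes the 3D correspondences from object recognition are already given:

```python
import numpy as np

def svd_rigid_fit(src, dst):
    """Least-squares rigid transform (R, t) aligning src -> dst via SVD.

    src, dst: (N, 3) arrays of corresponding 3D points, N >= 3.
    """
    src_c, dst_c = src.mean(axis=0), dst.mean(axis=0)
    H = (src - src_c).T @ (dst - dst_c)            # cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))         # guard against reflection
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = dst_c - R @ src_c
    return R, t
```

In the paper's pipeline this coarse estimate would then seed the particle filter for refinement.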


2D-to-3D Conversion System using Depth Map Enhancement

  • Chen, Ju-Chin;Huang, Meng-yuan
    • KSII Transactions on Internet and Information Systems (TIIS) / v.10 no.3 / pp.1159-1181 / 2016
  • This study introduces an image-based 2D-to-3D conversion system that provides significant stereoscopic visual effects for viewers. Linear and atmospheric perspective cues, which compensate for each other, are employed to estimate depth information. Rather than retrieving a precise depth value for each pixel from these cues, the direction angle of the image is estimated, and a depth gradient following that angle is integrated with superpixels to obtain the depth map. However, the stereoscopic effects of views synthesized from this depth map alone are limited and can dissatisfy viewers. To obtain more impressive visual effects, the viewer's main focus is considered: salient object detection is performed to find the region of visual attention, and the depth map is then refined by locally modifying the depth values within that region. The refinement not only maintains global depth consistency by correcting non-uniform depth values but also enhances the stereoscopic effect. Experimental results show that in subjective evaluation, the degree of satisfaction with the proposed method is approximately 7% greater than with both existing commercial conversion software and a state-of-the-art approach.
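The depth-gradient construction described above can be sketched as a planar depth map aligned with the estimated direction angle; the superpixel integration and saliency-based refinement are not shown, and the [0, 1] normalization is an illustrative choice:

```python
import numpy as np

def directional_depth_gradient(h, w, angle_rad):
    """Build a planar depth map whose depth increases along a direction angle.

    Each pixel's depth is its projection onto the unit vector of angle_rad,
    normalized to [0, 1]; 0 = near, 1 = far.
    """
    ys, xs = np.mgrid[0:h, 0:w]
    proj = xs * np.cos(angle_rad) + ys * np.sin(angle_rad)
    proj = proj - proj.min()
    return proj / proj.max()
```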

Depth Generation using Bifocal Stereo Camera System for Autonomous Driving (자율주행을 위한 이중초점 스테레오 카메라 시스템을 이용한 깊이 영상 생성 방법)

  • Lee, Eun-Kyung
    • The Journal of the Korea institute of electronic communication sciences / v.16 no.6 / pp.1311-1316 / 2021
  • In this paper, we present a bifocal stereo camera system that combines two cameras with different focal lengths to generate stereoscopic images and their corresponding depth map. To obtain depth data with this system, we perform camera calibration to extract the intrinsic and extrinsic parameters of each camera. Using these parameters, we calculate a common image plane and perform image rectification, and finally generate the depth map with a semi-global matching (SGM) algorithm. The proposed bifocal stereo camera system not only performs the cameras' own functions but also generates distance information about vehicles, pedestrians, and obstacles in the current driving environment, which makes it possible to design safer autonomous vehicles.
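As a hedged sketch of the matching stage on already-rectified images, the following implements winner-takes-all SAD block matching. The actual SGM the paper uses additionally aggregates smoothness-penalized costs along several scanline directions, which this purely local matcher omits:

```python
import numpy as np

def block_match_disparity(left, right, max_disp=16, win=2):
    """Winner-takes-all disparity by SAD block matching on rectified images.

    left, right: 2-D grayscale arrays rectified so that corresponding
    points lie on the same scanline. Returns an integer disparity map.
    """
    h, w = left.shape
    L = np.pad(left.astype(float), win, mode='edge')
    R = np.pad(right.astype(float), win, mode='edge')
    disp = np.zeros((h, w), dtype=int)
    for y in range(h):
        for x in range(w):
            patch = L[y:y + 2 * win + 1, x:x + 2 * win + 1]
            best, best_d = np.inf, 0
            for d in range(min(max_disp, x) + 1):   # candidate shifts
                cand = R[y:y + 2 * win + 1, x - d:x - d + 2 * win + 1]
                sad = np.abs(patch - cand).sum()    # sum of absolute diffs
                if sad < best:
                    best, best_d = sad, d
            disp[y, x] = best_d
    return disp
```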

Improvement of Disparity Map using Loopy Belief Propagation based on Color and Edge (Disparity 보정을 위한 컬러와 윤곽선 기반 루피 신뢰도 전파 기법)

  • Kim, Eun Kyeong;Cho, Hyunhak;Lee, Hansoo;Wibowo, Suryo Adhi;Kim, Sungshin
    • Journal of the Korean Institute of Intelligent Systems / v.25 no.5 / pp.502-508 / 2015
  • Stereo images have the advantage that depth (distance) values, which cannot be obtained from a single 2-D image, can be calculated from them. However, depth information obtained from stereo images has limitations for the following reasons: it requires a heavy computation process, and mismatching occurs during stereo matching in occluded regions, which affects the accuracy of the computed depth. Moreover, a global stereo matching method requires a large amount of computation. Therefore, this paper proposes a method for obtaining a disparity map that reduces computation time and is more accurate than established methods. Edge extraction, a feature-based image segmentation, is used to improve accuracy and reduce computation time. The color K-means method, a color-based image segmentation, estimates the correlation of objects in the image and extracts the region of interest to which loopy belief propagation (LBP) is applied. In this way, the disparity map can be compensated by considering the correlation of objects in the image, and computation time is reduced because LBP is calculated only over the region of interest rather than all pixels. As a result, the disparity map is more accurate and the proposed method reduces computation time.
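Loopy belief propagation runs on the 2-D pixel grid within the region of interest; as a simplified illustration, min-sum message passing on a 1-D chain of pixels (where the same update is exact) shows how a pairwise smoothness term corrects a noisy data term. The linear pairwise cost is an assumption for the sketch:

```python
import numpy as np

def chain_bp_disparity(unary, smooth_weight=1.0):
    """Min-sum message passing on a 1-D chain of pixels.

    unary: (N, D) data costs per pixel and disparity label; the pairwise
    cost is a linear penalty smooth_weight * |d_i - d_j|. Returns the
    min-marginal label per pixel.
    """
    n, d = unary.shape
    labels = np.arange(d)
    pair = smooth_weight * np.abs(labels[:, None] - labels[None, :])
    fwd = np.zeros((n, d))                 # messages passed left-to-right
    for i in range(1, n):
        fwd[i] = ((unary[i - 1] + fwd[i - 1])[:, None] + pair).min(axis=0)
    bwd = np.zeros((n, d))                 # messages passed right-to-left
    for i in range(n - 2, -1, -1):
        bwd[i] = ((unary[i + 1] + bwd[i + 1])[:, None] + pair).min(axis=0)
    return np.argmin(unary + fwd + bwd, axis=1)
```

Restricting such updates to the extracted region of interest is what saves computation in the proposed method.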

Indoor Environment Modeling with Stereo Camera for Mobile Robot Navigation

  • Park, Sung-Kee;Park, Jong-Suk;Kim, Munsang;Lee, Chong-won
    • Proceedings of the Institute of Control, Robotics and Systems Conference / 2002.10a / pp.34.5-34 / 2002
  • In this paper we propose a new method for modeling an indoor environment with a stereo camera and suggest a localization method for mobile robot navigation based on it. From the viewpoint of ease of map building and exclusion of artificiality, the main idea is that the environment is represented as a global topological map, where each node holds omni-directional metric and color information acquired with a stereo camera and a pan/tilt mechanism. We use the depth and color information of the image pixels themselves as features for environmental abstraction. In addition, we use only the depth and color information at the horizontal centerline of the image, through which the optical axis passes. The usefulness of this m...
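Extracting the horizontal centerline of a depth image as a virtual 2-D range scan, as described above, might look like the following; the field of view is an assumed parameter:

```python
import numpy as np

def centerline_scan(depth_image, fov_rad=np.pi / 3):
    """Convert the horizontal centerline of a depth image into a 2-D
    range scan like a laser range finder: (bearing, range) pairs.
    """
    h, w = depth_image.shape
    ranges = depth_image[h // 2, :]                       # centerline row
    bearings = np.linspace(-fov_rad / 2, fov_rad / 2, w)  # per-column angle
    return bearings, ranges
```

The same indexing applied to the color image would give the color component of each node's omni-directional record.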


Development of Autonomous Driving Electric Vehicle for Logistics with a Robotic Arm (로봇팔을 지닌 물류용 자율주행 전기차 플랫폼 개발)

  • Eui-Jung Jung;Sung Ho Park;Kwang Woo Jeon;Hyunseok Shin;Yunyong Choi
    • The Journal of Korea Robotics Society / v.18 no.1 / pp.93-98 / 2023
  • In this paper, the development of an autonomous electric vehicle for logistics with a robotic arm is introduced. A manually driven electric vehicle was converted into an electric vehicle platform capable of autonomous driving. For autonomous driving, an encoder is installed on the driving wheels, and an electronic power steering system is applied for automatic steering. The vehicle is equipped with a lidar sensor, a depth camera, and an ultrasonic sensor to recognize the surrounding environment, create a map, and localize the vehicle. Odometry was calculated using the bicycle motion model, and the map was created using a SLAM algorithm. To estimate the location of the platform on the generated map, the lidar-based AMCL algorithm was applied. A user interface was developed to create and modify waypoints so that the vehicle moves to predetermined places according to the logistics process. An A*-based global path is generated to reach the destination, and a DWA-based local path is generated to trace the global path. The autonomous electric vehicle developed in this paper was tested in a warehouse and its utility was verified.
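The A*-based global path step can be sketched on an occupancy grid; the DWA local planner is outside this sketch, and the 4-connected grid with unit step costs is an illustrative assumption:

```python
import heapq
from itertools import count

def astar(grid, start, goal):
    """A* over a 4-connected occupancy grid (0 = free, 1 = occupied).

    Returns the list of cells from start to goal, or None if unreachable.
    """
    h = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])  # Manhattan
    rows, cols = len(grid), len(grid[0])
    tie = count()                       # tie-breaker for the heap
    g_cost = {start: 0}
    open_set = [(h(start), next(tie), start, None)]
    came_from = {}
    while open_set:
        _, _, cur, parent = heapq.heappop(open_set)
        if cur in came_from:            # already expanded with optimal cost
            continue
        came_from[cur] = parent
        if cur == goal:                 # walk parents back to start
            path = []
            while cur is not None:
                path.append(cur)
                cur = came_from[cur]
            return path[::-1]
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nxt = (cur[0] + dr, cur[1] + dc)
            if (0 <= nxt[0] < rows and 0 <= nxt[1] < cols
                    and grid[nxt[0]][nxt[1]] == 0
                    and g_cost[cur] + 1 < g_cost.get(nxt, float('inf'))):
                g_cost[nxt] = g_cost[cur] + 1
                heapq.heappush(open_set,
                               (g_cost[nxt] + h(nxt), next(tie), nxt, cur))
    return None
```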

Unsupervised Monocular Depth Estimation Using Self-Attention for Autonomous Driving (자율주행을 위한 Self-Attention 기반 비지도 단안 카메라 영상 깊이 추정)

  • Seung-Jun Hwang;Sung-Jun Park;Joong-Hwan Baek
    • Journal of Advanced Navigation Technology / v.27 no.2 / pp.182-189 / 2023
  • Depth estimation is a key technology in 3D map generation for the autonomous driving of vehicles, robots, and drones. Existing sensor-based methods have high accuracy but are expensive and have low resolution, while camera-based methods are more affordable and offer higher resolution. In this study, we propose self-attention-based unsupervised monocular depth estimation for a UAV camera system. A self-attention operation is applied to the network to improve global feature extraction, and the weight size of the self-attention operation is reduced to lower the computational cost. The estimated depth and camera pose are transformed into a point cloud, which is mapped into a 3D map using the occupancy grid of an octree structure. The proposed network is evaluated using synthesized images and depth sequences from the Mid-Air dataset and demonstrates a 7.69% reduction in error compared to prior studies.
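For reference, plain single-head scaled dot-product self-attention over a feature sequence looks like the following numpy sketch; the paper's reduced-weight variant is not shown, and the projection shapes are illustrative:

```python
import numpy as np

def self_attention(x, Wq, Wk, Wv):
    """Single-head scaled dot-product self-attention.

    x: (N, d_in) tokens (e.g. flattened feature-map positions);
    Wq, Wk, Wv: (d_in, d_k) projection matrices.
    Returns the attended features and the attention weights.
    """
    q, k, v = x @ Wq, x @ Wk, x @ Wv
    scores = q @ k.T / np.sqrt(k.shape[1])        # scaled dot products
    scores -= scores.max(axis=1, keepdims=True)   # numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=1, keepdims=True) # row-wise softmax
    return weights @ v, weights
```

Because every token attends to every other, the operation gives the network the global receptive field that plain convolutions lack.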

Soil Related Parameters Assessment Comparing Runoff Analysis using Harmonized World Soil Database (HWSD) and Detailed Soil Map (HWSD와 정밀토양도를 이용한 유출해석시 토양 매개변수 특성 비교 평가)

  • Choi, Yun Seok;Jung, Young Hun;Kim, Joo Hun;Kim, Kyung-Tak
    • Journal of The Korean Society of Agricultural Engineers / v.58 no.4 / pp.57-66 / 2016
  • The Harmonized World Soil Database (HWSD), which provides global soil information, has been used for runoff analysis in many watersheds around the world. However, its accuracy can be a critical issue in modeling because its low resolution limits how well it reflects the physical properties of the soil in a watershed. Accordingly, this study assessed the effect of HWSD on modeling by comparing the parameters of a rainfall-runoff model using HWSD against those using a detailed soil map. For this, the Grid-based Rainfall-runoff Model (GRM) was employed in the Hyangseok watershed. The results showed that the rainfall-runoff model captures the observed runoff well with either soil map. However, compared with the detailed soil map, HWSD produced more uncertainty during calibration in the GRM parameters related to soil depth and hydraulic conductivity. Therefore, the uncertainty arising from the limited soil texture information in HWSD should be considered for better calibration of a rainfall-runoff model.

Fusion System of Time-of-Flight Sensor and Stereo Cameras Considering Single Photon Avalanche Diode and Convolutional Neural Network (SPAD과 CNN의 특성을 반영한 ToF 센서와 스테레오 카메라 융합 시스템)

  • Kim, Dong Yeop;Lee, Jae Min;Jun, Sewoong
    • The Journal of Korea Robotics Society / v.13 no.4 / pp.230-236 / 2018
  • 3D depth perception has played an important role in robotics, and many sensing methods have been proposed for it. As a photodetector for 3D sensing, the single photon avalanche diode (SPAD) is attractive due to its sensitivity and accuracy. We have researched applying a SPAD chip in our fusion system of a time-of-flight (ToF) sensor and a stereo camera. Our goal is to upsample the SPAD resolution using an RGB stereo camera. Currently, we have a 64 x 32 resolution SPAD ToF sensor, even though there are higher-resolution depth sensors such as the Kinect V2 and Cube-Eye. This may be a weak point of our system, but we exploit this gap with a change of approach: a convolutional neural network (CNN) is designed to upsample our low-resolution depth map, using higher-resolution depth data as labels. The upsampled depth data from the CNN and the stereo camera depth data are then fused using a semi-global matching (SGM) algorithm. We propose this simplified fusion method for embedded systems.
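The data flow above, upsampling a low-resolution ToF depth map and fusing it with stereo depth, can be illustrated with a deliberately simplified sketch: nearest-neighbour upsampling stands in for the CNN, and a per-pixel confidence-weighted blend stands in for the SGM-based fusion; both substitutions are assumptions of this sketch, not the paper's method:

```python
import numpy as np

def upsample_nearest(depth, factor):
    """Nearest-neighbour upsampling, a placeholder for the CNN upsampler."""
    return np.repeat(np.repeat(depth, factor, axis=0), factor, axis=1)

def fuse_depth(tof_depth, stereo_depth, tof_conf):
    """Blend an upsampled ToF depth map with a stereo depth map.

    tof_conf in [0, 1] per pixel; the fused value is a convex combination
    of the two depth estimates.
    """
    return tof_conf * tof_depth + (1.0 - tof_conf) * stereo_depth
```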