• Title/Summary/Keyword: Map generation

Search Results: 777

Hadoop Based Wavelet Histogram for Big Data in Cloud

  • Kim, Jeong-Joon
    • Journal of Information Processing Systems / v.13 no.4 / pp.668-676 / 2017
  • Recently, the importance of big data has been emphasized with the spread of smartphones and web/SNS services. As a result, MapReduce, which can process big data efficiently, has received worldwide attention for its excellent scalability and stability. Because big data is characterized by large volume, high generation speed, and great variety, it is more efficient to process summary information of the data than the data itself. The wavelet histogram, a representative data summarization technique, can generate optimal summary information without losing the information of the original data, so systems applying MapReduce-based wavelet histogram generation have been actively studied. Existing work, however, is slow because the wavelet histogram is generated through multiple MapReduce jobs, and the error of the data restored from the histogram is likely to be large. In contrast, the MapReduce-based wavelet histogram generation system developed in this paper builds the histogram in a single MapReduce job, so the generation speed can be greatly increased. In addition, because the histogram is generated according to a user-specified error bound, the error of the data restored from the wavelet histogram can be controlled. Finally, the efficiency of the developed system is verified through a performance evaluation.
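
The key ideas here are the wavelet (Haar) summary and the user-specified error bound. Below is a minimal, single-machine Python sketch of a Haar wavelet histogram kept within a relative error bound; it is an illustration of the technique, not the paper's single-job MapReduce implementation, and the error criterion used is an assumption.

```python
# Minimal single-machine sketch of the wavelet-histogram idea (illustrative only;
# the paper builds this inside one MapReduce job, which is not reproduced here).
import numpy as np

def haar_transform(a):
    """Non-normalized Haar wavelet decomposition of a length-2^k array."""
    out = np.asarray(a, dtype=float).copy()
    n = len(out)
    while n > 1:
        half = n // 2
        avg = (out[0:n:2] + out[1:n:2]) / 2.0
        diff = (out[0:n:2] - out[1:n:2]) / 2.0
        out[:half], out[half:n] = avg, diff
        n = half
    return out

def inverse_haar(c):
    """Reconstruct the original array from Haar coefficients."""
    out = np.asarray(c, dtype=float).copy()
    n = 1
    while n < len(out):
        avg, diff = out[:n].copy(), out[n:2 * n].copy()
        rec = np.empty(2 * n)
        rec[0::2], rec[1::2] = avg + diff, avg - diff
        out[:2 * n] = rec
        n *= 2
    return out

def wavelet_histogram(values, num_buckets=8, error_bound=0.1):
    """Summarize `values` as the smallest set of Haar coefficients whose
    reconstruction stays within a relative L2 error bound (an illustrative
    criterion; num_buckets must be a power of two)."""
    counts, _ = np.histogram(values, bins=num_buckets)
    coeffs = haar_transform(counts)
    kept = np.zeros_like(coeffs)
    for idx in np.argsort(-np.abs(coeffs)):       # add coefficients, largest first
        kept[idx] = coeffs[idx]
        err = np.linalg.norm(inverse_haar(kept) - counts) / (np.linalg.norm(counts) + 1e-12)
        if err <= error_bound:
            break
    return {int(i): float(c) for i, c in enumerate(kept) if c != 0.0}
```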

Refinements of Multi-sensor based 3D Reconstruction using a Multi-sensor Fusion Disparity Map (다중센서 융합 상이 지도를 통한 다중센서 기반 3차원 복원 결과 개선)

  • Kim, Si-Jong;An, Kwang-Ho;Sung, Chang-Hun;Chung, Myung-Jin
    • The Journal of Korea Robotics Society / v.4 no.4 / pp.298-304 / 2009
  • This paper describes an algorithm that improves 3D reconstruction results using a multi-sensor fusion disparity map. LRF (Laser Range Finder) 3D points can be projected onto image pixel coordinates using the camera-LRF extrinsic calibration parameters (Φ, Δ) and the camera calibration matrix (K). An LRF disparity map can then be generated by interpolating the projected LRF points. In stereo reconstruction, invalid points caused by repeated patterns and textureless regions can be compensated using the LRF disparity map; the disparity map resulting from this compensation is the multi-sensor fusion disparity map. The multi-sensor 3D reconstruction based on stereo vision and the LRF can then be refined using this fused disparity map. The refinement algorithm is described in four subsections dealing with virtual LRF stereo image generation, LRF disparity map generation, multi-sensor fusion disparity map generation, and the 3D reconstruction process. It has been tested with synchronized stereo image pairs and LRF 3D scan data.
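
The projection step can be sketched as follows. The snippet assumes the extrinsics are given as a rotation matrix and translation vector (standing in for the paper's Φ and Δ) and uses the usual disparity = f·B/Z relation; it is an illustration, not the authors' code.

```python
# Illustrative sketch: project LRF points into the left camera image and
# convert their depths to stereo disparities (sparse LRF disparity map).
import numpy as np

def lrf_points_to_disparity(points_lrf, K, R, t, baseline, image_shape):
    """points_lrf: (N, 3) LRF points; R, t: assumed LRF->camera rotation and
    translation; K: 3x3 camera intrinsics; baseline: stereo baseline in meters."""
    h, w = image_shape
    disparity = np.full((h, w), np.nan)           # NaN marks pixels with no LRF hit
    pts_cam = points_lrf @ R.T + t                # transform into the camera frame
    pts_cam = pts_cam[pts_cam[:, 2] > 0]          # keep points in front of the camera
    proj = pts_cam @ K.T                          # pinhole projection
    u = np.round(proj[:, 0] / proj[:, 2]).astype(int)
    v = np.round(proj[:, 1] / proj[:, 2]).astype(int)
    d = K[0, 0] * baseline / pts_cam[:, 2]        # disparity = f * B / Z
    valid = (u >= 0) & (u < w) & (v >= 0) & (v < h)
    disparity[v[valid], u[valid]] = d[valid]
    return disparity                              # interpolate afterwards to densify
```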

Implementing a Depth Map Generation Algorithm by Convolutional Neural Network (깊이맵 생성 알고리즘의 합성곱 신경망 구현)

  • Lee, Seungsoo;Kim, Hong Jin;Kim, Manbae
    • Journal of Broadcast Engineering / v.23 no.1 / pp.3-10 / 2018
  • Depth maps have been utilized in a variety of fields. Recently, research on generating depth maps with artificial neural networks (ANNs) has gained much interest. This paper validates the feasibility of implementing a ready-made depth map generation method with a convolutional neural network (CNN). First, for a given image, a depth map is generated as the weighted average of a saliency map and a motion history image. A CNN is then trained on the images and their depth maps. Objective and subjective experiments performed on the CNN show that it can replace the ready-made depth generation method.
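
A compact sketch of the two stages described above, assuming (as the abstract suggests) that the target depth is a weighted average of a saliency map and a motion history image and that a small fully convolutional network regresses it. The weights, network architecture, and training loop below are illustrative, not the paper's.

```python
# Illustrative sketch: build the "ready-made" target depth, then train a toy CNN on it.
import numpy as np
import torch
import torch.nn as nn

def readymade_depth(saliency, motion_history, w_sal=0.5, w_mhi=0.5):
    """saliency, motion_history: float arrays in [0, 1] of the same size."""
    return np.clip(w_sal * saliency + w_mhi * motion_history, 0.0, 1.0)

class DepthCNN(nn.Module):
    """Toy fully convolutional network mapping an RGB frame to a depth map."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 1, 3, padding=1), nn.Sigmoid(),
        )

    def forward(self, x):
        return self.net(x)

# One training step (illustrative): regress the CNN output toward the target depth.
model, loss_fn = DepthCNN(), nn.MSELoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
frames = torch.rand(4, 3, 64, 64)                 # dummy batch of frames
targets = torch.rand(4, 1, 64, 64)                # targets would come from readymade_depth
optimizer.zero_grad()
loss = loss_fn(model(frames), targets)
loss.backward()
optimizer.step()
```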

Co-Pilot Agent for Vehicle/Driver Cooperative and Autonomous Driving

  • Noh, Samyeul;Park, Byungjae;An, Kyounghwan;Koo, Yongbon;Han, Wooyong
    • ETRI Journal / v.37 no.5 / pp.1032-1043 / 2015
  • ETRI's Co-Pilot project is aimed at the development of an automated vehicle that cooperates with a driver and interacts with other vehicles on the road while obeying traffic rules without collisions. This paper presents a core block within the Co-Pilot system, named the "Co-Pilot agent", which consists of several main modules such as road map generation, decision-making, and trajectory generation. The road map generation module builds enhanced and detailed road map data. The decision-making module, designed for situation assessment and behavior planning, evaluates the collision risk of traffic situations and determines maneuvers that follow a global path while avoiding collisions. The trajectory generation module generates a trajectory that realizes the maneuver chosen by the decision-making module. The system is implemented on an open-source robot operating system to provide a reusable, hardware-independent software platform; it is then tested on a closed road with other vehicles, in several scenarios similar to real road environments, to verify that it works properly for cooperative driving with a driver and for automated driving.
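
The module split described above can be pictured with a small, hypothetical Python sketch. The module boundaries follow the abstract, but every function name, threshold, and data structure below is illustrative rather than taken from the paper (the actual system runs on a robot operating system as separate nodes).

```python
# Hypothetical sketch of the map -> decision -> trajectory pipeline.
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class Maneuver:
    kind: str                      # e.g. "lane_keep", "lane_change_left", "stop"
    target_speed: float            # m/s

def generate_road_map(raw_map: dict) -> dict:
    """Enrich raw map data with lane-level detail (placeholder)."""
    return {**raw_map, "lanes_enhanced": True}

def decide(road_map: dict, objects: List[dict], global_path: List[Tuple[float, float]]) -> Maneuver:
    """Assess collision risk and pick a maneuver that follows the global path."""
    risk = max((obj.get("collision_risk", 0.0) for obj in objects), default=0.0)
    if risk > 0.7:                                # illustrative threshold
        return Maneuver("stop", 0.0)
    return Maneuver("lane_keep", 15.0)

def generate_trajectory(maneuver: Maneuver, horizon_s: float = 3.0, dt: float = 0.1):
    """Produce a simple straight-line trajectory realizing the maneuver."""
    steps = int(horizon_s / dt)
    return [(maneuver.target_speed * dt * i, 0.0) for i in range(steps)]

# One planning cycle.
road_map = generate_road_map({"id": "test_track"})
maneuver = decide(road_map, [{"collision_risk": 0.2}], [(0.0, 0.0), (100.0, 0.0)])
trajectory = generate_trajectory(maneuver)
```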

A study on map generation of autonomous Mobile Robot using stereo vision system (스테레오 비젼 시스템을 이용한 자율 이동 로봇의 지도 작성에 관한 연구)

  • Son, Young-Seop;Lee, Kwae-Hi
    • Proceedings of the KIEE Conference / 1998.07g / pp.2200-2202 / 1998
  • An autonomous mobile robot provides many functions such as sensing, processing, and driving. For more intelligent jobs, more intelligent functions must be added and existing functions may need to be updated. To execute a job, an autonomous mobile robot needs information about its surrounding environment, so it uses sonar sensors, vision sensors, and so on. The obtained sensor information is used for map generation. This paper focuses on map generation using a stereo vision system.
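
The abstract gives few algorithmic details, so the following is only a generic sketch of how a calibrated, rectified stereo pair can feed a 2D map: compute a disparity map (here with OpenCV's block matcher), convert it to depth, and drop the points into an occupancy grid. The parameters and grid layout are assumptions.

```python
# Generic stereo-vision-to-occupancy-grid sketch (illustrative, not the paper's method).
import cv2
import numpy as np

def stereo_to_occupancy(left_gray, right_gray, fx, baseline,
                        grid_size=(200, 200), cell_m=0.05):
    """left_gray, right_gray: rectified 8-bit grayscale images."""
    stereo = cv2.StereoBM_create(numDisparities=64, blockSize=15)
    disparity = stereo.compute(left_gray, right_gray).astype(np.float32) / 16.0
    grid = np.zeros(grid_size, dtype=np.uint8)
    h, w = disparity.shape
    cx = w / 2.0
    for v in range(0, h, 4):                      # subsample for speed
        for u in range(0, w, 4):
            d = disparity[v, u]
            if d <= 0:                            # invalid disparity
                continue
            z = fx * baseline / d                 # depth along the optical axis
            x = (u - cx) * z / fx                 # lateral offset
            gi = int(z / cell_m)
            gj = int(x / cell_m + grid_size[1] / 2)
            if 0 <= gi < grid_size[0] and 0 <= gj < grid_size[1]:
                grid[gi, gj] = 255                # mark cell as occupied
    return grid
```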

Real-Time 2D-to-3D Conversion for 3DTV using Time-Coherent Depth-Map Generation Method

  • Nam, Seung-Woo;Kim, Hye-Sun;Ban, Yun-Ji;Chien, Sung-Il
    • International Journal of Contents / v.10 no.3 / pp.9-16 / 2014
  • Depth-image-based rendering is generally used in real-time 2D-to-3D conversion for 3DTV. However, inaccurate depth maps cause flickering issues between image frames in a video sequence, resulting in eye fatigue while viewing 3DTV. To resolve this flickering issue, we propose a new 2D-to-3D conversion scheme based on fast and robust depth-map generation from a 2D video sequence. The proposed depth-map generation algorithm divides an input video sequence into several cuts using a color histogram. The initial depth of each cut is assigned based on a hypothesized depth-gradient model. The initial depth map of the current frame is refined using color and motion information. Thereafter, the depth map of the next frame is updated using the difference image to reduce depth flickering. The experimental results confirm that the proposed scheme performs real-time 2D-to-3D conversions effectively and reduces human eye fatigue.
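
Two of the steps mentioned above (detecting cuts with a color histogram, and reducing flicker with the difference image) can be sketched as follows; the thresholds and blending rule are assumptions, not values from the paper.

```python
# Illustrative sketch of shot-cut detection and temporal depth smoothing.
import cv2
import numpy as np

def is_shot_cut(prev_frame, frame, threshold=0.5):
    """Compare HSV color histograms of consecutive frames; low similarity = cut."""
    hists = []
    for img in (prev_frame, frame):
        hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)
        h = cv2.calcHist([hsv], [0, 1], None, [32, 32], [0, 180, 0, 256])
        hists.append(cv2.normalize(h, h).flatten())
    similarity = cv2.compareHist(hists[0], hists[1], cv2.HISTCMP_CORREL)
    return similarity < threshold

def temporally_smooth_depth(prev_depth, cur_depth, prev_frame, frame):
    """Where consecutive frames barely differ, keep the previous depth to
    suppress flicker; where they differ strongly, trust the new depth."""
    diff = cv2.absdiff(cv2.cvtColor(prev_frame, cv2.COLOR_BGR2GRAY),
                       cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)).astype(np.float32) / 255.0
    w = np.clip(diff * 4.0, 0.0, 1.0)             # per-pixel weight for the new depth
    return (1.0 - w) * prev_depth + w * cur_depth
```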

A Study of Generating Depth map for 3D Space Structure Recovery

  • Ban, Kyeong-Jin;Kim, Jong-Chan;Kim, Eung-Kon;Kim, Chee-Yong
    • Journal of Korea Multimedia Society / v.13 no.12 / pp.1855-1862 / 2010
  • In virtual reality, service technologies for real-time interaction systems, 3-dimensional contents, 3D TV, and augmented reality are developing rapidly. These services have difficulty generating the depth values that are essential for recovering 3D space and adding solidity to existing contents. Hence, research on effective depth-map generation from 2D images is necessary. This paper describes the shortcomings of existing depth-map generation methods for recovering 3D space from a 2D image and proposes an enhanced depth-map generation algorithm that complements those shortcomings by defining the depth direction based on the vanishing point within the image.
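
A minimal sketch of the general vanishing-point idea, assuming depth simply grows toward the vanishing point; the paper's actual depth-direction definition is more elaborate than this.

```python
# Illustrative vanishing-point depth assignment: pixels near the VP are treated as far.
import numpy as np

def depth_from_vanishing_point(image_shape, vp):
    """image_shape: (height, width); vp: (x, y) vanishing point in pixels.
    Returns a depth map in [0, 1] where 1.0 means farthest (at the VP)."""
    h, w = image_shape
    ys, xs = np.mgrid[0:h, 0:w]
    dist = np.hypot(xs - vp[0], ys - vp[1])       # distance of each pixel to the VP
    return 1.0 - dist / dist.max()                # near the VP -> large depth value
```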

A Study on the 3-D CNC Cutting Planning and Simulation by Z-Map Model (Z-Map 모델을 이용한 3차원 CNC 가공계획 및 절삭시뮬레이션에 관한 연구)

  • 송수용;김석일
    • Proceedings of the Korean Society of Precision Engineering Conference / 1994.10a / pp.683-688 / 1994
  • Recently, the Z-map model has been widely used to represent three-dimensional geometric shapes and to perform cross-section and point evaluations of a shape. In this paper, CNC cutting planning and simulation modules for products with three-dimensional geometric shapes are realized based on the Z-map model. The realized system provides various capabilities, including automatic tool-path generation for rough and finish cutting, automatic elimination of overcut, automatic generation of CNC programs for a machining center, and cutting simulation. In particular, an overcut-free tool path is obtained by using CL Z-map models, which are composed of the offset surfaces of the product's geometric shape.
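
The CL (cutter-location) Z-map idea, offsetting the part surface by the tool so the tool tip never gouges it, can be illustrated with a brute-force sketch for a ball-end mill. The offset formula is the standard inverse tool offset; the implementation is illustrative, not the paper's.

```python
# Illustrative CL Z-map for a ball-end mill: for each grid point, the lowest
# tool-tip height that clears every surface point under the tool.
import numpy as np

def cl_zmap_ball_end(zmap, tool_radius, cell_size):
    """zmap: 2D array of surface heights on a regular XY grid."""
    zmap = np.asarray(zmap, dtype=float)
    rows, cols = zmap.shape
    r_cells = int(np.ceil(tool_radius / cell_size))
    cl = np.full(zmap.shape, -np.inf)
    for i in range(rows):
        for j in range(cols):
            for di in range(-r_cells, r_cells + 1):
                for dj in range(-r_cells, r_cells + 1):
                    ii, jj = i + di, j + dj
                    if not (0 <= ii < rows and 0 <= jj < cols):
                        continue
                    d2 = (di * cell_size) ** 2 + (dj * cell_size) ** 2
                    if d2 > tool_radius ** 2:
                        continue
                    # Lowest ball-center height that clears this surface point,
                    # converted to the tool-tip height (center minus tool radius).
                    tip = zmap[ii, jj] + np.sqrt(tool_radius ** 2 - d2) - tool_radius
                    cl[i, j] = max(cl[i, j], tip)
    return cl
```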

A Study for Depth-map Generation using Vanishing Point (소실점을 이용한 Depth-map 생성에 관한 연구)

  • Kim, Jong-Chan;Ban, Kyeong-Jin;Kim, Chee-Yong
    • Journal of Korea Multimedia Society / v.14 no.2 / pp.329-338 / 2011
  • Recent augmented reality demands more realistic multimedia data that mixes various media. High technology for multimedia data, which combines existing media data with various media such as audio and video, dominates the media industry. In particular, there is a growing need to serve augmented reality, 3-dimensional contents, and real-time interaction systems, which are communication methods and visualization tools on the Internet. Existing services cannot generate the depth values needed for 3-dimensional space structure recovery, which adds solidity to existing contents. Therefore, research on effective depth-map generation from 2-dimensional video is required. Complementing the shortcomings of existing depth-map generation methods for 2-dimensional video, this paper proposes an enhanced depth-map generation method that defines the depth direction with respect to the vanishing-point location in a video, which no existing algorithm has defined.
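
As a rough illustration of choosing a depth direction from the vanishing-point location (the rule below is an assumption, not the paper's definition): depth is made to increase along the direction from the image center toward the vanishing point, which also works when the point lies outside the frame.

```python
# Illustrative depth-direction map driven by the vanishing-point location.
import numpy as np

def depth_direction_map(image_shape, vp):
    """Returns a depth map in [0, 1] increasing toward the vanishing point."""
    h, w = image_shape
    center = np.array([w / 2.0, h / 2.0])
    direction = np.array(vp, dtype=float) - center
    norm = np.linalg.norm(direction)
    ys, xs = np.mgrid[0:h, 0:w]
    if norm < 1e-6:                               # VP at the center: radial depth model
        dist = np.hypot(xs - center[0], ys - center[1])
        return 1.0 - dist / dist.max()
    direction /= norm
    # Project each pixel onto the depth direction; larger projection = farther.
    proj = (xs - center[0]) * direction[0] + (ys - center[1]) * direction[1]
    return (proj - proj.min()) / (proj.max() - proj.min())
```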