• Title/Summary/Keyword: Image Navigation


A Study of Head-Up Display System for Automotive Application (Head-Up Display 장치의 자동차 적용을 위한 연구)

  • Yang, In-Beom;Lee, Hyuck-Kee;Kim, Beong-Woo
    • Transactions of the Korean Society of Automotive Engineers
    • /
    • v.15 no.4
    • /
    • pp.27-32
    • /
    • 2007
  • A head-up display system makes it possible for the driver to be informed of important vehicle data such as vehicle speed, engine RPM, or navigation data without taking the driver's eyes off the road. Long-focal-length optics, an LCD with bright illumination, an image generator, and vehicle interface controllers are the key parts of the head-up display system. All of these parts have been designed, developed, and applied to the test vehicle. Virtual images are located about 2 m ahead of the driver's eye by projecting them onto the windshield just below the driver's line of sight. The developed head-up display system shows satisfactory results for future commercialization.
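The 2 m virtual-image location follows from ordinary thin-lens combiner optics. A minimal sketch, with an assumed 0.25 m focal length and display distance (the paper does not give these values):

```python
def virtual_image_distance(f, d_o):
    """Thin-lens equation 1/d_o + 1/d_i = 1/f.
    Returns d_i; a negative value means a virtual image
    located |d_i| in front of the optics."""
    return 1.0 / (1.0 / f - 1.0 / d_o)

# Hypothetical numbers: a 0.25 m focal-length combiner with the
# display 0.2222 m away yields a virtual image about 2 m ahead.
d_i = virtual_image_distance(0.25, 0.2222)
```

Placing the display just inside the focal length is what pushes the virtual image out to roughly 2 m, the distance the abstract reports.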

A Path tracking algorithm and a VRML image overlay method (VRML과 영상오버레이를 이용한 로봇의 경로추적)

  • Sohn, Eun-Ho;Zhang, Yuanliang;Kim, Young-Chul;Chong, Kil-To
    • Proceedings of the IEEK Conference
    • /
    • 2006.06a
    • /
    • pp.907-908
    • /
    • 2006
  • We describe a method for localizing a mobile robot in its working environment using a vision system and the Virtual Reality Modeling Language (VRML). The robot identifies landmarks in the environment using image processing and neural network pattern matching techniques, and then performs self-positioning with a vision system based on a well-known localization algorithm. After the self-positioning procedure, the 2-D scene of the vision system is overlaid with the VRML scene. This paper describes how to realize the self-positioning and shows the overlap between the 2-D and VRML scenes. The method successfully defines a robot's path.
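The neural-network landmark matching is not reproduced here, but the geometric self-positioning step can be sketched as a two-landmark range intersection (an assumed formulation; the abstract only says a "well-known localization algorithm" is used):

```python
import math

def localize_from_landmarks(p1, p2, r1, r2):
    """Two-circle intersection: robot position from measured ranges
    r1, r2 to landmarks at known positions p1, p2 (assumes the robot
    is on the positive side of the landmark baseline)."""
    dx, dy = p2[0] - p1[0], p2[1] - p1[1]
    d = math.hypot(dx, dy)
    a = (r1 * r1 - r2 * r2 + d * d) / (2 * d)
    h = math.sqrt(max(r1 * r1 - a * a, 0.0))
    # foot point on the baseline, then offset perpendicular to it
    xm, ym = p1[0] + a * dx / d, p1[1] + a * dy / d
    return (xm - h * dy / d, ym + h * dx / d)
```

For example, landmarks at (0, 0) and (4, 0) with measured ranges √5 and √13 place the robot at (1, 2).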


Multi-information fusion based localization algorithm for Mars rover

  • Jiang, Xiuqiang;Li, Shuang;Tao, Ting;Wang, Bingheng
    • Advances in aircraft and spacecraft science
    • /
    • v.1 no.4
    • /
    • pp.455-469
    • /
    • 2014
  • High-precision autonomous localization techniques are essential for future Mars rovers. This paper addresses an innovative integrated localization algorithm using a multiple-information fusion approach. First, the output of an IMU is employed to construct the two-dimensional (2-D) dynamics equation of the Mars rover. Second, radio beacon measurements and terrain image matching are treated as external measurements and included in the navigation filter to correct the inertial bias and drift. Then, an extended Kalman filter (EKF) is designed to estimate the position state of the Mars rover and suppress the measurement noise. Finally, the localization algorithm proposed in this paper is validated by computer simulation with different parameter sets.
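A minimal sketch of the predict/update cycle described above, with a linear 2-D position state, IMU-derived velocity as the control input, and a beacon/terrain-match position fix as the external measurement (the paper's actual EKF linearizes a richer rover dynamics model):

```python
import numpy as np

def ekf_step(x, P, u, z, dt, Q, R):
    """One predict/update cycle: state x = [px, py],
    control u = IMU-derived velocity, measurement z = position fix
    from a radio beacon or terrain image matching."""
    F = np.eye(2)
    # predict: dead-reckon with the IMU velocity
    x = x + u * dt
    P = F @ P @ F.T + Q
    # update: blend in the external position measurement
    H = np.eye(2)
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x = x + K @ (z - H @ x)
    P = (np.eye(2) - K @ H) @ P
    return x, P
```

The external fix keeps the covariance P from growing without bound, which is how the filter suppresses the inertial drift the abstract mentions.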

A New Refinement Method for Structure from Stereo Motion (스테레오 연속 영상을 이용한 구조 복원의 정제)

  • Park, Sung-Kee;Kweon, In-So
    • Journal of Institute of Control, Robotics and Systems
    • /
    • v.8 no.11
    • /
    • pp.935-940
    • /
    • 2002
  • For robot navigation and visual reconstruction, structure from motion (SFM) is an active issue in the computer vision community, and its properties are also becoming well understood. In this paper, using a stereo image sequence and a direct method as a tool for SFM, we present a new method for overcoming the bas-relief ambiguity. We first show that direct methods based on the optical flow constraint equation are also intrinsically exposed to this ambiguity, although they introduce robust estimators. Therefore, regarding the motion and depth estimates obtained by the robust direct method as approximations, we suggest a method that refines both the stereo displacement and the motion displacement with sub-pixel accuracy, which is the central process for resolving the ambiguity. Experiments with real image sequences have been executed, and we show that the proposed algorithm improves the estimation accuracy.
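Sub-pixel refinement of a displacement estimate is commonly done by fitting a parabola through the matching costs at neighboring integer displacements; a sketch of that standard step (the paper's own refinement scheme may differ):

```python
def subpixel_refine(c_m, c_0, c_p):
    """Parabola fit through matching costs at integer displacements
    d-1, d, d+1; returns the sub-pixel offset in (-0.5, 0.5).
    Adding the offset to the integer displacement gives a
    sub-pixel-accurate estimate."""
    denom = c_m - 2.0 * c_0 + c_p
    if denom == 0.0:
        return 0.0  # flat cost surface: no refinement possible
    return 0.5 * (c_m - c_p) / denom
```

With a quadratic cost surface the fit is exact, so costs sampled from a minimum at +0.25 recover exactly 0.25.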

Obstacle Avoidance System Using a Single Camera and an LM Neural Network Fuzzy Controller (단일 영상과 LM 신경망 퍼지제어기를 적용한 장애물 회피 시스템)

  • Yoo, Sung-Goo;Chong, Kil-To
    • Journal of Institute of Control, Robotics and Systems
    • /
    • v.15 no.2
    • /
    • pp.192-197
    • /
    • 2009
  • In this paper, we propose an obstacle avoidance system using a single camera image and an LM (Levenberg-Marquardt) neural network fuzzy controller. As robot technology is adapted to various fields of industry and the public sector, a robot has to move using self-navigation and obstacle avoidance algorithms; when a robot moves to a target point, obstacle avoidance is a must-have technology. We therefore present an avoidance method based on a fuzzy controller driven by sensor data and image information from the camera, with an LM neural network used to minimize the movement error. Simulation tests then verify the performance of the system.
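A toy illustration of the fuzzy-controller idea: obstacle bearing in, steering command out, with triangular memberships and centroid defuzzification. The paper's actual rule base, sensor fusion, and LM neural-network tuning are not reproduced here:

```python
def tri(x, a, b, c):
    """Triangular membership function peaking at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def fuzzy_steer(bearing):
    """Toy rule base: obstacle on the left -> steer right, and vice
    versa (centroid defuzzification over singleton outputs, degrees)."""
    mu_left = tri(bearing, -90.0, -45.0, 0.0)
    mu_ctr = tri(bearing, -45.0, 0.0, 45.0)
    mu_right = tri(bearing, 0.0, 45.0, 90.0)
    # singleton outputs: steer hard right, straight, hard left
    num = mu_left * 30.0 + mu_ctr * 0.0 + mu_right * (-30.0)
    den = mu_left + mu_ctr + mu_right
    return num / den if den else 0.0
```

Overlapping memberships make the steering command vary smoothly with the bearing instead of switching abruptly, which is the usual motivation for a fuzzy controller here.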

Panorama Field Rendering based on Depth Estimation (깊이 추정에 기반한 파노라마 필드 렌더링)

  • Jung, Myoungsook;Han, JungHyun
    • Journal of the Korea Computer Graphics Society
    • /
    • v.6 no.4
    • /
    • pp.15-22
    • /
    • 2000
  • One of the main research trends in image-based modeling and rendering is how to implement the plenoptic function. For this purpose, this paper proposes a novel approach based on a set of randomly placed panoramas. The proposed approach first adopts a simple computer vision technique to approximate omni-directional depth information of the surrounding scene, and then corrects/interpolates the panorama images to generate an output image at a vantage viewpoint. Implementation results show that the proposed approach achieves smooth navigation at an interactive rate.
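The correction/interpolation step relies on having a depth per viewing direction: with that depth, a sample seen in a source panorama can be reprojected toward a new viewpoint. A 2-D sketch under assumed panorama centers:

```python
import math

def reproject_direction(theta, depth, src_center, dst_center):
    """Given a viewing direction theta and an estimated depth in a
    source panorama, return the direction under which the same scene
    point is seen from a new panorama center (2-D top-down sketch)."""
    # back-project the sample to a world point
    px = src_center[0] + depth * math.cos(theta)
    py = src_center[1] + depth * math.sin(theta)
    # re-observe it from the destination center
    return math.atan2(py - dst_center[1], px - dst_center[0])
```

Applying this per direction is what lets the renderer warp nearby panoramas into a consistent view at the vantage viewpoint.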


A Segmentation Method for a Moving Object on a Static Complex Background Scene (복잡한 배경에서 움직이는 물체의 영역분할에 관한 연구)

  • Park, Sang-Min;Kwon, Hui-Ung;Kim, Dong-Sung;Jeong, Kyu-Sik
    • The Transactions of the Korean Institute of Electrical Engineers A
    • /
    • v.48 no.3
    • /
    • pp.321-329
    • /
    • 1999
  • Moving object segmentation extracts a moving object of interest from consecutive image frames, and has been used for factory automation, autonomous navigation, video surveillance, and VOP (Video Object Plane) detection in the MPEG-4 standard. This paper proposes a new segmentation method in which difference images are calculated from three consecutive input frames and used to compute both a coarse object area (AI) and its movement area (OI). The AI is extracted by removing the background using background area projection (BAP). Missing parts of the AI are recovered with the help of the OI: boundary information of the OI confines the missing parts of the object and provides initial curves for active contour optimization. The optimized contours, together with the AI, form the boundaries of the moving object. Experimental results for a fast-moving object on a complex background scene are included.
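The three-frame idea can be sketched with a double-difference mask: a pixel is kept only if it changed in both successive difference images, which suppresses the uncovered-background artifact of a single difference (a minimal stand-in for the paper's AI/OI construction):

```python
def motion_mask(f1, f2, f3, thresh):
    """Double-difference motion detection on three consecutive
    grayscale frames (equal-sized lists of rows): a pixel belongs to
    the moving region only if it changed in BOTH successive
    difference images."""
    h, w = len(f2), len(f2[0])
    return [[1 if abs(f2[y][x] - f1[y][x]) > thresh
                  and abs(f3[y][x] - f2[y][x]) > thresh else 0
             for x in range(w)] for y in range(h)]
```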


Pose Tracking of Moving Sensor using Monocular Camera and IMU Sensor

  • Jung, Sukwoo;Park, Seho;Lee, KyungTaek
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.15 no.8
    • /
    • pp.3011-3024
    • /
    • 2021
  • Pose estimation of a sensor is an important issue in many applications such as robotics, navigation, tracking, and augmented reality. This paper proposes a visual-inertial integration system appropriate for dynamically moving conditions of the sensor. The orientation estimated from an Inertial Measurement Unit (IMU) is used to calculate the essential matrix based on the intrinsic parameters of the camera. Using epipolar geometry, outliers among the feature point matches are eliminated in the image sequences; the use of the IMU helps initially eliminate erroneous point matches in images of a dynamic scene. After the outliers are removed, the selected feature point matches are used to calculate a precise fundamental matrix, and the pose of the sensor is then estimated from the matching relation. The proposed procedure was implemented and tested in comparison with existing methods, and the experimental results show the effectiveness of the proposed technique.
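The IMU-aided outlier rejection can be sketched with the epipolar constraint x2ᵀEx1 = 0, where E = [t]×R is formed from the IMU-derived rotation and a translation direction (assumed known here; the paper estimates these quantities rather than assuming them):

```python
import numpy as np

def epipolar_inliers(pts1, pts2, R, t, tol=1e-3):
    """Filter normalized-coordinate matches with the epipolar
    constraint x2^T E x1 = 0, where E = [t]_x R."""
    tx = np.array([[0.0, -t[2], t[1]],
                   [t[2], 0.0, -t[0]],
                   [-t[1], t[0], 0.0]])
    E = tx @ R
    keep = []
    for x1, x2 in zip(pts1, pts2):
        h1 = np.array([x1[0], x1[1], 1.0])
        h2 = np.array([x2[0], x2[1], 1.0])
        if abs(h2 @ E @ h1) < tol:  # small residual -> inlier
            keep.append((x1, x2))
    return keep
```

With a trusted rotation from the IMU, matches that violate the epipolar residual can be discarded before the precise fundamental-matrix estimation the abstract describes.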

Development of an Integrated Management Simulation System with User Intervention for Target Correction (표적 수정이 가능한 사용자 개입 통합 관리 모의 시스템 개발)

  • Park, Woosung;Oh, TaeWon;Park, TaeHyun;Lee, YongWon;Kim, Kibum;Kwon, Kijeong
    • Journal of the Korean Society for Aeronautical & Space Sciences
    • /
    • v.45 no.7
    • /
    • pp.600-609
    • /
    • 2017
  • We designed an integrated target management system that enables the user to select the final target manually or automatically from the seeker's sensor image. The integrated system was developed separately as an air vehicle system and a ground system. The air vehicle system simulates the motion dynamics and the sensor image of the air vehicle, and the ground system is composed of a target template image module and a ground control center module. The flight maneuver of the air vehicle is based on a pseudo six-degree-of-freedom motion equation and proportional navigation guidance. The sensor image module was developed using a known infrared (IR) image rendering method, and was verified by comparing the rendered image with that of a commercial software package. The ground control center module includes a user interface that can display sufficient information to meet user needs. Finally, we verified the integrated system with a simulated target impact mission of the air vehicle, confirming the final target change and the shoot-down result following the user's intervention.
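Proportional navigation commands a lateral acceleration proportional to the line-of-sight rate times the closing speed. A 2-D sketch with an assumed navigation constant N = 4 (the paper's guidance loop has more detail):

```python
import math

def pn_accel(rel_pos, rel_vel, N=4.0):
    """2-D proportional navigation: commanded lateral acceleration
    a = N * Vc * lambda_dot, with the line-of-sight rate from the
    cross product of relative position and velocity."""
    rx, ry = rel_pos
    vx, vy = rel_vel
    r2 = rx * rx + ry * ry
    los_rate = (rx * vy - ry * vx) / r2        # lambda_dot
    vc = -(rx * vx + ry * vy) / math.sqrt(r2)  # closing speed
    return N * vc * los_rate
```

A target drifting across the line of sight produces a nonzero command, while a pure head-on closure commands no turn at all.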

Development of an Image Recognition System on an Ubuntu-based Raspberry Pi 3 (우분투 기반 라즈베리 파이3의 영상 인식 시스템 개발)

  • Kim, Gyu-Hyun;Jang, Jong-Wook
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference
    • /
    • 2016.10a
    • /
    • pp.868-871
    • /
    • 2016
  • Recently, research on unmanned vehicles and wearable technology using IoT has been carried out. An unmanned vehicle or robot integrates robotics, autonomous navigation, obstacle avoidance, data communications, power, and image processing technologies, and its final goal is to reach its destination safely and quickly without manual control. This paper covers one of the key skills of an unmanned vehicle: image processing. Current battery technology allows an unmanned vehicle to drive for up to about one hour, so we use the Raspberry Pi 3 to keep power consumption to a minimum. Using the Raspberry Pi 3, we develop an image recognition system, with the goal of proposing a system that recognizes all the objects in the image received from the camera.
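The recognition step itself is not specified in the abstract; as a placeholder, a brute-force template search illustrates the kind of per-frame matching such a low-power system might run (a real implementation would likely use OpenCV or a trained classifier):

```python
def match_template(img, tmpl):
    """Sum-of-absolute-differences template search over a grayscale
    image (lists of rows); returns the (x, y) of the best match.
    O(W*H*w*h), so only practical for small frames on a Pi-class CPU."""
    ih, iw = len(img), len(img[0])
    th, tw = len(tmpl), len(tmpl[0])
    best, best_pos = None, (0, 0)
    for y in range(ih - th + 1):
        for x in range(iw - tw + 1):
            sad = sum(abs(img[y + dy][x + dx] - tmpl[dy][dx])
                      for dy in range(th) for dx in range(tw))
            if best is None or sad < best:
                best, best_pos = sad, (x, y)
    return best_pos
```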
