• Title/Summary/Keyword: Vision Navigation System


A Study on Detection of Lane and Situation of Obstacle for AGV using Vision System (비전 시스템을 이용한 AGV의 차선인식 및 장애물 위치 검출에 관한 연구)

  • 이진우;이영진;이권순
    • Proceedings of the Korean Institute of Navigation and Port Research Conference
    • /
    • 2000.11a
    • /
    • pp.207-217
    • /
    • 2000
  • In this paper, we describe an image processing algorithm that recognizes the road lane and the relative situation between the AGV and other vehicles. We conducted AGV driving experiments with a color CCD camera mounted on top of the vehicle to acquire the digital image signal. The work consists of two parts. The first is an image preprocessing stage that measures the condition of the lane and the vehicle, extracting line information with an RGB ratio cutting algorithm, edge detection, and the Hough transform. The second determines the situation of other vehicles using image processing and a viewport. First, the 2-dimensional image information from the vision sensor is converted into 3-dimensional information using the angle and position of the CCD camera. Once the vehicle knows the driving conditions, that is, the angle error, distance error, and real positions of other vehicles, the reference steering angle can be calculated.
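The line-finding step in this abstract, edge detection followed by a Hough transform, can be sketched in plain NumPy. This is a minimal illustration on a synthetic binary edge image, not the paper's actual pipeline; the RGB ratio cutting and camera geometry are omitted:

```python
import numpy as np

def hough_lines(edges, n_theta=180):
    """Accumulate votes in (rho, theta) space for a binary edge image."""
    h, w = edges.shape
    diag = int(np.ceil(np.hypot(h, w)))
    thetas = np.deg2rad(np.arange(n_theta))          # 0..179 degrees
    acc = np.zeros((2 * diag, n_theta), dtype=int)   # rho in [-diag, diag)
    ys, xs = np.nonzero(edges)
    for x, y in zip(xs, ys):
        rhos = np.round(x * np.cos(thetas) + y * np.sin(thetas)).astype(int)
        acc[rhos + diag, np.arange(n_theta)] += 1    # one vote per theta per point
    return acc, diag

# Synthetic edge image: a vertical "lane line" at x = 20.
img = np.zeros((50, 50), dtype=bool)
img[:, 20] = True

acc, diag = hough_lines(img)
rho_idx, theta_idx = np.unravel_index(np.argmax(acc), acc.shape)
rho = rho_idx - diag
print(rho, theta_idx)   # dominant line: rho = 20, theta = 0 degrees
```

The accumulator peak recovers the line's normal form; a real pipeline would first threshold gradient magnitudes to get the edge image.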


Vision-Based Robust Control of Robot Manipulators with Jacobian Uncertainty (자코비안 불확실성을 포함하는 로봇 매니퓰레이터의 영상기반 강인제어)

  • Kim, Chin-Su;Jie, Min-Seok;Lee, Kang-Woong
    • Journal of Advanced Navigation Technology
    • /
    • v.10 no.2
    • /
    • pp.113-120
    • /
    • 2006
  • In this paper, a vision-based robust controller for tracking the desired trajectory of a robot manipulator is proposed. The trajectory is generated so as to move the feature point to the desired position, which the robot follows to reach the desired position. To compensate for the parametric uncertainties of the robot manipulator contained in the control input, a robust controller is proposed. In addition, a vision-based robust controller with a control input that compensates for uncertainties in the Jacobian is proposed. The stability of the closed-loop system is shown by the Lyapunov method. The performance of the proposed method is demonstrated by simulations and experiments on a two-degree-of-freedom 5-link robot manipulator.
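The idea of a control input that tolerates Jacobian uncertainty can be illustrated with a toy simulation. The Jacobian values, gains, and the `tanh`-smoothed robust term below are illustrative assumptions, not the paper's controller:

```python
import numpy as np

# True and estimated image Jacobians (the estimate is deliberately off).
J_true = np.array([[1.0, 0.2], [0.1, 1.0]])
J_hat  = np.array([[0.8, 0.0], [0.0, 0.8]])

K, rho, dt = 2.0, 0.5, 0.01
e = np.array([5.0, -3.0])          # initial feature-point error (pixels)

for _ in range(2000):
    # Nominal feedback plus a bounded robust term against the mismatch.
    u = -np.linalg.inv(J_hat) @ (K * e + rho * np.tanh(e / 0.1))
    e = e + dt * (J_true @ u)      # error dynamics: e_dot = J_true @ q_dot
print(np.linalg.norm(e) < 1e-2)    # error converges despite the mismatch
```

The feedback remains stabilizing here because `J_true @ inv(J_hat)` is positive definite; a Lyapunov argument of this shape is what formal proofs in this area typically make precise.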


Vision-based Reduction of Gyro Drift for Intelligent Vehicles (지능형 운행체를 위한 비전 센서 기반 자이로 드리프트 감소)

  • Kyung, MinGi;Nguyen, Dang Khoi;Kang, Taesam;Min, Dugki;Lee, Jeong-Oog
    • Journal of Institute of Control, Robotics and Systems
    • /
    • v.21 no.7
    • /
    • pp.627-633
    • /
    • 2015
  • Accurate heading information is crucial for the navigation of intelligent vehicles. In outdoor environments, GPS is usually used for vehicle navigation. However, in GPS-denied environments such as dense building areas, tunnels, underground areas, and indoor environments, non-GPS solutions are required. Yaw rates from a single gyro sensor could be one such solution, but the drift problem of gyro sensors must then be resolved. HDR (Heuristic Drift Reduction) can reduce the average heading error in straight-line movement; however, it shows rather large errors in some moving environments, especially along curved lines. This paper presents a method called VDR (Vision-based Drift Reduction), a system which uses a low-cost vision sensor to compensate for HDR errors.
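A rough sketch of the HDR idea referenced above: whenever the bias-corrected yaw rate is small, the vehicle is presumed to be moving straight and the residual rate is slowly absorbed into the bias estimate. The threshold and gain values are assumptions for illustration:

```python
import numpy as np

def hdr_heading(yaw_rates, dt=0.01, threshold=0.05, i_gain=0.005):
    """Heuristic Drift Reduction sketch: when the corrected yaw rate is
    below the threshold, treat it as residual bias and adapt slowly."""
    heading, bias = 0.0, 0.0
    for w in yaw_rates:
        if abs(w - bias) < threshold:
            bias += i_gain * (w - bias)
        heading += (w - bias) * dt
    return heading

# 60 s of straight-line motion (true yaw rate 0) with a 0.02 rad/s gyro bias.
gyro = np.full(6000, 0.02)
naive = gyro.sum() * 0.01       # plain integration drifts by 1.2 rad
hdr = hdr_heading(gyro)
print(naive, hdr)               # hdr stays close to the true heading of 0
```

On curved paths the straight-motion assumption breaks, which is exactly the failure mode the paper's vision-based correction targets.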

Development of a Docking System Using a Laser Slit Beam (LSB를 이용한 Docking System 개발)

  • 김선호;박경택;최성락;변성태;이영석
    • Proceedings of the Korean Institute of Navigation and Port Research Conference
    • /
    • 1999.10a
    • /
    • pp.309-314
    • /
    • 1999
  • In a container terminal, the major movement of containers takes place between the apron and designated points on the yard. In a conventional container terminal, a human-operated yard tractor is in charge of this movement. In an unmanned container terminal, a UCT (unmanned container transporter) takes over the yard tractor's role, and the navigation path is ordered by an upper-level control system. The facilities of an unmanned container terminal must include a docking system that guides the vehicle onto the landing line to achieve high-speed traveling and precise positioning. Typical docking methods use a vision system with a CCD camera, infrared, or a laser. This paper examines the merits and demerits of existing docking methods and introduces the result of developing a docking system with an LSB (laser slit beam).
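Laser-slit-beam ranging generally relies on simple triangulation. A minimal sketch under the assumption of a pinhole camera with the laser offset parallel to the optical axis; the focal length and baseline values are hypothetical, not taken from the paper:

```python
def slit_beam_range(pixel_offset, focal_px=800.0, baseline_m=0.10):
    """Triangulate range from the image offset of a laser slit line:
    z = f * b / d for a pinhole camera with laser parallel to the axis."""
    return focal_px * baseline_m / pixel_offset

# A target at 2 m shifts the laser line by f*b/z = 800*0.1/2 = 40 px.
print(slit_beam_range(40.0))   # -> 2.0
```

The inverse relation between pixel offset and range means resolution degrades with distance, which is why slit-beam docking sensors work best at short range.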


The Method of Virtual Reality-based Surgical Navigation to Reproduce the Surgical Plan in Spinal Fusion Surgery (척추 융합술에서 수술 계획을 재현하기 위한 가상현실 기반 수술 내비게이션 방법)

  • Song, Chanho;Son, Jaebum;Jung, Euisung;Lee, Hoyul;Park, Young-Sang;Jeong, Yoosoo
    • The Journal of Korea Robotics Society
    • /
    • v.17 no.1
    • /
    • pp.8-15
    • /
    • 2022
  • In this paper, we propose a method of virtual reality-based surgical navigation to reproduce the pre-planned position and angle of the pedicle screw in spinal fusion surgery. The goal of the proposed method is to quantitatively save the surgical plan by applying a virtual guide coordinate system and to reproduce it during surgery through virtual reality. In the surgical planning step, the insertion position and angle of the pedicle screw are planned and stored based on the virtual guide coordinate system. To implement the virtual reality-based surgical navigation, a vision tracking system is applied to set the patient coordinate system, and paired-point-based patient-to-image registration is performed. In the surgical navigation step, the surgical plan is reproduced by quantitatively visualizing the pre-planned insertion position and angle of the pedicle screw using the virtual guide coordinate system. We conducted a phantom experiment to verify the error between the surgical plan and the surgical navigation; the target registration error averaged 1.47 ± 0.64 mm with the proposed method. We believe that our method can be used to accurately reproduce a pre-established surgical plan in spinal fusion surgery.
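Paired-point patient-to-image registration is typically a least-squares rigid transform between corresponding fiducials, computed with the Kabsch/SVD method. A sketch with synthetic fiducials, including a target-registration-error check on a point not used in the fit (the fiducial layout and transform are made up for illustration):

```python
import numpy as np

def register_paired_points(P, Q):
    """Least-squares rigid transform (R, t) mapping points P onto Q."""
    cp, cq = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cp).T @ (Q - cq)                     # cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))        # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = cq - R @ cp
    return R, t

# Fiducials in image space and the same points as measured by the tracker.
rng = np.random.default_rng(0)
P = rng.uniform(-50, 50, size=(4, 3))
angle = np.deg2rad(30)
R_true = np.array([[np.cos(angle), -np.sin(angle), 0],
                   [np.sin(angle),  np.cos(angle), 0],
                   [0, 0, 1]])
t_true = np.array([5.0, -2.0, 10.0])
Q = P @ R_true.T + t_true

R, t = register_paired_points(P, Q)
target = np.array([10.0, 20.0, 30.0])             # point not used in the fit
tre = np.linalg.norm((R @ target + t) - (R_true @ target + t_true))
print(tre)   # essentially zero for noise-free fiducials
```

With real fiducial localization noise the TRE becomes nonzero and grows with distance from the fiducial centroid, which is why phantom studies such as the one above report it in millimetres.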

Detection of AGV's position and orientation using laser slit beam (회전 Laser 슬릿 빔을 이용한 AGV의 위치 및 자세의 검출)

  • 박건국;김선호;박경택;안중환
    • Proceedings of the Korean Institute of Navigation and Port Research Conference
    • /
    • 2000.11a
    • /
    • pp.219-225
    • /
    • 2000
  • In a container terminal, the major movement of containers takes place between the apron and designated points on the yard. In a conventional container terminal, a human-operated yard tractor is in charge of this movement. In an automated container terminal, an AGV (Automated Guided Vehicle) takes over the yard tractor's role, and the navigation path is ordered by an upper-level control system. The facilities of an automated container terminal must include a docking system that guides the vehicle onto the landing line to achieve high-speed traveling and precise positioning. Typical docking systems use a vision system with a CCD camera, infrared, or a laser. This paper describes the detection of the AGV's position and orientation using a laser slit beam for the development of such a docking system.
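Once the slit beam yields two points on the landing line expressed in the vehicle frame, the AGV's heading and lateral offset relative to the line follow from elementary geometry. The coordinate conventions below (x forward, y left) are assumptions, not taken from the paper:

```python
import numpy as np

def pose_from_line_points(p1, p2):
    """Heading and lateral offset of the vehicle relative to a guide line,
    given two points on the line in the vehicle frame (x forward, y left)."""
    dx, dy = p2[0] - p1[0], p2[1] - p1[1]
    heading = np.arctan2(dy, dx)                   # line direction vs. x-axis
    # perpendicular distance from the vehicle origin to the line
    offset = abs(dx * p1[1] - dy * p1[0]) / np.hypot(dx, dy)
    return heading, offset

# Line parallel to the vehicle axis, 0.5 m to the left.
h, d = pose_from_line_points((1.0, 0.5), (3.0, 0.5))
print(h, d)   # -> 0.0, 0.5
```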


Design of Safe Autonomous Navigation System for Deployable Bio-inspired Robot (전개형 생체모방로봇을 위한 안전한 자율주행시스템 설계)

  • Choi, Keun Ha;Han, Sang Kwon;Lee, Jinyi;Lee, Jin Woo;Ahn, Jung Do;Kim, Kyung-Soo;Kim, Soohyun
    • Journal of Institute of Control, Robotics and Systems
    • /
    • v.20 no.4
    • /
    • pp.456-462
    • /
    • 2014
  • In this paper, we present a deployable bio-inspired robot called the Pillbot-light, which uses a safe autonomous navigation system. The Pillbot-light is mounted on a station robot and can be operated in disaster relief or military operations. However, the Pillbot-light cannot be equipped with various sensors, which makes autonomous navigation a challenge. We therefore propose a new robot system for autonomous navigation in which the station robot, equipped with a vision camera and a high-performance CPU, controls the Pillbot-light. The system detects obstacles based on edge extraction from the vision camera. It not only achieves path planning with a hazard cost function but also localization with a particle filter. The system is verified by simulation and experiment.
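Particle-filter localization, mentioned above, can be sketched in one dimension: position hypotheses are propagated with the motion model, weighted by the measurement likelihood, and resampled. The motion and sensor models below are illustrative assumptions, not the paper's:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 500
particles = rng.uniform(0, 10, n)       # position hypotheses on a 1-D track
true_pos = 3.0

for step in range(20):
    true_pos += 0.1                               # robot moves forward
    particles += 0.1 + rng.normal(0, 0.05, n)     # motion model with noise
    z = true_pos + rng.normal(0, 0.1)             # noisy position measurement
    w = np.exp(-0.5 * ((particles - z) / 0.1) ** 2)
    w /= w.sum()                                  # importance weights
    idx = rng.choice(n, size=n, p=w)              # resample by weight
    particles = particles[idx]

estimate = particles.mean()
print(abs(estimate - true_pos) < 0.2)             # converges near the truth
```

Real deployments replace the direct position measurement with a likelihood derived from camera features, but the predict-weight-resample cycle is the same.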

Intelligent System based on Command Fusion and Fuzzy Logic Approaches - Application to mobile robot navigation (명령융합과 퍼지기반의 지능형 시스템-이동로봇주행적용)

  • Jin, Taeseok;Kim, Hyun-Deok
    • Journal of the Korea Institute of Information and Communication Engineering
    • /
    • v.18 no.5
    • /
    • pp.1034-1041
    • /
    • 2014
  • This paper proposes a fuzzy inference model for obstacle avoidance for a mobile robot with an active camera that intelligently searches for the goal location in unknown environments using command fusion, based on situational commands from a vision sensor. Instead of the "physical sensor fusion" method, which generates the trajectory of a robot from an environment model and sensory data, a "command fusion" method is used to govern the robot's motions. The navigation strategy is based on a combination of fuzzy rules tuned for both goal approach and obstacle avoidance. We describe experimental results obtained with the proposed method that demonstrate successful navigation using real vision data.
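Command fusion in this spirit can be sketched as a weighted blend of a goal-approach steering command and an obstacle-avoidance command, with a fuzzy-style membership function setting the weights. The membership shape and the fixed avoidance command below are assumptions for illustration:

```python
import numpy as np

def fuse_commands(goal_heading, obstacle_dist, d_near=0.5, d_far=2.0):
    """Blend goal-approach and obstacle-avoidance steering commands;
    a fuzzy-style membership on obstacle distance sets the weights."""
    steer_goal = goal_heading                 # steer toward the goal
    steer_avoid = np.pi / 2                   # hypothetical fixed avoidance turn
    # membership of "obstacle is near": 1 close, 0 far, linear in between
    near = np.clip((d_far - obstacle_dist) / (d_far - d_near), 0.0, 1.0)
    return near * steer_avoid + (1.0 - near) * steer_goal

print(fuse_commands(0.2, 3.0))   # far obstacle: pure goal command -> 0.2
print(fuse_commands(0.2, 0.3))   # near obstacle: pure avoidance -> ~1.5708
```

A full fuzzy controller would aggregate many such rules and defuzzify; the blend above shows why command fusion degrades gracefully as the obstacle distance varies.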

Self-Positioning of a Mobile Robot using a Vision System and Image Overlay with VRML (비전 시스템을 이용한 이동로봇 Self-positioning과 VRML과의 영상오버레이)

  • Hyun, Kwon-Bang;To, Chong-Kil
    • Proceedings of the KIEE Conference
    • /
    • 2005.05a
    • /
    • pp.258-260
    • /
    • 2005
  • We describe a method for localizing a mobile robot in its working environment using a vision system and VRML. The robot identifies landmarks in the environment and carries out self-positioning. Image processing and neural network pattern matching techniques are employed to recognize landmarks placed in the robot's working environment. The self-positioning with the vision system is based on a well-known localization algorithm. After self-positioning, the 2D camera scene is overlaid with the VRML scene. This paper describes how to realize the self-positioning, shows the result of overlaying the 2D scene with the VRML scene, and discusses the advantages expected from overlapping the two scenes.
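Self-positioning from recognized landmarks can be sketched as triangulation: once two known landmarks are identified and their absolute bearings measured, the robot position lies at the intersection of the back-projected bearing rays. This is a generic sketch, not the paper's specific algorithm:

```python
import numpy as np

def localize_from_bearings(l1, l2, b1, b2):
    """Robot position from absolute bearings b1, b2 (rad) to two known
    landmarks l1, l2: intersect the two back-projected bearing rays."""
    d1 = np.array([np.cos(b1), np.sin(b1)])
    d2 = np.array([np.cos(b2), np.sin(b2)])
    # Robot p satisfies p + s*d1 = l1 and p + t*d2 = l2;
    # eliminate p and solve s*d1 - t*d2 = l1 - l2 for s, t.
    A = np.column_stack([d1, -d2])
    s, t = np.linalg.solve(A, np.asarray(l1) - np.asarray(l2))
    return np.asarray(l1) - s * d1

# Robot at (1, 1): landmark (4, 1) lies due east (bearing 0),
# landmark (1, 5) lies due north (bearing pi/2).
pos = localize_from_bearings((4.0, 1.0), (1.0, 5.0), 0.0, np.pi / 2)
print(pos)   # -> [1. 1.]
```

Parallel bearings make the system singular, so practical systems pick landmark pairs with a wide angular separation.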


Real-time Omni-directional Distance Measurement with Active Panoramic Vision

  • Yi, Soo-Yeong;Choi, Byoung-Wook;Ahuja, Narendra
    • International Journal of Control, Automation, and Systems
    • /
    • v.5 no.2
    • /
    • pp.184-191
    • /
    • 2007
  • Autonomous navigation of a mobile robot requires a ranging system for measuring the distance to environmental objects. Clearly, wider and faster distance measurement gives a mobile robot more freedom in trajectory planning and control. The active omni-directional ranging system proposed in this paper is capable of obtaining the distance for all 360° of direction in real time because of the omni-directional mirror and the structured light. Distance computation, including a sensitivity analysis, and experiments on omni-directional ranging are presented to verify the effectiveness of the proposed system.
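Structured-light omni-directional ranging maps each image point on the detected laser ring to an azimuth and a distance. The sketch below assumes a hypothetical linear calibration from image radius to ray elevation angle, standing in for the real mirror geometry, which in practice requires a proper catadioptric model:

```python
import numpy as np

def omni_ranges(laser_pixels, center, h=0.3, k=0.002):
    """Per-direction range from the laser ring in an omni image.
    Hypothetical calibration: image radius r maps linearly to the
    elevation angle alpha = k * r of the reflected ray, and the
    structured-light plane lies h meters below the sensor, so the
    horizontal distance is h / tan(alpha)."""
    px = np.asarray(laser_pixels, dtype=float)
    dx, dy = px[:, 0] - center[0], px[:, 1] - center[1]
    r = np.hypot(dx, dy)
    azimuth = np.arctan2(dy, dx)      # one range per viewing direction
    distance = h / np.tan(k * r)
    return azimuth, distance

# Two ring pixels, 100 px from the image center in different directions.
az, d = omni_ranges([[400.0, 300.0], [300.0, 400.0]], center=(300.0, 300.0))
print(np.rad2deg(az), d)   # azimuths 0 and 90 deg, equal distances
```

One camera frame thus yields a full panoramic range scan, which is the real-time advantage the paper claims over rotating single-beam rangefinders.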