• Title/Summary/Keyword: Vision-Based Navigation


LATERAL CONTROL OF AUTONOMOUS VEHICLE USING LEVENBERG-MARQUARDT NEURAL NETWORK ALGORITHM

  • Kim, Y.-B.;Lee, K.-B.;Kim, Y.-J.;Ahn, O.-S.
    • International Journal of Automotive Technology
    • /
    • v.3 no.2
    • /
    • pp.71-78
    • /
    • 2002
  • A new control method for a vision-based autonomous vehicle is proposed that determines the navigation direction by analyzing lane information from a camera and steers the vehicle accordingly. In this paper, characteristic feature points are extracted from lane images using a lane recognition algorithm. The vehicle is then controlled using a new Levenberg-Marquardt neural network algorithm. To verify the usefulness of the algorithm, a second algorithm, which utilizes the geometric relation between the camera and the vehicle, is introduced for comparison. The second approach transforms points from image coordinates to vehicle coordinates and then determines the steering command from the Ackermann angle. The steering scheme using the Ackermann angle depends heavily on correct geometric data for the vehicle and camera, whereas the proposed neural network algorithm needs no geometric relations and instead learns from the driving style of a human driver. The proposed method is superior to other referenced neural network training algorithms, such as the conjugate gradient and gradient descent methods, in autonomous lateral control.
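The geometry-based comparison scheme in this abstract maps a lane point into vehicle coordinates and steers via the Ackermann angle. A minimal bicycle-model sketch of that step, assuming the lane point is already expressed in vehicle coordinates; the lateral offset, lookahead distance, and wheelbase values are illustrative, not from the paper:

```python
import math

def ackermann_steering_angle(lateral_offset_m, lookahead_m, wheelbase_m):
    """Geometric steering toward a lane point ahead of the vehicle.

    Computes the heading error to the point, then the bicycle-model
    (Ackermann) steering angle whose circular arc passes through it.
    """
    # heading error toward the target lane point
    alpha = math.atan2(lateral_offset_m, lookahead_m)
    # steering angle for an arc through the point (pure-pursuit form)
    return math.atan2(2.0 * wheelbase_m * math.sin(alpha), lookahead_m)

# e.g. a lane point 0.5 m left of center, 10 m ahead, 2.7 m wheelbase
delta = ackermann_steering_angle(0.5, 10.0, 2.7)
```

A point dead ahead yields zero steering; a leftward offset yields a positive (leftward) angle.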

Mobile Robot Obstacle Avoidance using Visual Detection of a Moving Object (동적 물체의 비전 검출을 통한 이동로봇의 장애물 회피)

  • Kim, In-Kwen;Song, Jae-Bok
    • The Journal of Korea Robotics Society
    • /
    • v.3 no.3
    • /
    • pp.212-218
    • /
    • 2008
  • Collision avoidance is a fundamental and important task of an autonomous mobile robot for safe navigation in real environments with high uncertainty. Obstacles are classified into static and dynamic obstacles. It is difficult to avoid dynamic obstacles because the positions of dynamic obstacles are likely to change at any time. This paper proposes a scheme for vision-based avoidance of dynamic obstacles. This approach extracts object candidates that can be considered moving objects based on the labeling algorithm using depth information. Then it detects moving objects among object candidates using motion vectors. In case the motion vectors are not extracted, it can still detect the moving objects stably through their color information. A robot avoids the dynamic obstacle using the dynamic window approach (DWA) with the object path estimated from the information of the detected obstacles. The DWA is a well known technique for reactive collision avoidance. This paper also proposes an algorithm which autonomously registers the obstacle color. Therefore, a robot can navigate more safely and efficiently with the proposed scheme.
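The dynamic window approach mentioned above samples velocities reachable within one control cycle, rolls out short trajectories, and scores them against heading, clearance, and speed. A simplified sketch with assumed weights and kinematic limits (the paper's actual parameters are not given in the abstract):

```python
import math

def dwa_select_velocity(pose, v_now, w_now, goal, obstacles,
                        dt=0.1, horizon=1.0, v_max=0.5, w_max=1.0,
                        a_max=0.5, aw_max=1.0, robot_radius=0.2):
    """Pick (v, w) from the dynamic window that best trades off
    goal heading, obstacle clearance, and forward speed."""
    best, best_score = (0.0, 0.0), -math.inf
    # dynamic window: velocities reachable within one control cycle
    v_lo = max(0.0, v_now - a_max * dt)
    v_hi = min(v_max, v_now + a_max * dt)
    w_lo = max(-w_max, w_now - aw_max * dt)
    w_hi = min(w_max, w_now + aw_max * dt)
    for i in range(5):
        v = v_lo + (v_hi - v_lo) * i / 4
        for j in range(9):
            w = w_lo + (w_hi - w_lo) * j / 8
            # roll out a short constant-velocity trajectory
            x, y, th = pose
            clearance = math.inf
            for _ in range(int(horizon / dt)):
                x += v * math.cos(th) * dt
                y += v * math.sin(th) * dt
                th += w * dt
                for ox, oy in obstacles:
                    clearance = min(clearance, math.hypot(x - ox, y - oy))
            if clearance < robot_radius:   # trajectory collides: reject
                continue
            heading = -abs(math.atan2(goal[1] - y, goal[0] - x) - th)
            score = 1.0 * heading + 0.2 * min(clearance, 2.0) + 0.1 * v
            if score > best_score:
                best_score, best = score, (v, w)
    return best
```

For a moving obstacle, the paper additionally feeds the estimated object path into the scoring; here obstacles are static points for brevity.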


Development of a Localization System Based on VLC Technique for an Indoor Environment

  • Yi, Keon Young;Kim, Dae Young;Yi, Kwang Moo
    • Journal of Electrical Engineering and Technology
    • /
    • v.10 no.1
    • /
    • pp.436-442
    • /
    • 2015
  • In this paper, we develop an indoor localization device that embeds localization information into indoor light-emitting diode (LED) lighting systems. The key idea of our device is the use of the newly proposed "bit stuffing method". Through the use of stuff bits, our device is able to measure signal strengths even in transient states, which prevents interference between lighting signals. The stuff bits also scatter the portions of the signal during which the LED is turned on, thus providing quality indoor lighting. Additionally, for an indoor localization system based on RSSI and TDM to be practical, we propose methods for controlling the LED lamps and compensating the received signals. The effectiveness of the proposed scheme is validated through experiments with a low-cost implementation, including an indoor navigation task.
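The abstract does not specify the exact stuffing rule, but the idea resembles HDLC-style bit stuffing: insert an extra on-bit after a fixed run of zeros so the LED is never dark for too long and the receiver always sees signal during transients. A hypothetical sketch (the run length and encoder/decoder pair are illustrative assumptions):

```python
def stuff_bits(payload, max_run=4):
    """Insert a '1' stuff bit after every run of max_run zeros so the
    LED never stays off long enough to degrade lighting or signal."""
    out, run = [], 0
    for b in payload:
        out.append(b)
        run = run + 1 if b == 0 else 0
        if run == max_run:
            out.append(1)   # stuff bit: forces the LED back on
            run = 0
    return out

def unstuff_bits(coded, max_run=4):
    """Inverse of stuff_bits: drop the bit following each max_run zeros."""
    out, run = [], 0
    it = iter(coded)
    for b in it:
        out.append(b)
        run = run + 1 if b == 0 else 0
        if run == max_run:
            next(it)        # discard the stuff bit
            run = 0
    return out
```

The coded stream is guaranteed to contain no run of zeros longer than `max_run`, and decoding recovers the original payload exactly.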

Vision Based Outdoor Terrain Classification for Unmanned Ground Vehicles (무인차량 적용을 위한 영상 기반의 지형 분류 기법)

  • Sung, Gi-Yeul;Kwak, Dong-Min;Lee, Seung-Youn;Lyou, Joon
    • Journal of Institute of Control, Robotics and Systems
    • /
    • v.15 no.4
    • /
    • pp.372-378
    • /
    • 2009
  • For effective mobility control of unmanned ground vehicles in outdoor off-road environments, terrain-cover classification technology using passive sensors is vital. This paper presents a novel method for terrain classification based on the color and texture information of off-road images. It uses a neural network classifier and wavelet features. We exploit the wavelet mean and energy features extracted from multi-channel wavelet-transformed images and also utilize the spatial coordinates of the terrain classes in the images as additional features. Comparing the classification performance across the applied features, the experimental results show that the proposed algorithm yields promising results and has potential for autonomous navigation.
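The wavelet mean and energy features above can be illustrated with a one-level Haar decomposition; the subband sign conventions below are a common choice, not necessarily the paper's:

```python
def haar_features(img):
    """Mean and energy of the one-level Haar subbands (LL, LH, HL, HH)
    of a grayscale image given as a 2D list of floats."""
    h, w = len(img) // 2 * 2, len(img[0]) // 2 * 2
    bands = {"LL": [], "LH": [], "HL": [], "HH": []}
    for r in range(0, h, 2):
        for c in range(0, w, 2):
            a, b = img[r][c], img[r][c + 1]
            d, e = img[r + 1][c], img[r + 1][c + 1]
            bands["LL"].append((a + b + d + e) / 4)  # approximation
            bands["LH"].append((a - b + d - e) / 4)  # horizontal detail
            bands["HL"].append((a + b - d - e) / 4)  # vertical detail
            bands["HH"].append((a - b - d + e) / 4)  # diagonal detail
    feats = {}
    for name, vals in bands.items():
        feats[name + "_mean"] = sum(vals) / len(vals)
        feats[name + "_energy"] = sum(v * v for v in vals) / len(vals)
    return feats
```

A flat image has zero energy in all detail bands; textured terrain classes such as gravel or grass concentrate energy differently across LH/HL/HH, which is what the classifier exploits.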

A Study on the Distance Measurement Algorithm using Feature-Based Matching for Autonomous Navigation

  • Song, Hyun-Sung;Lee, Ho-Soon;Jeong, Jun-Ik;Son, Kyung-Hee;Rho, Do-Hwan
• Institute of Control, Robotics and Systems: Conference Proceedings
    • /
    • 2001.10a
    • /
    • pp.63.2-63
    • /
    • 2001
  • Distance measurement is necessary to detect obstacles and the vehicle ahead for autonomous navigation. In this paper, we propose an algorithm using stereo vision. The algorithm's procedure is as follows. First, edges of the vehicle ahead that are common to the left and right images are detected by image processing; we select the number plate of the vehicle ahead as the edge region. Then, we estimate the distance by triangulation after stereo matching, using the corner points of the plate's edges as feature points. Experimental results compare the measured values and their errors against inter-vehicle distances that were set up in advance.
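For rectified stereo images, the triangulation step reduces to the standard relation Z = f·B/d, where d is the horizontal disparity of a matched feature point (here, a number-plate corner). A minimal sketch, assuming rectified images and pixel-unit focal length:

```python
def stereo_distance(x_left_px, x_right_px, focal_px, baseline_m):
    """Distance to a feature point matched in rectified stereo images.

    x_left_px / x_right_px: horizontal pixel coordinates of the same
    corner point in the left and right images; focal_px: focal length
    in pixels; baseline_m: distance between the two cameras.
    """
    disparity = x_left_px - x_right_px
    if disparity <= 0:
        raise ValueError("matched point must have positive disparity")
    return focal_px * baseline_m / disparity

# e.g. 80 px disparity, 800 px focal length, 0.5 m baseline -> 5 m
```

The error grows quadratically with distance because a fixed pixel-matching error maps to a smaller disparity fraction far away, which is consistent with the paper comparing errors against pre-measured inter-vehicle distances.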


A Framework for Cognitive Agents

  • Petitt, Joshua D.;Braunl, Thomas
    • International Journal of Control, Automation, and Systems
    • /
    • v.1 no.2
    • /
    • pp.229-235
    • /
    • 2003
  • We designed a family of completely autonomous mobile robots with local intelligence. Each robot has a number of on-board sensors, including vision, and does not rely on global positioning systems. The on-board embedded controller is sufficient to analyze several low-resolution color images per second. This enables our robots to perform complex tasks such as navigation, map generation, and intelligent group behavior. Since the robots are not limited to playing the game of soccer and are completely autonomous, we are also looking at a number of other interesting scenarios. The robots can communicate with each other, e.g. to exchange positions, information about objects, or simply the local states they are currently in (e.g. sharing their current objectives with other robots in the group). We are particularly interested in the differences between a behavior-based approach and a traditional control algorithm at this still very low level of action.

A Computer Vision-based Assistive Mobile Application for the Visually Impaired (컴퓨터 비전 기반 시각 장애 지원 모바일 응용)

  • Secondes, Arnel A.;Otero, Nikki Anne Dominique D.;Elijorde, Frank I.;Byun, Yung-Cheol
    • The Transactions of The Korean Institute of Electrical Engineers
    • /
    • v.65 no.12
    • /
    • pp.2138-2144
    • /
    • 2016
  • People with visual disabilities face environmental, social, and technological challenges. Navigating through places and recognizing objects are already a big challenge for those who require assistance. This study aimed to develop an Android-based assistive application for the visually impaired. Specifically, the study aimed to create a system that could aid visually impaired individuals in performing significant tasks through object recognition and in identifying locations through GPS and Google Maps. In this study, the researchers used an Android phone, allowing a visually impaired individual to go from one place to another with the aid of the application. Google Maps is integrated to utilize GPS in identifying locations and giving distance directions, and the system has a cloud server used for storing pinned locations. Furthermore, Haar-like features were used for object recognition.
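Haar-like features of the kind mentioned above are typically evaluated with an integral image (summed-area table), which makes any rectangle sum O(1). A minimal sketch of a two-rectangle feature, independent of the paper's specific detector:

```python
def integral_image(img):
    """Summed-area table: ii[r][c] = sum of img[0:r][0:c]."""
    h, w = len(img), len(img[0])
    ii = [[0] * (w + 1) for _ in range(h + 1)]
    for r in range(h):
        for c in range(w):
            ii[r + 1][c + 1] = (img[r][c] + ii[r][c + 1]
                                + ii[r + 1][c] - ii[r][c])
    return ii

def rect_sum(ii, x, y, w, h):
    """Sum of the w x h pixel rectangle with top-left corner (x, y)."""
    return ii[y + h][x + w] - ii[y][x + w] - ii[y + h][x] + ii[y][x]

def haar_two_rect_feature(ii, x, y, w, h):
    """Two-rectangle Haar-like feature: left half minus right half."""
    return (rect_sum(ii, x, y, w // 2, h)
            - rect_sum(ii, x + w // 2, y, w // 2, h))
```

A strong vertical edge inside the window yields a large-magnitude response, which is the cue cascades of such features use for object recognition.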

Enhanced Extraction of Traversable Region by Combining Scene Clustering with 3D World Modeling based on CCD/IR Image (CCD/IR 영상 기반의 3D 월드모델링과 클러스터링의 통합을 통한 주행영역 추출 성능 개선)

  • Kim, Jun
    • Journal of the Korea Institute of Military Science and Technology
    • /
    • v.11 no.4
    • /
    • pp.107-115
    • /
    • 2008
  • Accurate extraction of the traversable region is a critical issue for autonomous navigation of an unmanned ground vehicle (UGV). This paper introduces enhanced extraction of the traversable region by combining scene clustering with 3D world modeling based on CCD (charge-coupled device) and IR (infrared) images. Scene clustering is developed with the K-means algorithm on the CCD and IR images. 3D world modeling is developed by fusing the CCD and IR stereo images. Enhanced extraction of traversable regions is obtained by combining the features extracted by the clustering method with the geometric characteristics of the terrain derived from the 3D world modeling.
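The K-means step can be sketched in plain Python on per-pixel feature vectors (e.g. stacked CCD and IR intensities); the initialization and iteration count here are illustrative, not the paper's:

```python
import random

def kmeans(points, k, iters=20, seed=0):
    """Plain K-means on feature vectors given as tuples of floats."""
    rng = random.Random(seed)
    centers = rng.sample(points, k)  # simple random initialization

    def nearest(p):
        return min(range(k),
                   key=lambda c: sum((a - b) ** 2
                                     for a, b in zip(p, centers[c])))

    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:                      # assignment step
            clusters[nearest(p)].append(p)
        for c, pts in enumerate(clusters):    # update step
            if pts:
                centers[c] = tuple(sum(col) / len(pts)
                                   for col in zip(*pts))
    return centers, [nearest(p) for p in points]
```

In the paper's setting, cluster labels per pixel would then be intersected with the geometric traversability map from the 3D model.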

Relative Navigation Study Using Multiple PSD Sensor and Beacon Module Based on Kalman Filter (복수 PSD와 비콘을 이용한 칼만필터 기반 상대항법에 대한 연구)

  • Song, Jeonggyu;Jeong, Junho;Yang, Seungwon;Kim, Seungkeun;Suk, Jinyoung
    • Journal of the Korean Society for Aeronautical & Space Sciences
    • /
    • v.46 no.3
    • /
    • pp.219-229
    • /
    • 2018
  • This paper proposes Kalman filter-based relative navigation algorithms for proximity tasks such as rendezvous, docking, and cluster operation of spacecraft using PSD sensors and infrared beacon modules. Numerical simulations are performed for a comparative analysis of the performance of each relative-navigation technique. Based on the operating principle and optical modeling of the PSD sensor and the infrared beacon module used in the relative navigation algorithm, a measurement model for the Kalman filter is constructed. The extended Kalman filter (EKF) and the unscented Kalman filter (UKF) are used for probabilistic relative navigation based on measurement fusion, utilizing kinematics and dynamics information on the translational and rotational motions of the satellites. The relative position and relative attitude estimation performance of the two filters is compared. In particular, through simulations of various scenarios, performance changes are also investigated depending on the number of PSD sensors and IR beacons on the target and chaser satellites.
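The EKF and UKF in this paper operate on full 6-DOF relative states with an optical measurement model; purely as a structural illustration, the scalar predict/update cycle that both filters generalize can be written as:

```python
def kalman_predict(x, P, u, Q):
    """Propagate a scalar state x with control/motion input u;
    process noise Q inflates the variance P."""
    return x + u, P + Q

def kalman_update(x, P, z, R):
    """Fuse one scalar measurement z with variance R.

    The Kalman gain K weights the innovation (z - x) by how much
    the prediction is trusted relative to the measurement.
    """
    K = P / (P + R)
    return x + K * (z - x), (1 - K) * P
```

In the EKF the gain involves Jacobians of the PSD/beacon measurement model; in the UKF it is formed from sigma-point statistics, but the predict/update alternation is the same.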

Real-time Simultaneous Localization and Mapping (SLAM) for Vision-based Autonomous Navigation (영상기반 자동항법을 위한 실시간 위치인식 및 지도작성)

  • Lim, Hyon;Lim, Jongwoo;Kim, H. Jin
    • Transactions of the Korean Society of Mechanical Engineers A
    • /
    • v.39 no.5
    • /
    • pp.483-489
    • /
    • 2015
  • In this paper, we propose monocular visual simultaneous localization and mapping (SLAM) for large-scale environments. The proposed method continuously computes the current 6-DoF camera pose and 3D landmark positions from video input. The proposed method successfully builds consistent maps from challenging outdoor sequences using a monocular camera as the only sensor. By using a binary descriptor and metric-topological mapping, the system demonstrates real-time performance on a large-scale outdoor dataset without utilizing GPUs or reducing the input image size. The effectiveness of the proposed method is demonstrated on various challenging video sequences.
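A binary-descriptor front end like the one described matches features by Hamming distance, which is why it stays real-time on a CPU. A brute-force matching sketch (descriptor width and acceptance threshold are illustrative; real systems use 256-bit descriptors and approximate search):

```python
def hamming(d1, d2):
    """Hamming distance between two binary descriptors stored as ints."""
    return bin(d1 ^ d2).count("1")

def match_descriptors(query, train, max_dist=40):
    """Brute-force nearest-neighbor matching of binary descriptors.

    Returns (query_index, train_index) pairs whose best match is
    within max_dist bits.
    """
    matches = []
    for qi, q in enumerate(query):
        ti = min(range(len(train)), key=lambda t: hamming(q, train[t]))
        if hamming(q, train[ti]) <= max_dist:
            matches.append((qi, ti))
    return matches
```

Hamming distance on packed bits is a single XOR plus popcount per comparison, orders of magnitude cheaper than the float L2 distances used by SIFT-style descriptors.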