• Title/Summary/Keyword: Vision Navigation System

194 search results

Lateral Control of Vision-Based Autonomous Vehicle using Neural Network (신형회로망을 이용한 비젼기반 자율주행차량의 횡방향제어)

  • 김영주;이경백;김영배
    • Proceedings of the Korean Society of Precision Engineering Conference / 2000.11a / pp.687-690 / 2000
  • Recently, many studies have sought to protect human lives and property by preventing accidents caused by human carelessness or mistakes. One such effort is the development of autonomous vehicles. The general control method for a vision-based autonomous vehicle is to determine the navigation direction by analyzing lane images from a camera and to navigate using a proper control algorithm. In this paper, characteristic points are extracted from lane images using a lane recognition algorithm based on the Sobel operator, and the vehicle is then controlled using two proposed auto-steering algorithms. The first method uses the geometric relation of the camera: after transforming from image coordinates to vehicle coordinates, the steering angle is calculated as an Ackermann angle (a sketch of this geometric step follows this entry). The second uses a neural network algorithm; it does not require the geometric relation of the camera, makes the steering algorithm easy to apply, and most closely approximates the driving style of a human driver. The proposed controller is a multilayer neural network trained with the Levenberg-Marquardt backpropagation algorithm, which performed much better than the alternatives, i.e. conjugate gradient or gradient descent.

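The geometric branch of the first steering method can be illustrated with a short sketch. This is a minimal reconstruction, not the paper's code: the ground-plane homography `H`, the wheelbase value, and the pure-pursuit-style turning radius are assumptions standing in for the paper's exact camera geometry.

```python
import numpy as np

def image_to_vehicle(u, v, H):
    """Map an image point (u, v) to ground-plane vehicle coordinates
    (x forward, y lateral) through an assumed homography H."""
    p = H @ np.array([u, v, 1.0])
    return p[0] / p[2], p[1] / p[2]

def ackermann_steering_angle(x, y, wheelbase=2.5):
    """Steering angle toward a look-ahead lane point (x, y): the circular
    arc through the point has radius R = (x^2 + y^2) / (2y), and the
    Ackermann angle for that arc is atan(wheelbase / R)."""
    if abs(y) < 1e-9:
        return 0.0                      # point dead ahead: drive straight
    radius = (x**2 + y**2) / (2.0 * y)
    return float(np.arctan(wheelbase / radius))
```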

Self-Localization of Autonomous Mobile Robot using Multiple Landmarks (다중 표식을 이용한 자율이동로봇의 자기위치측정)

  • 강현덕;조강현
    • Journal of Institute of Control, Robotics and Systems / v.10 no.1 / pp.81-86 / 2004
  • This paper describes self-localization of a mobile robot from multiple landmark candidates in an outdoor environment. Our robot uses an omnidirectional vision system for efficient self-localization; this vision system acquires views in all directions. The robot uses landmarks that appear large in the image, such as buildings, sculptures, and placards, taking vertical edges and their merged regions as features. In our previous work, we found that landmark matching is difficult when the selected landmark candidates belong to image regions with repeated vertical edges. To overcome this problem, the robot uses merged regions of vertical edges: if the interval between vertical edges is short, the robot bundles them into the same region (see the sketch after this entry), and these regions are selected as landmark candidates. The extracted merged regions of vertical edges therefore reduce the ambiguity of landmark matching. The robot compares landmark candidates between the previous and current images and can thus find the same landmark across an image sequence using the proposed feature and method. Experiments on our campus showed efficient self-localization results with the robust landmark matching method.
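The edge-bundling step lends itself to a compact sketch. A minimal version, assuming edges are reported as image column positions and that a "short interval" means a fixed pixel gap (the threshold below is an assumption, not the paper's value):

```python
def merge_vertical_edges(edge_columns, max_gap=5):
    """Bundle vertical edges whose column positions lie within max_gap
    pixels into one merged region, returned as (start_col, end_col)."""
    regions = []
    for col in sorted(edge_columns):
        if regions and col - regions[-1][1] <= max_gap:
            regions[-1][1] = col        # extend the current region
        else:
            regions.append([col, col])  # open a new region
    return [tuple(r) for r in regions]

# e.g. merge_vertical_edges([10, 12, 13, 40, 42, 90])
#      -> [(10, 13), (40, 42), (90, 90)]
```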

Camera Calibration for Machine Vision Based Autonomous Vehicles (머신비젼 기반의 자율주행 차량을 위한 카메라 교정)

  • Lee, Mun-Gyu;An, Taek-Jin
    • Journal of Institute of Control, Robotics and Systems / v.8 no.9 / pp.803-811 / 2002
  • Machine vision systems are usually used to identify traffic lanes and then determine the steering angle of an autonomous vehicle in real time. The steering angle is calculated using a geometric model of various parameters, including the orientation, position, and hardware specification of a camera in the machine vision system. To find accurate values of these parameters, camera calibration is required. This paper presents a new camera-calibration algorithm using known traffic lane features: line thickness and lane width. The camera parameters considered are divided into two groups: Group I (the camera orientation, the uncertainty image scale factor, and the focal length) and Group II (the camera position). First, six control points are extracted from an image of two traffic lines, and eight nonlinear equations are generated from the points. The least-squares method is used to estimate the Group I parameters (a toy fitting sketch follows this entry). Finally, the Group II parameters are determined using point correspondences between the image and the real world. Experimental results prove the feasibility of the proposed algorithm.
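The Group I estimation step (nonlinear equations solved by least squares) can be illustrated with a toy fit. The flat-ground pinhole model below, the camera height, and the reduced parameter set (focal length and tilt only) are assumptions for illustration; the paper's full model also covers pan, roll, and the scale factor.

```python
import numpy as np
from scipy.optimize import least_squares

H_CAM = 1.2  # assumed camera height above the road [m]

def model_row(params, Z):
    """Image row offset of a ground point at forward distance Z for an
    assumed pinhole camera with focal length f and tilt angle t."""
    f, tilt = params
    return f * np.tan(np.arctan(H_CAM / Z) - tilt)

def residuals(params, Z_pts, v_obs):
    return model_row(params, Z_pts) - v_obs

Z_pts = np.array([5.0, 10.0, 15.0, 20.0, 30.0])   # control-point distances
v_obs = model_row([700.0, -0.05], Z_pts)          # synthetic observations
fit = least_squares(residuals, x0=[500.0, 0.0], args=(Z_pts, v_obs))
print(fit.x)  # recovers f = 700 and tilt = -0.05 on this toy data
```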

Linear Velocity Control of the Mobile Robot with the Vision System at Corridor Navigation (비전 센서를 갖는 이동 로봇의 복도 주행 시 직진 속도 제어)

  • Kwon, Ji-Wook;Hong, Suk-Kyo;Chwa, Dong-Kyoung
    • Journal of Institute of Control, Robotics and Systems / v.13 no.9 / pp.896-902 / 2007
  • This paper proposes a vision-based kinematic control method for mobile robots with an on-board camera. In the previous literature on controlling mobile robots with camera vision information, the forward velocity is set to a constant and only the rotational velocity of the robot is controlled. More efficient motion, however, requires controlling the forward velocity as well, depending on the position in the corridor. Thus, both forward and rotational velocities are controlled in the proposed method, so that the mobile robot can move faster when the corner of the corridor is far away and slows down as it approaches the dead end of the corridor. In this way, smooth turning motion along the corridor is possible. To this end, camera vision information is used to obtain the perspective lines and the distance from the current robot position to the dead end. The vanishing point and the pseudo desired position are then obtained, and the forward and rotational velocities are controlled by the LOS (Line Of Sight) guidance law (a toy controller sketch follows this entry). Both numerical and experimental results are included to demonstrate the validity of the proposed method.
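A toy version of the two velocity laws can make the idea concrete. Everything here, the bearing computation from the vanishing point, the gains, and the slow-down distance, is an assumption rather than the paper's actual LOS formulation:

```python
import numpy as np

def corridor_velocities(vanish_u, dead_end_dist, img_cx, focal_px,
                        v_max=0.6, k_w=1.5, slow_dist=3.0):
    """Forward/rotational velocity commands for corridor driving.
    The line-of-sight bearing is taken from the vanishing point's
    horizontal offset; forward speed shrinks linearly once the distance
    to the dead end drops below slow_dist."""
    bearing = np.arctan2(vanish_u - img_cx, focal_px)  # rad, + = right
    w = -k_w * bearing                                 # turn onto the LOS
    v = v_max * min(1.0, dead_end_dist / slow_dist)    # slow near the end
    return v, w
```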

UCT/AGV Design and Implementation using steering function in automizing port system (조향 함수를 고려한 UCT/AGV 설계 및 구현)

  • 윤경식;이동훈;강진구;이권순;이장명
    • Proceedings of the Korean Institute of Navigation and Port Research Conference / 2000.04a / pp.47-56 / 2000
  • In this study, as a preliminary step toward developing an unmanned vehicle to deliver container boxes, we designed and implemented an Automatic Guided Vehicle (AGV) simulator for port facility automation, with an intelligent AGV that can make deliveries around the clock as the goal. To support the simulator's driving, we used multiple sensor systems (vision, ultrasonic, IR) and adopted a high-speed wireless LAN satisfying the IEEE 802.11 standard for bi-directional communication between the main processor on the AGV and the host computer. A Pentium III processor mounted on the AGV's bottom frame combines and computes the information from each sensor system and controls the AGV's driving, while an 80C196KC microcontroller drives the actuating and steering motors (a control-loop sketch follows this entry).

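The division of labor described above (sensor fusion and driving control on the on-board PC, motor actuation on the 80C196KC, status reporting to the host over 802.11) can be sketched as a simple control loop. All names, the message format, and the host address are assumptions:

```python
import socket
import time

def read_sensors():
    """Stub standing in for the vision / ultrasonic / IR sensor systems:
    returns (lane_offset_m, obstacle_dist_m)."""
    return 0.0, 2.0

def send_to_motor_controller(steer, speed):
    """Stub for the link to the 80C196KC actuating/steering MCU."""
    pass

def agv_main_loop(host=("192.168.0.10", 5000), cycles=100):
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    for _ in range(cycles):
        offset, obstacle = read_sensors()
        steer = -0.8 * offset                    # proportional lane keeping
        speed = 0.5 if obstacle > 1.0 else 0.0   # stop for near obstacles
        send_to_motor_controller(steer, speed)
        # report status to the host computer over the wireless LAN
        sock.sendto(f"{steer:.3f},{speed:.3f}".encode(), host)
        time.sleep(0.05)                         # ~20 Hz control cycle
```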

Road Surface Marking Detection for Sensor Fusion-based Positioning System (센서 융합 기반 정밀 측위를 위한 노면 표시 검출)

  • Kim, Dongsuk;Jung, Hogi
    • Transactions of the Korean Society of Automotive Engineers / v.22 no.7 / pp.107-116 / 2014
  • This paper presents camera-based road surface marking detection methods suited to a sensor fusion-based positioning system that consists of a low-cost GPS (Global Positioning System), INS (Inertial Navigation System), EDM (Extended Digital Map), and vision system. The proposed vision system consists of two parts: lane marking detection and RSM (Road Surface Marking) detection. The lane marking detection provides ROIs (Regions of Interest) that are highly likely to contain RSM; the RSM detection generates candidates in those regions and classifies their types. The system focuses on detecting RSM without false detections while operating in real time. To ensure real-time operation, the gating region for lane marking detection varies, and the detection method changes according to an FSM (Finite State Machine) that tracks the driving situation. A single template matching scheme extracts features for both lane marking detection and RSM detection, implemented efficiently with a horizontal integral image (see the sketch after this entry). Further, multi-step verification minimizes false detections.
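The horizontal integral image trick is simple enough to show directly: it is a per-row prefix sum, so summing any horizontal pixel span (the core operation of the template matching) costs one subtraction. A minimal sketch; the paper's actual template layout is not reproduced here:

```python
import numpy as np

def horizontal_integral(img):
    """Per-row prefix sums with a zero column prepended, so that
    sum(img[r, a:b]) == H[r, b] - H[r, a] in O(1)."""
    H = np.cumsum(img.astype(np.int64), axis=1)
    zero = np.zeros((img.shape[0], 1), dtype=np.int64)
    return np.hstack([zero, H])

def span_sum(H, row, a, b):
    """Sum of img[row, a:b] computed from the horizontal integral image."""
    return int(H[row, b] - H[row, a])
```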

A Study on the Implementation of RFID-based Autonomous Navigation System for Robotic Cellular Phone (RCP)

  • Choe, Jae-Il;Choi, Jung-Wook;Oh, Dong-Ik;Kim, Seung-Woo
    • Proceedings of the Institute of Control, Robotics and Systems Conference / 2005.06a / pp.457-462 / 2005
  • The industrial and economic importance of the CP (Cellular Phone) is growing rapidly. Combined with IT technology, the CP is currently one of the most attractive technologies of all; however, unless a breakthrough is found, its growth may soon slow down. RT (Robot Technology) is considered one of the most promising next-generation technologies. Unlike the industrial robots of the past, today's robots require advanced techniques such as soft computing, human-friendly interfaces, interaction techniques, speech recognition, and object recognition. In this study, we present a new technological concept named RCP (Robotic Cellular Phone), which combines RT and CP, with the vision of opening a new direction for the advance of CP, IT, and RT together. RCP consists of three sub-modules, including RCP^Mobility and RCP^Interaction. RCP^Mobility, the main focus of this paper, is an autonomous navigation system that combines RT mobility with the CP. Through RCP^Mobility, the CP can be given robotic functionalities such as auto-charging and real-world robotic entertainment; eventually, the CP may become a robotic pet for the human being. RCP^Mobility consists of various controllers, two of the main ones being the trajectory controller and the self-localization controller. While the trajectory controller is responsible for the wheel-based navigation of the RCP, the self-localization controller provides localization information for the moving RCP. With the coordinate information acquired from the RFID-based self-localization controller, the trajectory controller refines the RCP's movement to achieve better navigation (a toy sketch of this correction follows this entry). A prototype system developed for RCP^Mobility is presented: we describe the overall structure of the system and provide experimental results of RCP navigation.

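The interplay between the two controllers can be sketched as odometry integration corrected by RFID fixes. The tag map, the snap-to-tag correction, and all names are assumptions; the paper's actual controllers are more elaborate:

```python
# Surveyed floor-tag positions (hypothetical IDs and coordinates, meters).
TAG_MAP = {"tag01": (0.0, 0.0), "tag02": (1.0, 0.0), "tag03": (1.0, 1.0)}

class SelfLocalizer:
    """Dead-reckoning estimate corrected whenever the RFID reader
    passes over a known floor tag."""
    def __init__(self):
        self.pose = [0.0, 0.0]

    def integrate_odometry(self, dx, dy):
        self.pose[0] += dx
        self.pose[1] += dy

    def on_tag_read(self, tag_id):
        if tag_id in TAG_MAP:            # snap to the surveyed position
            self.pose = list(TAG_MAP[tag_id])

def trajectory_error(localizer, goal):
    """The trajectory controller steers to shrink this error vector."""
    return goal[0] - localizer.pose[0], goal[1] - localizer.pose[1]
```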

Navigation of a Mobile Robot Using Hand Gesture Recognition (손 동작 인식을 이용한 이동로봇의 주행)

  • Kim, Il-Myeong;Kim, Wan-Cheol;Yun, Gyeong-Sik;Lee, Jang-Myeong
    • Journal of Institute of Control, Robotics and Systems / v.8 no.7 / pp.599-606 / 2002
  • A new method for governing the navigation of a mobile robot using hand gesture recognition is proposed, based on two procedures: acquiring vision information through a 2-DOF camera as the communication medium between a human and the mobile robot, and analyzing the recognized hand gesture commands to control the robot accordingly. In previous research, mobile robots moved passively by following landmarks, beacons, etc. In this paper, to accommodate various changes of situation, a new control system that manages the dynamic navigation of a mobile robot is proposed. Moreover, without the expensive equipment or complex algorithms generally used for hand gesture recognition, a reliable hand gesture recognition system is implemented efficiently, conveying human commands to the mobile robot under only a few constraints (a toy command-dispatch sketch follows this entry).
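The command path from recognized gesture to robot motion can be sketched as a small dispatch table. The gesture labels and velocity values are purely illustrative; the paper does not list its command set:

```python
# Hypothetical gesture -> (forward m/s, rotational rad/s) command table.
GESTURE_COMMANDS = {
    "forward": (0.3, 0.0),
    "left":    (0.2, 0.5),
    "right":   (0.2, -0.5),
    "stop":    (0.0, 0.0),
}

def dispatch(gesture, send_velocity):
    """Translate a recognized hand gesture into a velocity command;
    unknown gestures conservatively stop the robot."""
    v, w = GESTURE_COMMANDS.get(gesture, (0.0, 0.0))
    send_velocity(v, w)

# e.g. dispatch("left", lambda v, w: print(v, w))  ->  0.2 0.5
```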

Development of Sensor Device and Probability-based Algorithm for Braille-block Tracking (확률론에 기반한 점자블록 추종 알고리즘 및 센서장치의 개발)

  • Roh, Chi-Won;Lee, Sung-Ha;Kang, Sung-Chul;Hong, Suk-Kyo
    • Journal of Institute of Control, Robotics and Systems / v.13 no.3 / pp.249-255 / 2007
  • In a fire, it is difficult for a rescue robot to use sensors such as vision, ultrasonic, or laser distance sensors because dense smoke diffuses, refracts, or blocks light and sound. However, the braille blocks installed for the visually impaired at public places such as subway stations can serve as a map for an autonomous mobile robot's localization and navigation. In this paper, we developed a laser sensor scan device that can detect braille blocks despite dense smoke and integrated it into the robot developed at KIST to carry out rescue missions in various hazardous disaster areas. We implemented an MCL algorithm that estimates the robot's attitude from the scanned data (a particle-filter skeleton follows this entry), transformed the braille block map into a topological map, and designed a nonlinear path tracking controller for autonomous navigation. Various simulations and experiments verified that the developed laser sensor device and the proposed localization method are effective for autonomous tracking of braille blocks, and that the autonomous navigation robot system can be used for rescue under fire.
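A skeleton of one MCL cycle, as used for attitude estimation here, is given below. The noise levels, resampling rule, and the `likelihood` interface (a scan-versus-braille-block-map match score) are assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

def mcl_step(particles, weights, odom, scan, likelihood):
    """One predict/update/resample cycle of Monte Carlo Localization over
    particle poses (x, y, theta); likelihood(pose, scan) scores how well
    the laser scan matches the braille-block map at that pose."""
    # Predict: propagate each particle by odometry plus noise.
    particles = particles + odom + rng.normal(0.0, 0.02, particles.shape)
    # Update: reweight by the sensor model and renormalize.
    weights = weights * np.array([likelihood(p, scan) for p in particles])
    weights = weights / weights.sum()
    # Resample when the effective sample size collapses.
    n = len(particles)
    if 1.0 / np.sum(weights ** 2) < n / 2:
        idx = rng.choice(n, size=n, p=weights)
        particles = particles[idx]
        weights = np.full(n, 1.0 / n)
    return particles, weights
```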

NAVIGATION CONTROL OF A MOBILE ROBOT (이동로보트의 궤도관제기법)

  • 홍문성;이상용;한민용
    • Proceedings of the Institute of Control, Robotics and Systems Conference / 1989.10a / pp.226-229 / 1989
  • This paper presents a navigation control method for a vision-guided robot. The robot is equipped with one camera, an IBM/AT-compatible PC, and a sonar system. The robot can either follow a track specified on a monitor screen or navigate to a destination while avoiding obstacles on its way. The robot finds its current position as well as its moving direction by taking an image of a circular pattern placed on the ceiling (a toy pose-recovery sketch follows this entry).

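The ceiling-pattern localization admits a compact toy model. Assume (none of this is from the paper's text) an upward-looking camera, a detected pattern center plus one orientation marker, and a known meters-per-pixel scale at ceiling height:

```python
import numpy as np

def pose_from_ceiling_pattern(center_px, marker_px, img_center, m_per_px):
    """Toy pose recovery: heading is the image angle of the
    center->marker direction; the robot's offset from the pattern is
    the image offset scaled to meters and rotated into the pattern frame."""
    du, dv = np.subtract(marker_px, center_px)
    heading = np.arctan2(dv, du)                 # robot yaw w.r.t. pattern
    off = np.subtract(img_center, center_px) * m_per_px
    c, s = np.cos(-heading), np.sin(-heading)
    x = c * off[0] - s * off[1]                  # rotate the offset into
    y = s * off[0] + c * off[1]                  # the pattern's frame
    return float(x), float(y), float(heading)
```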