• Title/Summary/Keyword: Vision-Based Navigation


Fuzzy Neural Network Based Sensor Fusion and It's Application to Mobile Robot in Intelligent Robotic Space

  • Jin, Tae-Seok;Lee, Min-Jung;Hashimoto, Hideki
    • International Journal of Fuzzy Logic and Intelligent Systems / v.6 no.4 / pp.293-298 / 2006
  • In this paper, a sensor fusion based robot navigation method for the autonomous control of a miniature human interaction robot is presented. The method blends the optimality of the Fuzzy Neural Network (FNN) based control algorithm with the knowledge-representation and learning capabilities of the networked Intelligent Robotic Space (IRS). States of the robot and the IR space, for example the distance between the mobile robot and obstacles and the velocity of the mobile robot, are used as the inputs of the fuzzy logic controller. The navigation strategy is based on a combination of fuzzy rules tuned for both goal approach and obstacle avoidance. To identify the environment, a sensor fusion technique is introduced in which the data of ultrasonic sensors and a vision sensor are fused in the identification process. Preliminary experiments and results are presented to demonstrate the merit of the introduced navigation control algorithm.
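
As a concrete illustration of the rule-blending idea in this abstract, the sketch below implements a minimal fuzzy controller that mixes an obstacle-avoidance rule and a goal-approach rule by membership-weighted averaging. The membership shapes, distance ranges, and steering commands are hypothetical and stand in for the paper's tuned FNN, which is not reproduced here.

```python
import numpy as np

def tri(x, a, b, c):
    """Triangular membership function over [a, c] peaking at b."""
    return max(0.0, min((x - a) / (b - a + 1e-9), (c - x) / (c - b + 1e-9)))

def fuzzy_steer(obstacle_dist, heading_error):
    """Blend obstacle-avoidance and goal-approach rules into one steering command."""
    near = tri(obstacle_dist, 0.0, 0.0, 1.0)   # obstacle is close (m, assumed range)
    far  = tri(obstacle_dist, 0.5, 2.0, 2.0)   # obstacle is far away
    avoid_cmd = -np.sign(heading_error)        # rule: if near, turn away
    goal_cmd  = np.clip(heading_error, -1, 1)  # rule: if far, steer toward the goal
    # Weighted-average defuzzification over the two rule activations
    return (near * avoid_cmd + far * goal_cmd) / (near + far + 1e-9)

print(fuzzy_steer(0.3, 0.4))   # near an obstacle: avoidance dominates
print(fuzzy_steer(1.8, 0.4))   # clear path: goal approach dominates
```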

Visual Tracking Control of Aerial Robotic Systems with Adaptive Depth Estimation

  • Metni, Najib;Hamel, Tarek
    • International Journal of Control, Automation, and Systems / v.5 no.1 / pp.51-60 / 2007
  • This paper describes a visual tracking control law for an Unmanned Aerial Vehicle (UAV) used in the monitoring of structures and the maintenance of bridges. It presents a control law based on computer vision for quasi-stationary flights above a planar target. The first part of the UAV's mission is the navigation from an initial position to a final position, which defines a desired trajectory in an unknown 3D environment. The proposed method uses the homography matrix computed from the visual information and derives, using backstepping techniques, an adaptive nonlinear tracking control law that allows effective tracking and depth estimation, where the depth is the desired distance separating the camera from the target.
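
The control law above is driven by a homography computed from visual data. As a rough sketch of how such a measurement can be obtained in practice, the snippet below estimates the homography between a reference image of the planar target and the current view using OpenCV; the ORB features, brute-force matcher, and RANSAC threshold are generic choices, not the authors' pipeline.

```python
import cv2
import numpy as np

def planar_homography(ref_img, cur_img):
    """Estimate the homography mapping the reference view of a planar target
    into the current camera view (needs at least 4 good matches)."""
    orb = cv2.ORB_create(1000)
    k1, d1 = orb.detectAndCompute(ref_img, None)
    k2, d2 = orb.detectAndCompute(cur_img, None)
    matches = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True).match(d1, d2)
    src = np.float32([k1[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([k2[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 3.0)  # RANSAC rejects outliers
    return H
```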

Experiments of Urban Autonomous Navigation using Lane Tracking Control with Monocular Vision (도심 자율주행을 위한 비전기반 차선 추종주행 실험)

  • Suh, Seung-Beum;Kang, Yeon-Sik;Roh, Chi-Won;Kang, Sung-Chul
    • Journal of Institute of Control, Robotics and Systems / v.15 no.5 / pp.480-487 / 2009
  • Autonomous lane detection with vision is a difficult problem because of varying road conditions, such as shadowy road surfaces, changing light conditions, and signs painted on the road. In this paper we propose a robust lane detection algorithm that overcomes the shadowy-road problem using a statistical method. The algorithm is applied to a vision-based mobile robot system, and the robot follows the lane with a lane-following controller. In parallel with the lane-following controller, the global position of the robot is estimated by the developed localization method to specify the locations where the lane is discontinued. The results of experiments, conducted in a region where GPS measurements are unreliable, show good performance in detecting and following the lane in complex conditions with shadows, water marks, and so on.
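
The abstract does not spell out the statistical method, so the sketch below shows one plausible reading: thresholding each image row against its own brightness statistics, so that a shadow darkening a whole row shifts the threshold with it. The constant k and the row-wise statistics are assumptions for illustration, not the paper's exact algorithm.

```python
import numpy as np

def lane_mask(gray, k=2.0):
    """gray: HxW uint8 road image; returns a boolean mask of bright lane pixels."""
    g = gray.astype(np.float32)
    mu = g.mean(axis=1, keepdims=True)   # per-row mean brightness
    sd = g.std(axis=1, keepdims=True)    # per-row brightness spread
    return g > mu + k * sd               # pixels unusually bright for their row
```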

Implementation of Enhanced Vision for an Autonomous Map-based Robot Navigation

  • Roland, Cubahiro;Choi, Donggyu;Kim, Minyoung;Jang, Jongwook
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference / 2021.10a / pp.41-43 / 2021
  • Robot Operating System (ROS) has been a prominent and successful framework in the robotics industry and academia. However, the framework has long been focused on and limited to navigation of robots and manipulation of objects in the environment. This focus leaves out other important fields such as speech recognition and vision abilities. Our goal is to take advantage of ROS's capacity to integrate additional libraries of programming functions aimed at real-time computer vision with a depth-image camera. In this paper we focus on the implementation of upgraded vision with the help of a depth camera, which provides high-quality data for a much enhanced and more accurate understanding of the environment. The varied data from the cameras are then incorporated into the ROS communication structure for any potential use. For this particular case, the system uses OpenCV libraries to manipulate the data from the camera and provide face-detection capabilities to the robot while it navigates an indoor environment. The whole system has been implemented and tested on the latest Turtlebot3 and Raspberry Pi 4 hardware.
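
As a minimal sketch of the face-detection capability described here, the loop below runs OpenCV's stock Haar cascade on frames from a camera stream. Wiring the detections into ROS topics and using the depth channel are omitted, and the camera index is a placeholder.

```python
import cv2

cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

cap = cv2.VideoCapture(0)                  # any RGB stream, index is hypothetical
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    for (x, y, w, h) in cascade.detectMultiScale(gray, 1.1, 5):
        cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
    cv2.imshow("faces", frame)
    if cv2.waitKey(1) == 27:               # Esc to quit
        break
cap.release()
```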

Vision Based Map-Building Using Singular Value Decomposition Method for a Mobile Robot in Uncertain Environment

  • Park, Kwang-Ho;Kim, Hyung-O;Kee, Chang-Doo;Na, Seung-Yu
    • Institute of Control, Robotics and Systems: Conference Proceedings / 2001.10a / pp.101.1-101 / 2001
  • This paper describes grid mapping for a vision-based mobile robot in an uncertain indoor environment. Map building is a prerequisite for navigation of a mobile robot, and the problem of feature correspondence across two images is well known to be of crucial importance for vision-based mapping. We use a stereo matching algorithm obtained by singular value decomposition of an appropriate correspondence strength matrix. This correspondence strength is a correlation weight over local measurements that quantifies the similarity between features. The visual range data from the reconstructed disparity image form an occupancy grid representation. The occupancy map is a grid-based map in which each cell holds a value indicating the probability at that location ...
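
The SVD-of-a-correspondence-strength-matrix idea follows the classic Scott and Longuet-Higgins pairing scheme, sketched below: build a strength matrix from a Gaussian of feature distances, force its singular values to one, and accept mutually dominant pairs. The Gaussian strength measure and sigma here stand in for the paper's correlation weight, which is not reproduced.

```python
import numpy as np

def svd_match(feat_left, feat_right, sigma=10.0):
    """feat_left: NxD, feat_right: MxD feature vectors; returns matched index pairs."""
    # Correspondence strength: high when features are similar
    d2 = ((feat_left[:, None, :] - feat_right[None, :, :]) ** 2).sum(-1)
    G = np.exp(-d2 / (2 * sigma ** 2))
    U, _, Vt = np.linalg.svd(G, full_matrices=False)
    P = U @ Vt                                # G with singular values forced to 1
    pairs = []
    for i in range(P.shape[0]):
        j = int(np.argmax(P[i]))              # best right feature for left i...
        if int(np.argmax(P[:, j])) == i:      # ...and vice versa: mutual maximum
            pairs.append((i, j))
    return pairs
```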

The Method of Virtual Reality-based Surgical Navigation to Reproduce the Surgical Plan in Spinal Fusion Surgery (척추 융합술에서 수술 계획을 재현하기 위한 가상현실 기반 수술 내비게이션 방법)

  • Song, Chanho;Son, Jaebum;Jung, Euisung;Lee, Hoyul;Park, Young-Sang;Jeong, Yoosoo
    • The Journal of Korea Robotics Society / v.17 no.1 / pp.8-15 / 2022
  • In this paper, we propose a method of virtual reality-based surgical navigation to reproduce the pre-planned position and angle of the pedicle screw in spinal fusion surgery. The goal of the proposed method is to quantitatively record the surgical plan by applying a virtual guide coordinate system and to reproduce it during surgery through virtual reality. In the surgical planning step, the insertion position and angle of the pedicle screw are planned and stored based on the virtual guide coordinate system. To implement the virtual reality-based surgical navigation, a vision tracking system is applied to set the patient coordinate system, and paired point-based patient-to-image registration is performed. In the surgical navigation step, the surgical plan is reproduced by quantitatively visualizing the pre-planned insertion position and angle of the pedicle screw using the virtual guide coordinate system. We conducted a phantom experiment to verify the error between the surgical plan and the surgical navigation; the results showed an average target registration error of 1.47 ± 0.64 mm with the proposed method. We believe that our method can be used to accurately reproduce a pre-established surgical plan in spinal fusion surgery.
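
Paired point-based patient-to-image registration is conventionally solved with the closed-form SVD (Kabsch/Horn) rigid alignment sketched below, followed by a target registration error check at a point not used in the fit. The fiducial coordinates are invented for illustration; this is not the authors' implementation.

```python
import numpy as np

def register_points(P, Q):
    """Find R, t minimizing ||R @ P + t - Q|| over paired 3xN point sets."""
    cp, cq = P.mean(axis=1, keepdims=True), Q.mean(axis=1, keepdims=True)
    H = (P - cp) @ (Q - cq).T
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1, 1, np.sign(np.linalg.det(Vt.T @ U.T))])  # avoid a reflection
    R = Vt.T @ D @ U.T
    t = cq - R @ cp
    return R, t

# Hypothetical fiducials: image space Q is patient space P rotated 90 deg and shifted
P = np.array([[0, 1, 0, 1], [0, 0, 1, 1], [0, 0, 0, 1]], dtype=float)
Rz = np.array([[0, -1, 0], [1, 0, 0], [0, 0, 1]], dtype=float)
shift = np.array([[10.0], [5.0], [0.0]])
Q = Rz @ P + shift
R, t = register_points(P, Q)
target = np.array([[0.5], [0.5], [0.5]])          # a point not used for the fit
print(np.linalg.norm((R @ target + t) - (Rz @ target + shift)))  # TRE, ~0 here
```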

Design of Multi-Sensor-Based Open Architecture Integrated Navigation System for Localization of UGV

  • Choi, Ji-Hoon;Oh, Sang Heon;Kim, Hyo Seok;Lee, Yong Woo
    • Journal of Positioning, Navigation, and Timing / v.1 no.1 / pp.35-43 / 2012
  • The UGV is a special-purpose field robot developed for mine detection, surveillance, and transportation. To accomplish the missions of the UGV successfully, accurate and reliable navigation data must be provided. This paper presents the design and implementation of a multi-sensor-based open architecture integrated navigation system for localization of a UGV. The presented architecture hierarchically classifies the integrated system into four layers, and data communication between layers is based on distributed object-oriented middleware. The navigation manager determines the navigation mode from the QoS information of each navigation sensor, and the integrated filter performs navigation mode-based data fusion in the filtering process. All navigation variables, including the filter parameters and the QoS of the navigation data, can be modified in the GUI, so the user can operate the integrated navigation system more conveniently. A conventional GPS/INS integrated system does not guarantee long-term reliability of localization when a GPS solution is unavailable due to signal blockage or intentional jamming in outdoor environments. The presented integration algorithm, however, based on an adaptive federated filter structure with an FDI algorithm, can effectively integrate the outputs of multiple sensors, such as 3D LADAR, vision, an odometer, a magnetic compass, and zero velocity, to enhance the accuracy of localization when GPS is unavailable. A field test was carried out with the UGV, and the results show that the presented integrated navigation system provides more robust and accurate localization than a conventional GPS/INS integrated system in outdoor environments.
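
A minimal sketch of the fusion step of a federated filter is shown below: local filter estimates are combined by information (inverse-covariance) weighting. Time propagation, information-sharing factors, and the FDI logic are omitted, and the two local estimates are hypothetical numbers, not field data.

```python
import numpy as np

def federated_fuse(estimates):
    """estimates: list of (x, P) from local filters; returns the fused (x, P)."""
    info = sum(np.linalg.inv(P) for _, P in estimates)       # total information
    Pf = np.linalg.inv(info)
    xf = Pf @ sum(np.linalg.inv(P) @ x for x, P in estimates)
    return xf, Pf

x_vis, P_vis = np.array([1.0, 2.0]), np.diag([0.04, 0.04])   # e.g. a vision fix
x_odo, P_odo = np.array([1.1, 1.9]), np.diag([0.25, 0.25])   # e.g. an odometry fix
print(federated_fuse([(x_vis, P_vis), (x_odo, P_odo)]))      # weighted toward vision
```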

LVLN : A Landmark-Based Deep Neural Network Model for Vision-and-Language Navigation (LVLN: 시각-언어 이동을 위한 랜드마크 기반의 심층 신경망 모델)

  • Hwang, Jisu;Kim, Incheol
    • KIPS Transactions on Software and Data Engineering / v.8 no.9 / pp.379-390 / 2019
  • In this paper, we propose a novel deep neural network model for Vision-and-Language Navigation (VLN) named LVLN (Landmark-based VLN). In addition to the visual features extracted from input images and the linguistic features extracted from the natural language instructions, this model makes use of information about places and landmark objects detected in the images. The model also applies a context-based attention mechanism in order to associate each entity mentioned in the instruction, the corresponding region of interest (ROI) in the image, and the corresponding place and landmark object detected in the image with each other. Moreover, in order to improve the success rate of arriving at the target goal, the model adopts a progress monitor module that checks substantive progress toward the target goal. Conducting experiments with the Matterport3D simulator and the Room-to-Room (R2R) benchmark dataset, we demonstrate the high performance of the proposed model.
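
The context-based attention described here can be illustrated with a generic scaled dot-product attention over ROI features, sketched below with instruction-token queries attending over detected regions. The dimensions and random features are placeholders; this is not the LVLN network itself.

```python
import numpy as np

def attend(queries, keys, values):
    """queries: TxD token features; keys/values: RxD ROI features."""
    scores = queries @ keys.T / np.sqrt(keys.shape[1])    # TxR alignment scores
    w = np.exp(scores - scores.max(axis=1, keepdims=True))
    w /= w.sum(axis=1, keepdims=True)                     # softmax over ROIs
    return w @ values                                     # per-token grounded feature

tokens = np.random.randn(5, 64)   # e.g. "walk past the sofa" token embeddings
rois   = np.random.randn(8, 64)   # detected region features
grounded = attend(tokens, rois, rois)
```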

Classification between Intentional and Natural Blinks in Infrared Vision Based Eye Tracking System

  • Kim, Song-Yi;Noh, Sue-Jin;Kim, Jin-Man;Whang, Min-Cheol;Lee, Eui-Chul
    • Journal of the Ergonomics Society of Korea / v.31 no.4 / pp.601-607 / 2012
  • Objective: The aim of this study is to classify intentional and natural blinks in a vision-based eye tracking system. By implementing the classification method, we expect that an eye tracking method can be designed that performs well for both navigation and selection interactions. Background: Eye tracking is currently widely used to increase the immersion and interest of the user by supporting a natural user interface. Although conventional eye tracking systems handle navigation interaction well by tracking pupil movement, there is no breakthrough selection interaction method. Method: To determine the classification threshold between intentional and natural blinks, we performed an experiment capturing eye images, including intentional and natural blinks, from 12 subjects. By analyzing successive eye images, two features, eye-closed duration and pupil size variation after eye opening, were collected. The classification threshold was then determined by SVM (Support Vector Machine) training. Results: Experimental results showed that the average detection accuracy for intentional blinks was 97.4% in a wearable eye tracking system environment. The detection accuracy in a non-wearable camera environment was 92.9% with the same SVM classifier. Conclusion: By combining the two features using an SVM, we could implement an accurate selection interaction method in a vision-based eye tracking system. Application: The results of this research may help to improve the efficiency and usability of vision-based eye tracking by supporting a reliable selection interaction scheme.
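
A minimal sketch of the classification step, assuming the two features named in the abstract (eye-closed duration and pupil-size variation after reopening): train an SVM on labeled blinks and predict on a new one. The sample values, labels, and linear kernel are invented for illustration and do not reproduce the study's data.

```python
import numpy as np
from sklearn.svm import SVC

# Columns: [closed duration (s), pupil-size variation]; labels: 1 = intentional
X = np.array([[0.12, 0.02], [0.15, 0.03], [0.45, 0.20], [0.60, 0.25]])
y = np.array([0, 0, 1, 1])

clf = SVC(kernel="linear").fit(X, y)
print(clf.predict([[0.50, 0.22]]))   # a long, pronounced blink: expect 1
```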

Navigation Characteristics of a Virtual Human using a Limited Perception-based Mapping (제한적 인지 기반의 맵핑을 이용한 가상인간의 항해 특성)

  • Han, Chang-Hee;Kim, Lae-Hyun;Kim, Tae-Woo
    • Journal of the Korea Society for Simulation / v.14 no.2 / pp.93-103 / 2005
  • This paper presents the characteristics of a virtual human's navigation using limited perception-based mapping. Previous approaches to virtual human navigation have used omniscient perception requiring the full layout of a virtual environment in advance. However, these approaches fall short of a fundamental solution for the human-likeness of a virtual human, because human behavior is based on limited rather than omniscient perception. In this paper, we integrated Hill's mapping algorithm with a virtual human to study the virtual human's navigation under limited perception. This approach does not require the full layout of the virtual environment, a 360-degree field of view, or vision through walls. In addition to static objects such as buildings, we consider enemy appearances that can affect the virtual human's navigation; the number of enemies is the experimental variable in this study. As the number of enemies varies, the changes in arrival rate and time taken to reach the goal position were observed. The virtual human navigates under two conditions: taking the shortest path to the goal position, or avoiding enemies when it encounters them. The results indicate that the virtual human's navigation corresponds to a human cognitive process, and thus this research can serve as a framework for the human-likeness of virtual humans.
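
The core idea here, a map built only from what the agent has actually perceived, can be sketched as below: belief cells start unknown and are revealed only within sensing range. The grid, range, and the omission of line-of-sight occlusion are simplifications for illustration; Hill's algorithm itself is not reproduced.

```python
import numpy as np

UNKNOWN, FREE, WALL = -1, 0, 1
world = np.zeros((10, 10), dtype=int)      # ground-truth environment (FREE = 0)
world[4, 2:8] = WALL                       # a wall segment in the environment

belief = np.full_like(world, UNKNOWN)      # the virtual human's partial map

def sense(pos, rng=2):
    """Reveal only cells within Chebyshev range rng of the agent
    (line-of-sight occlusion is omitted for brevity)."""
    r, c = pos
    r0, r1 = max(0, r - rng), min(world.shape[0], r + rng + 1)
    c0, c1 = max(0, c - rng), min(world.shape[1], c + rng + 1)
    belief[r0:r1, c0:c1] = world[r0:r1, c0:c1]

sense((2, 4))
print(belief)   # everything outside the 5x5 sensed window is still UNKNOWN
```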
