• Title/Summary/Keyword: Camera-based navigation


The Design and Implementation Android OS Based Portable Navigation System For Visually Impaired Person and N : N Service (시각 장애인을 위한 Android OS 기반의 Portable Navigation System 설계 및 구현 과 N : N Service)

  • Kong, Sung-Hun;Kim, Young-Kil
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference
    • /
    • 2012.05a
    • /
    • pp.327-330
    • /
    • 2012
  • As cities grow rapidly, roads carry heavy traffic and many buildings are under construction, which makes it harder for visually impaired people to walk comfortably. To alleviate this problem, we introduce an Android-based Portable Navigation System that assists visually impaired pedestrians: through the system, a service center provides instant, real-time monitoring for their convenience. The system integrates GPS, a camera, audio, and Wi-Fi, so the GPS location and camera images can be sent to the service center over a Wi-Fi network. The transmitted GPS location lets the service center determine the user's whereabouts and mark the location on a map, while the delivered camera images let the center monitor the user's view. The center can also offer live spoken guidance through the audio channel. In short, the Android-based Portable Navigation System is a specialized navigation system that makes walking more comfortable for visually impaired people; a minimal client-side sketch of the GPS/camera upload loop follows below.
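
A minimal client-side sketch of the upload loop described above, assuming a Python environment with OpenCV and `requests`. The endpoint URL, payload fields, and the `get_gps()` helper are hypothetical; the paper does not specify the actual wire protocol, and the real handset code runs on Android rather than desktop Python.

```python
# Minimal sketch of the client-side upload loop described in the abstract.
# The endpoint URL, payload layout, and get_gps() helper are assumptions
# for illustration; the paper does not specify the actual wire protocol.
import time

import cv2          # pip install opencv-python
import requests     # pip install requests

SERVICE_CENTER_URL = "http://service-center.example.com/upload"  # hypothetical

def get_gps():
    """Placeholder for the device's GPS reading (lat, lon)."""
    return 37.4563, 126.7052  # hypothetical fixed coordinates

def upload_loop(camera_index=0, period_s=1.0):
    cap = cv2.VideoCapture(camera_index)
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        lat, lon = get_gps()
        # JPEG-compress the camera view so it fits a Wi-Fi uplink.
        ok, jpeg = cv2.imencode(".jpg", frame, [cv2.IMWRITE_JPEG_QUALITY, 70])
        if not ok:
            continue
        requests.post(
            SERVICE_CENTER_URL,
            data={"lat": lat, "lon": lon, "timestamp": time.time()},
            files={"view": ("view.jpg", jpeg.tobytes(), "image/jpeg")},
            timeout=5,
        )
        time.sleep(period_s)  # send roughly once per second

if __name__ == "__main__":
    upload_loop()
```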


Virtual Environment Building and Navigation of Mobile Robot using Command Fusion and Fuzzy Inference

  • Jin, Taeseok
    • Journal of the Korean Society of Industry Convergence
    • /
    • v.22 no.4
    • /
    • pp.427-433
    • /
    • 2019
  • This paper proposes a fuzzy inference model for map building and navigation of a mobile robot with an active camera, which navigates intelligently to a goal location in unknown environments using sensor fusion driven by situational commands from the active camera sensor. The active camera gives the robot the capability to estimate and track image features over a hallway field of view. Instead of a "physical sensor fusion" method, which generates the robot's trajectory from an environment model and sensory data, a command fusion method is used to govern navigation. The navigation strategy combines fuzzy rules tuned for both goal approach and obstacle avoidance, and a command fusion technique is introduced for environment identification, in which the active camera's sensory data from the navigation experiments are fused into the identification process. Navigation performance improves on that achieved with fuzzy inference alone and shows significant advantages over conventional command fusion techniques. Experimental evidence demonstrates that the proposed method can be used reliably over a wide range of relative positions between the active camera and the tracked features; a minimal sketch of fusing goal-approach and obstacle-avoidance commands follows below.
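
A minimal sketch of command fusion under stated assumptions: two fuzzy behaviours (goal approach and obstacle avoidance) each propose a steering command with a firing strength, and the commands are fused by a weighted average. The membership shapes, gains, and limits are illustrative, not the paper's tuned rule base.

```python
# Minimal sketch of command fusion: each fuzzy behaviour proposes a steering
# command with a firing strength, and the proposals are fused by their
# strengths. All shapes and gains below are illustrative assumptions.
import math

def tri(x, a, b, c):
    """Triangular membership function."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def goal_approach(goal_bearing_rad):
    """Steer toward the goal; strength grows with bearing error."""
    strength = min(1.0, abs(goal_bearing_rad) / math.pi)
    command = max(-1.0, min(1.0, goal_bearing_rad))   # rad/s, clipped
    return command, strength

def obstacle_avoidance(obstacle_range_m, obstacle_bearing_rad):
    """Steer away from a nearby obstacle seen by the active camera."""
    near = tri(obstacle_range_m, 0.0, 0.0, 1.5)       # 'near' membership
    command = -math.copysign(1.0, obstacle_bearing_rad) * near
    return command, near

def fuse(goal_bearing_rad, obstacle_range_m, obstacle_bearing_rad):
    cmds = [goal_approach(goal_bearing_rad),
            obstacle_avoidance(obstacle_range_m, obstacle_bearing_rad)]
    total = sum(w for _, w in cmds)
    if total == 0.0:
        return 0.0
    # Weighted-average defuzzification of the proposed steering commands.
    return sum(c * w for c, w in cmds) / total

if __name__ == "__main__":
    print(fuse(goal_bearing_rad=0.4, obstacle_range_m=0.8, obstacle_bearing_rad=0.3))
```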

The Design and Implementation Navigation System For Visually Impaired Person (시각 장애인을 위한 Navigation System의 설계 및 구현)

  • Kong, Sung-Hun;Kim, Young-Kil
    • Journal of the Korea Institute of Information and Communication Engineering
    • /
    • v.16 no.12
    • /
    • pp.2702-2707
    • /
    • 2012
  • As cities grow rapidly, roads carry heavy traffic and many buildings are under construction, which makes it harder for visually impaired people to walk comfortably. To alleviate this problem, we introduce a navigation system that assists visually impaired pedestrians: through the system, a service center provides instant, real-time monitoring for their convenience. The system integrates GPS, a camera, audio, and Wi-Fi, so the GPS location and camera images can be sent to the service center over a Wi-Fi network. The transmitted GPS location lets the service center determine the user's whereabouts and mark the location on a map, while the delivered camera images let the center monitor the user's view. The center can also offer live spoken guidance through the audio channel. In short, this is a specialized navigation system that makes walking more comfortable for visually impaired people; a minimal sketch of the service-center side, complementing the client sketch above, follows below.
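
A minimal sketch of the service-center side under the same hypothetical protocol as the client sketch above: a small Flask endpoint receives the GPS fix and camera frame and exposes the last position for the monitoring map. The route, field names, and storage layout are assumptions, not taken from the paper.

```python
# Minimal sketch of the service-center side: receive the GPS fix and camera
# frame posted by the handset and make them available to the human monitor.
# The route, field names, and storage layout mirror the hypothetical client
# sketch above and are not taken from the paper.
from flask import Flask, request   # pip install flask

app = Flask(__name__)
latest = {"lat": None, "lon": None}   # last known position to mark on the map

@app.post("/upload")
def upload():
    latest["lat"] = float(request.form["lat"])
    latest["lon"] = float(request.form["lon"])
    # Persist the user's current camera view for the monitoring console.
    request.files["view"].save("latest_view.jpg")
    return {"status": "ok", "marked_at": [latest["lat"], latest["lon"]]}

@app.get("/position")
def position():
    # The monitoring UI polls this to mark the walker's location on its map.
    return latest

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=8000)
```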

A Hybrid of Smartphone Camera and Basestation Wide-area Indoor Positioning Method

  • Jiao, Jichao;Deng, Zhongliang;Xu, Lianming;Li, Fei
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.10 no.2
    • /
    • pp.723-743
    • /
    • 2016
  • Indoor positioning is considered an enabler for a variety of applications, and demand for indoor positioning services has accelerated because people spend most of their time indoors. Meanwhile, the powerful camera integrated into a smartphone makes it an efficient platform for navigation and positioning. However, high-accuracy indoor positioning on a smartphone faces two constraints: (1) the phone's limited computational and memory resources, and (2) users moving through large buildings. To address these issues, this paper uses TC-OFDM to compute coarse positioning information, including horizontal position and altitude, to assist smartphone camera-based positioning. Moreover, a unified representation model of image features across a variety of scenes, named FAST-SURF, is established for computing the fine location. Finally, an optimized marginalized particle filter is proposed to fuse the positioning information from TC-OFDM and the images. Experiments show a wide-area location accuracy of 0.823 m (1σ) horizontally and 0.5 m vertically. Compared to Wi-Fi-based and iBeacon-based positioning methods, the proposed method is powerful while remaining easy to deploy and optimize; a simplified particle-filter fusion sketch follows below.
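
A simplified fusion sketch, assuming a plain 2D bootstrap particle filter rather than the paper's optimized marginalized particle filter: a coarse TC-OFDM-style fix and a finer image-based fix are fused by reweighting the same particle set with different measurement noise levels. All noise values here are assumed.

```python
# Minimal sketch of fusing a coarse radio (TC-OFDM-style) fix with a finer
# image-based fix using a plain bootstrap particle filter in 2D. This is a
# simplified stand-in for the paper's optimized marginalized particle filter.
import numpy as np

N = 1000
rng = np.random.default_rng(0)
particles = rng.normal([0.0, 0.0], 5.0, size=(N, 2))  # initial spread (m)
weights = np.full(N, 1.0 / N)

def predict(step_xy, step_std=0.3):
    """Propagate particles with the pedestrian's estimated displacement."""
    global particles
    particles += step_xy + rng.normal(0.0, step_std, size=particles.shape)

def update(measurement_xy, meas_std):
    """Reweight particles against one position measurement and resample."""
    global particles, weights
    d2 = np.sum((particles - measurement_xy) ** 2, axis=1)
    weights *= np.exp(-0.5 * d2 / meas_std**2) + 1e-300
    weights /= weights.sum()
    idx = rng.choice(N, size=N, p=weights)   # simple multinomial resampling
    particles = particles[idx]
    weights = np.full(N, 1.0 / N)

def estimate():
    return particles.mean(axis=0)

if __name__ == "__main__":
    predict(np.array([0.7, 0.0]))
    update(np.array([0.9, 0.2]), meas_std=3.0)    # coarse TC-OFDM fix
    update(np.array([0.72, 0.05]), meas_std=0.5)  # fine camera-based fix
    print("fused position:", estimate())
```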

Design and Implementation of PDA-based Image Surveillance System for Harbor Security using IP Camera

  • Shim, Joon-Hwan
    • Journal of Navigation and Port Research
    • /
    • v.31 no.9
    • /
    • pp.779-784
    • /
    • 2007
  • This paper describes a new progressive embedded Internet Protocol (IP) camera for real-time image transmission, applied to ship safety and security in a seashore area; the developed embedded system proved more effective than conventional systems. Many nations have established harbor security systems to raise their ports to international standards. Incheon port, for example, has recently moved from guard-based security to a centralized system built on video surveillance, with CCTV cameras installed at the entrance gate and around the fence, gaining effectiveness while reducing manpower and cost. In this paper, we therefore design and implement a Personal Digital Assistant (PDA) based image surveillance system for harbor security using IP cameras in a ubiquitous environment. Compared with conventional systems, it is more effective in an emergency while requiring lower cost and less manpower; a minimal sketch of pulling frames from an IP camera stream follows below.
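
A minimal sketch of the viewer side, assuming the IP camera exposes a standard network video stream readable by OpenCV; the stream URL is a placeholder, and the paper's embedded camera protocol and PDA client are not reproduced here.

```python
# Minimal sketch of a surveillance viewer: pull frames from an IP camera's
# network stream and display them, roughly what the PDA client does.
import cv2  # pip install opencv-python

STREAM_URL = "rtsp://camera.example.com/stream1"  # hypothetical camera URL

def monitor(stream_url=STREAM_URL):
    cap = cv2.VideoCapture(stream_url)
    while True:
        ok, frame = cap.read()
        if not ok:
            break                      # stream dropped; a real client reconnects
        cv2.imshow("harbor gate camera", frame)
        if cv2.waitKey(1) & 0xFF == ord("q"):
            break
    cap.release()
    cv2.destroyAllWindows()

if __name__ == "__main__":
    monitor()
```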

Performance Evaluation of a Compressed-State Constraint Kalman Filter for a Visual/Inertial/GNSS Navigation System

  • Yu Dam Lee;Taek Geun Lee;Hyung Keun Lee
    • Journal of Positioning, Navigation, and Timing
    • /
    • v.12 no.2
    • /
    • pp.129-140
    • /
    • 2023
  • Autonomous driving systems are likely to be operated in various complex environments. However, the well-known integrated Global Navigation Satellite System (GNSS)/Inertial Navigation System (INS), which is currently the major source of absolute position information, still has difficulty positioning accurately in harsh signal environments such as urban canyons. To overcome these difficulties, integrated Visual/Inertial/GNSS (VIG) navigation systems have been studied extensively in various areas. Recently, a Compressed-State Constraint Kalman Filter (CSCKF)-based VIG navigation system (CSCKF-VIG) using a monocular camera, an Inertial Measurement Unit (IMU), and GNSS receivers has been studied with the aim of providing robust and accurate position information in urban areas. Based on time-propagation measurement fusion theory, this filter does not need to keep camera states in the system state. This paper presents a performance evaluation of the CSCKF-VIG system against conventional navigation systems. First, the CSCKF-VIG is introduced in detail and contrasted with the well-known Multi-State Constraint Kalman Filter (MSCKF). The CSCKF-VIG system is then evaluated in a field experiment under different GNSS availability conditions. The results show improved accuracy in the GNSS-degraded environment compared to the conventional systems; a generic Kalman predict/update skeleton, which both filters build on, is sketched below.
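
Both the MSCKF and the CSCKF build on the standard Kalman predict/update cycle; the sketch below shows only that generic cycle in NumPy with toy matrices, as an assumption-laden illustration rather than the compressed-state or multi-state constraint machinery of the paper.

```python
# Generic Kalman filter predict/update cycle (time propagation + measurement
# update). The toy constant-velocity model and noise values are assumptions;
# this is not the CSCKF or MSCKF formulation itself.
import numpy as np

def kf_predict(x, P, F, Q):
    """Time propagation of state x and covariance P."""
    x = F @ x
    P = F @ P @ F.T + Q
    return x, P

def kf_update(x, P, z, H, R):
    """Measurement update with observation z."""
    S = H @ P @ H.T + R                     # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)          # Kalman gain
    x = x + K @ (z - H @ x)
    P = (np.eye(len(x)) - K @ H) @ P
    return x, P

if __name__ == "__main__":
    # Toy 1D constant-velocity example: state = [position, velocity].
    dt = 0.1
    F = np.array([[1.0, dt], [0.0, 1.0]])
    Q = 0.01 * np.eye(2)
    H = np.array([[1.0, 0.0]])              # GNSS-like position measurement
    R = np.array([[0.5]])
    x, P = np.zeros(2), np.eye(2)
    x, P = kf_predict(x, P, F, Q)
    x, P = kf_update(x, P, np.array([0.12]), H, R)
    print(x)
```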

A Study on Development of Visual Navigation System based on Neural Network Learning

  • Shin, Suk-Young;Lee, Jang-Hee;You, Yang-Jun;Kang, Hoon
    • International Journal of Fuzzy Logic and Intelligent Systems
    • /
    • v.2 no.1
    • /
    • pp.1-8
    • /
    • 2002
  • Vision-based learning has been integrated into several navigation systems. This paper shows that the proposed system recognizes difficult indoor roads without any specific marks such as painted guide lines or tape. The robot navigates with visual sensors, using visual information to guide itself along the road, and a neural network is used to learn the driving pattern and decide where to move. We present a vision-based process for an Autonomous Mobile Robot (AMR) that can navigate indoor roads with simple computation, using a single USB web camera instead of an expensive CCD camera to build a smaller and cheaper navigation system; a minimal image-to-steering classifier sketch follows below.
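
A minimal sketch of an image-to-steering classifier in PyTorch, assuming three steering classes (left/straight/right) and 32x24 grayscale input; the network size, labels, and training data are illustrative assumptions, not the paper's actual network.

```python
# Minimal sketch of learning a steering decision from downsampled web-camera
# frames. Input resolution, network size, and the three-way steering labels
# are assumptions for illustration only.
import torch
import torch.nn as nn

class SteeringNet(nn.Module):
    def __init__(self, h=24, w=32, n_actions=3):
        super().__init__()
        self.net = nn.Sequential(
            nn.Flatten(),
            nn.Linear(h * w, 64), nn.ReLU(),
            nn.Linear(64, n_actions),      # scores for left / straight / right
        )

    def forward(self, x):                  # x: (batch, 1, 24, 32) grayscale
        return self.net(x)

if __name__ == "__main__":
    model = SteeringNet()
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    loss_fn = nn.CrossEntropyLoss()
    # One dummy training step on random frames and labels.
    frames = torch.rand(8, 1, 24, 32)
    labels = torch.randint(0, 3, (8,))
    loss = loss_fn(model(frames), labels)
    opt.zero_grad(); loss.backward(); opt.step()
    action = model(frames[:1]).argmax(dim=1).item()   # 0=left, 1=straight, 2=right
    print("steering decision:", action)
```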

A vision based people tracking and following for mobile robots using CAMSHIFT and KLT feature tracker (캠시프트와 KLT특징 추적 알고리즘을 융합한 모바일 로봇의 영상기반 사람추적 및 추종)

  • Lee, S.J.;Won, Mooncheol
    • Journal of Korea Multimedia Society
    • /
    • v.17 no.7
    • /
    • pp.787-796
    • /
    • 2014
  • Many mobile robot navigation methods use laser scanners, ultrasonic sensors, vision cameras, and so on for obstacle detection and path following, whereas humans navigate using vision (i.e., the eyes) alone. In this paper, we study a mobile robot control method based only on camera vision. A Gaussian mixture model with shadow removal separates the foreground from the background of the camera image, and the robot follows a person by combining the CAMSHIFT and KLT feature tracking algorithms on the foreground information. The algorithm is verified by experiments in which a robot tracks and follows a person in a hallway; a minimal OpenCV sketch of this pipeline follows below.
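
A minimal OpenCV sketch of the pipeline named in the abstract: MOG2 Gaussian-mixture background subtraction with shadow suppression, CAMSHIFT on the tracked person's hue histogram masked to the foreground, and KLT (Lucas-Kanade) feature tracking. The initial person window and all thresholds are assumed for illustration.

```python
# Sketch of GMM background subtraction + CAMSHIFT + KLT person following.
# The initial person window and thresholds are illustrative assumptions.
import cv2
import numpy as np

cap = cv2.VideoCapture(0)
backsub = cv2.createBackgroundSubtractorMOG2(detectShadows=True)
track_window = (200, 100, 80, 160)          # assumed initial (x, y, w, h) of the person
term_crit = (cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 10, 1)

# Build a hue histogram of the initial person region for CAMSHIFT.
ok, frame = cap.read()
x, y, w, h = track_window
hsv_roi = cv2.cvtColor(frame[y:y+h, x:x+w], cv2.COLOR_BGR2HSV)
roi_hist = cv2.calcHist([hsv_roi], [0], None, [180], [0, 180])
cv2.normalize(roi_hist, roi_hist, 0, 255, cv2.NORM_MINMAX)

prev_gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
p0 = cv2.goodFeaturesToTrack(prev_gray, maxCorners=100, qualityLevel=0.3, minDistance=7)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    fg = backsub.apply(frame)
    fg[fg == 127] = 0                        # MOG2 marks shadows as 127; drop them
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
    back_proj = cv2.calcBackProject([hsv], [0], roi_hist, [0, 180], 1)
    back_proj = cv2.bitwise_and(back_proj, back_proj, mask=fg)   # foreground only
    _, track_window = cv2.CamShift(back_proj, track_window, term_crit)

    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    if p0 is not None and len(p0) > 0:
        p1, st, _ = cv2.calcOpticalFlowPyrLK(prev_gray, gray, p0, None)
        p0 = p1[st.flatten() == 1].reshape(-1, 1, 2)             # keep tracked KLT points
    prev_gray = gray

    x, y, w, h = track_window
    cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
    cv2.imshow("person following", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break
cap.release()
cv2.destroyAllWindows()
```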

An Estimation Method of Drivable Path for Unmanned Ground Vehicle Using Camera and 2D Laser Rangefinder on Unpaved Road (카메라와 2차원 레이저 거리센서를 활용한 비포장 도로 환경에서의 지상무인차량의 주행가능영역 추정 기법)

  • Ahn, Seong-Yong;Kim, Chong-Hui;Choe, Tok-Son;Park, Yong-Woon
    • Journal of the Korea Institute of Military Science and Technology
    • /
    • v.14 no.6
    • /
    • pp.993-1001
    • /
    • 2011
  • Unmanned ground vehicles for facility protection mostly rely on a terrain model for autonomous navigation. However, building a terrain model from several sensors is highly time consuming and sometimes inefficient for on-road driving, so estimating the drivable path from road features is required for high-speed autonomous navigation on roads. This paper proposes a drivable-path estimation method using a camera and a 2D laser rangefinder. First, a vanishing point is estimated from the CCD camera's image data. Second, the road width is estimated from the 2D laser rangefinder's range data. Finally, the drivable path is estimated by fusing the vanishing point and the road width. The proposed method is tested on both a well-structured road and an unpaved, cross-country style road; a minimal vanishing-point estimation sketch follows below.
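
A minimal sketch of the camera half of the method: estimating a vanishing point as the median intersection of Hough line segments. The edge and Hough thresholds are assumed, and the laser-based road-width estimation and the fusion step are not shown.

```python
# Minimal vanishing-point estimation: intersect Hough line segments and take
# the median crossing. Thresholds and the input image name are assumptions.
import cv2
import numpy as np

def line_params(x1, y1, x2, y2):
    """Return (a, b, c) of the line a*x + b*y = c through two points."""
    a, b = y2 - y1, x1 - x2
    return a, b, a * x1 + b * y1

def vanishing_point(image_bgr):
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
    edges = cv2.Canny(gray, 50, 150)
    segs = cv2.HoughLinesP(edges, 1, np.pi / 180, threshold=80,
                           minLineLength=40, maxLineGap=10)
    if segs is None or len(segs) < 2:
        return None
    pts = []
    lines = [line_params(*s[0]) for s in segs]
    for i in range(len(lines)):
        for j in range(i + 1, len(lines)):
            A = np.array([lines[i][:2], lines[j][:2]], dtype=float)
            c = np.array([lines[i][2], lines[j][2]], dtype=float)
            if abs(np.linalg.det(A)) < 1e-6:
                continue                        # near-parallel segments
            pts.append(np.linalg.solve(A, c))
    if not pts:
        return None
    return np.median(np.array(pts), axis=0)     # robust to outlier crossings

if __name__ == "__main__":
    img = cv2.imread("road.jpg")                # hypothetical road image
    print("vanishing point (x, y):", vanishing_point(img))
```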

Mobile Robot Navigation using Data Fusion Based on Camera and Ultrasonic Sensors Algorithm (카메라와 초음파센서 융합에 의한이동로봇의 주행 알고리즘)

  • Jang, Gi-Dong;Park, Sang-Keon;Han, Sung-Min;Lee, Kang-Woong
    • Journal of Advanced Navigation Technology
    • /
    • v.15 no.5
    • /
    • pp.696-704
    • /
    • 2011
  • In this paper, we propose a mobile robot navigation algorithm based on data fusion of a monocular camera and ultrasonic sensors. Threshold values for binarizing the image are generated by fuzzy inference over the image data and the ultrasonic sensor readings; varying the threshold improves obstacle detection under poor illumination as the robot moves toward its goal. Obstacles detected by fusing the camera and ultrasonic data are placed on a grid map and avoided with a circular planning algorithm. The method's performance is evaluated in experiments on a Pioneer 2-DX mobile robot in a dimly lit indoor room and a narrow corridor; a minimal sketch of the fuzzy threshold selection follows below.
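
A minimal sketch of the fuzzy threshold step, under the assumption that the binarization threshold is shifted from the mean image brightness according to how "near" the closest ultrasonic reading is. The membership shape and offset range are illustrative, not the paper's tuned rules.

```python
# Minimal sketch of fuzzy threshold selection from camera brightness and
# ultrasonic range data. Membership shape and offsets are assumed values.
import cv2
import numpy as np

def membership_near(distance_m, full_at=0.3, zero_at=1.5):
    """Degree to which the nearest sonar reading counts as 'near'."""
    return float(np.clip((zero_at - distance_m) / (zero_at - full_at), 0.0, 1.0))

def fuzzy_threshold(gray, sonar_ranges_m):
    near = membership_near(min(sonar_ranges_m))
    base = float(gray.mean())                  # dark room -> low base threshold
    # Assumed rule: the nearer the obstacle, the more aggressive the threshold,
    # so weakly contrasted obstacles still show up in the binary image.
    return base + 30.0 * near - 15.0

def detect_obstacle_mask(frame_bgr, sonar_ranges_m):
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    thr = fuzzy_threshold(gray, sonar_ranges_m)
    _, mask = cv2.threshold(gray, thr, 255, cv2.THRESH_BINARY)
    return mask, thr

if __name__ == "__main__":
    frame = (np.random.rand(240, 320, 3) * 255).astype(np.uint8)  # stand-in frame
    mask, thr = detect_obstacle_mask(frame, sonar_ranges_m=[0.6, 1.2, 2.0])
    print("chosen threshold:", thr)
```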