• Title/Summary/Keyword: Vision-Based Navigation


A Conversational Interactive Tactile Map for the Visually Impaired (시각장애인의 길 탐색을 위한 대화형 인터랙티브 촉각 지도 개발)

  • Lee, Yerin;Lee, Dongmyeong;Quero, Luis Cavazos;Bartolome, Jorge Iranzo;Cho, Jundong;Lee, Sangwon
    • Science of Emotion and Sensibility, v.23 no.1, pp.29-40, 2020
  • Visually impaired people use tactile maps to get spatial information about their surrounding environment, find their way, and improve their independent mobility. However, classical tactile maps that use braille to describe locations on the map have several limitations, such as the small amount of information they can carry due to space constraints and their limited feedback possibilities. This study describes the development of a new multi-modal interactive tactile map interface that addresses these challenges to improve the usability and independence of visually impaired users. The interface adds touch gesture recognition to the surface of the tactile map and lets users interact verbally with a voice agent to receive feedback and information about navigation routes and points of interest. A low-cost prototype was developed and evaluated in usability tests, with a survey and interviews given to blind participants after they used the prototype. The results show that the interactive tactile map prototype provides better usability than traditional braille-only tactile maps. Participants reported that it was easier to find the starting point and the points of interest they wished to navigate to with the prototype, and it improved self-reported independence and confidence compared with traditional tactile maps. Future work includes further development of the mobility solution based on the feedback received and an extensive quantitative study.
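
As an illustration, here is a minimal sketch of how such a multi-modal interface could map touch gestures and spoken queries to audio feedback. It is not the authors' implementation: the point-of-interest data, the coordinates, and the handler names (nearest_poi, on_double_tap, on_voice_query) are hypothetical, and responses are printed where the prototype would send them to a text-to-speech engine.

    # Minimal sketch (assumed design, not the paper's code): map touch gestures on a
    # tactile map surface to spoken feedback about points of interest and routes.
    from dataclasses import dataclass

    @dataclass
    class PointOfInterest:
        name: str
        x: float          # normalized touch-surface coordinates (0..1), hypothetical
        y: float
        description: str

    # Hypothetical map content; a real prototype would load this from the map data.
    POIS = [
        PointOfInterest("Entrance", 0.10, 0.90, "Main entrance, the start of all routes."),
        PointOfInterest("Elevator", 0.55, 0.40, "Elevator, 20 meters ahead on the right."),
        PointOfInterest("Restroom", 0.80, 0.20, "Restroom, at the end of the corridor."),
    ]

    def nearest_poi(x, y, radius=0.08):
        """Return the point of interest closest to the touch, if within the radius."""
        best = min(POIS, key=lambda p: (p.x - x) ** 2 + (p.y - y) ** 2)
        return best if (best.x - x) ** 2 + (best.y - y) ** 2 <= radius ** 2 else None

    def on_double_tap(x, y):
        """Touch gesture handler: a double tap asks 'what is here?'."""
        poi = nearest_poi(x, y)
        return poi.description if poi else "No point of interest here."

    def on_voice_query(query):
        """Tiny stand-in for the voice agent: answer 'where is <name>' questions."""
        for poi in POIS:
            if poi.name.lower() in query.lower():
                return poi.description
        return "Sorry, I could not find that place on the map."

    if __name__ == "__main__":
        print(on_double_tap(0.56, 0.41))                 # -> elevator description
        print(on_voice_query("Where is the restroom?"))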

A Framework of Recognition and Tracking for Underwater Objects based on Sonar Images : Part 2. Design and Implementation of Realtime Framework using Probabilistic Candidate Selection (소나 영상 기반의 수중 물체 인식과 추종을 위한 구조 : Part 2. 확률적 후보 선택을 통한 실시간 프레임워크의 설계 및 구현)

  • Lee, Yeongjun;Kim, Tae Gyun;Lee, Jihong;Choi, Hyun-Taek
    • Journal of the Institute of Electronics and Information Engineers, v.51 no.3, pp.164-173, 2014
  • In underwater robotics, vision is a key element for recognizing the environment. However, due to turbidity, an underwater optical camera is rarely usable. An imaging sonar, used as an alternative, delivers low-quality images that are neither stable nor accurate enough for natural objects to be found by image processing. For this reason, artificial landmarks based on the characteristics of ultrasonic waves, together with a method for recognizing them via a shape-matrix transformation, were proposed and validated in Part 1. However, that method does not work properly on undulating and dynamically noisy sea bottoms. To solve this, we propose a framework consisting of a likelihood-candidate selection phase, a final-candidate selection phase, a recognition phase, and a tracking phase over image sequences; a particle-filter-based selection mechanism that eliminates false candidates and a mean-shift-based tracking algorithm are also proposed. All four phases run in parallel and in real time, and the framework makes it easy to add or modify internal algorithms. A pool test and a sea trial were carried out to verify the performance, and the experimental results are analyzed in detail. Information obtained from the tracking phase, such as relative distance and bearing, is expected to be used for the control and navigation of underwater robots.
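
Since the abstract names the key mechanism (particle-filter-based candidate selection) without detail, the sketch below shows, under assumed noise models and parameter values, how a particle filter can gate per-frame sonar detections so that clutter far from the predicted landmark distribution is rejected as a fake candidate. It is illustrative only and uses a simple random-walk motion model rather than whatever motion model the framework actually employs.

    # Minimal sketch (assumed, not the authors' implementation) of particle-filter based
    # candidate selection: per-frame detections from sonar images are scored against a
    # particle set, and detections far from the predicted distribution are dropped as fakes.
    import numpy as np

    rng = np.random.default_rng(0)

    N_PARTICLES = 500
    MOTION_STD = 2.0      # px per frame, assumed process noise
    MEAS_STD = 4.0        # px, assumed measurement noise

    def init_particles(first_detection):
        return np.tile(first_detection, (N_PARTICLES, 1)) + rng.normal(0, MEAS_STD, (N_PARTICLES, 2))

    def predict(particles):
        # Random-walk motion model; a real framework could use vehicle odometry here.
        return particles + rng.normal(0, MOTION_STD, particles.shape)

    def likelihood(particles, detection):
        d2 = np.sum((particles - detection) ** 2, axis=1)
        return np.exp(-0.5 * d2 / MEAS_STD ** 2)

    def select_candidate(particles, detections, min_support=0.1):
        """Keep the detection best supported by the particles; reject the rest as fakes."""
        scores = [likelihood(particles, d).mean() for d in detections]
        best = int(np.argmax(scores))
        return detections[best] if scores[best] > min_support else None

    def update(particles, detection):
        w = likelihood(particles, detection)
        w /= w.sum()
        idx = rng.choice(len(particles), size=len(particles), p=w)   # resampling
        return particles[idx]

    if __name__ == "__main__":
        true_pos = np.array([100.0, 80.0])
        particles = init_particles(true_pos + rng.normal(0, MEAS_STD, 2))
        for _ in range(20):
            true_pos += np.array([1.0, 0.5])                         # landmark drifts in the image
            detections = [true_pos + rng.normal(0, MEAS_STD, 2),     # true landmark
                          rng.uniform(0, 200, 2)]                    # clutter / fake candidate
            particles = predict(particles)
            cand = select_candidate(particles, detections)
            if cand is not None:
                particles = update(particles, cand)
        print("estimated position:", particles.mean(axis=0), "true:", true_pos)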

Design and Implementation of a Hardware Accelerator for Marine Object Detection based on a Binary Segmentation Algorithm for Ship Safety Navigation (선박안전 운항을 위한 이진 분할 알고리즘 기반 해상 객체 검출 하드웨어 가속기 설계 및 구현)

  • Lee, Hyo-Chan;Song, Hyun-hak;Lee, Sung-ju;Jeon, Ho-seok;Kim, Hyo-Sung;Im, Tae-ho
    • Journal of the Korea Institute of Information and Communication Engineering, v.24 no.10, pp.1331-1340, 2020
  • Maritime object detection means automatically detecting, with a computer and as accurately as human eyes, floating objects that pose a risk of colliding with the ship. In conventional ships, the presence and distance of objects are determined with radar, but radar cannot identify their shape or type. In contrast, with the development of AI, cameras can accurately identify obstacles on the sea route, with excellent performance in detecting and recognizing objects. Analyzing digital images requires computing over a large volume of pixels; however, a CPU is specialized for sequential processing, so the processing speed is very slow and smooth service support cannot be guaranteed. Accordingly, this study developed maritime object detection software based on a binary segmentation algorithm and implemented it on an FPGA to accelerate the large-scale computations. The system implementation was further improved through the embedded board and FPGA interface, achieving 30 times faster performance than the existing algorithm and a three times faster overall system.
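
For reference, a software-level sketch of binary segmentation for marine object detection is shown below; the per-pixel thresholding and connected-component labeling are exactly the kind of data-parallel work an FPGA accelerator would offload. The thresholding rule (mean plus k standard deviations), the minimum-area filter, and the use of scipy.ndimage for labeling are assumptions for illustration, not the paper's algorithm.

    # Minimal software sketch (assumed, not the paper's exact algorithm) of binary
    # segmentation for marine object detection: pixels that deviate strongly from the
    # sea background are marked, and connected regions become object candidates.
    import numpy as np
    from scipy import ndimage

    def segment_objects(gray, k=3.0, min_area=20):
        """Return bounding boxes (row0, row1, col0, col1) of bright anomalies."""
        mean, std = gray.mean(), gray.std()
        mask = gray > mean + k * std             # binary segmentation against sea background
        labels, n = ndimage.label(mask)          # connected-component labeling
        boxes = []
        for i, sl in enumerate(ndimage.find_objects(labels), start=1):
            if sl is None:
                continue
            if (labels[sl] == i).sum() >= min_area:   # drop tiny speckle (waves, noise)
                boxes.append((sl[0].start, sl[0].stop, sl[1].start, sl[1].stop))
        return boxes

    if __name__ == "__main__":
        # Synthetic frame: dark sea with one bright floating object.
        rng = np.random.default_rng(1)
        frame = rng.normal(50, 5, (240, 320))
        frame[100:130, 200:240] += 100           # the "object"
        print(segment_objects(frame))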