• Title/Abstract/Keywords: Multi-vision sensors

Search results: 58

Map-Building and Position Estimation based on Multi-Sensor Fusion for Mobile Robot Navigation in an Unknown Environment (이동로봇의 자율주행을 위한 다중센서융합기반의 지도작성 및 위치추정)

  • Jin, Tae-Seok;Lee, Min-Jung;Lee, Jang-Myung
    • Journal of Institute of Control, Robotics and Systems
    • /
    • v.13 no.5
    • /
    • pp.434-443
    • /
    • 2007
  • Presently, the exploration of an unknown environment is an important task for the new generation of mobile service robots, which are navigated by a number of methods using sensing systems such as sonar or vision. To fully utilize the strengths of both the sonar and visual sensing systems, this paper presents a technique for localization of a mobile robot using fused data from multiple ultrasonic sensors and a vision system. The mobile robot is designed to operate in a well-structured environment that can be represented by planes, edges, corners, and cylinders as structural features. In the case of ultrasonic sensors, these features carry range information in the form of circular arcs, generally called RCDs (Regions of Constant Depth). Localization is the continual provision of knowledge of position, deduced from the robot's a priori position estimate. The environment of the robot is modeled as a two-dimensional grid map. We define a vision-based environment recognition method and a physically-based sonar sensor model, and employ an extended Kalman filter to estimate the position of the robot. The performance and simplicity of the approach are demonstrated with results from sets of experiments using a mobile robot.
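As a toy illustration of the EKF position update this abstract describes, the scalar sketch below predicts a robot's 1-D position from odometry and corrects it with a sonar range to a wall at a known map position. The model, noise values, and measurements are illustrative assumptions, not the paper's actual formulation:

```python
# Minimal scalar EKF sketch: predict position from odometry, then
# correct it with a sonar range to a wall stored in the grid map.

def ekf_step(x, P, u, z, wall, Q=0.02, R=0.05):
    """One predict/update cycle for a 1-D robot position estimate.
    x, P : prior position estimate and its variance
    u    : odometry displacement since the last step
    z    : measured sonar range to the wall
    wall : wall position taken from the map
    """
    # Predict: apply the odometry reading, inflate uncertainty.
    x_pred = x + u
    P_pred = P + Q
    # Measurement model: expected range h(x) = wall - x, Jacobian H = -1.
    H = -1.0
    innovation = z - (wall - x_pred)
    S = H * P_pred * H + R          # innovation variance
    K = P_pred * H / S              # Kalman gain
    x_new = x_pred + K * innovation
    P_new = (1.0 - K * H) * P_pred
    return x_new, P_new

x, P = 0.0, 1.0                     # uncertain start
# Robot moves ~1 m per step toward a wall at 10 m (ranges are noisy).
for u, z in [(1.0, 9.02), (1.0, 7.97), (1.0, 7.01)]:
    x, P = ekf_step(x, P, u, z, wall=10.0)
print(round(x, 2), round(P, 3))
```

Note how the variance shrinks after each sonar correction, which is exactly the benefit of fusing range data into the dead-reckoned estimate.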

A Practical Solution toward SLAM in Indoor environment Based on Visual Objects and Robust Sonar Features (가정환경을 위한 실용적인 SLAM 기법 개발 : 비전 센서와 초음파 센서의 통합)

  • Ahn, Sung-Hwan;Choi, Jin-Woo;Choi, Min-Yong;Chung, Wan-Kyun
    • The Journal of Korea Robotics Society
    • /
    • v.1 no.1
    • /
    • pp.25-35
    • /
    • 2006
  • Improving the practicality of SLAM requires various sensors to be fused effectively in order to cope with the uncertainty induced by both the environment and the sensors. In this case, combining sonar and vision sensors offers the advantages of economical efficiency and complementary cooperation. In particular, it can remedy the false data association and divergence problems of sonar sensors, and overcome the low-frequency SLAM updates caused by the computational burden and sensitivity to illumination changes of vision sensors. In this paper, we propose a SLAM method that joins sonar sensors and a stereo camera. It consists of two schemes: extracting robust point and line features from sonar data, and recognizing planar visual objects from a pre-constructed object database using a multi-scale Harris corner detector and its SIFT descriptor. Fusing the sonar features and visual objects through EKF-SLAM gives correct data association via object recognition and high-frequency updates via the sonar features. As a result, it increases the robustness and accuracy of SLAM in indoor environments. The performance of the proposed algorithm was verified by experiments in a home-like environment.
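The object-recognition step, nearest-neighbour descriptor matching with a ratio test as commonly used with SIFT, can be sketched as follows. The 3-D descriptors and database entries below are made-up stand-ins for real 128-D SIFT descriptors:

```python
import math

# Hypothetical sketch: match an observed descriptor against a
# pre-constructed object database; accept only if the best match is
# clearly better than the second best (Lowe's ratio test).

def recognize(desc, database, ratio=0.8):
    """Return the best-matching object name, or None if ambiguous."""
    scored = sorted((math.dist(desc, ref), name) for name, ref in database.items())
    (d1, name1), (d2, _) = scored[0], scored[1]
    return name1 if d1 < ratio * d2 else None

db = {
    "door":  [0.1, 0.9, 0.3],
    "frame": [0.8, 0.2, 0.5],
    "clock": [0.4, 0.4, 0.9],
}
observed = [0.12, 0.88, 0.31]
print(recognize(observed, db))  # confidently matches one object
```

Rejecting ambiguous matches is what gives the EKF-SLAM back end the "correct data association" the abstract emphasizes.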


Development and application of a vision-based displacement measurement system for structural health monitoring of civil structures

  • Lee, Jong Jae;Fukuda, Yoshio;Shinozuka, Masanobu;Cho, Soojin;Yun, Chung-Bang
    • Smart Structures and Systems
    • /
    • v.3 no.3
    • /
    • pp.373-384
    • /
    • 2007
  • For structural health monitoring (SHM) of civil infrastructure, displacement is a good descriptor of structural behavior under all potential disturbances. However, it is not easy to measure the displacement of civil infrastructure, since conventional sensors need a reference point, and access to a reference point is sometimes precluded by geographic conditions, such as a highway or river under a bridge, which makes installation of measuring devices time-consuming and costly, if not impossible. To resolve this issue, a vision-based real-time displacement measurement system using digital image processing techniques is developed. The effectiveness of the proposed system was verified by comparing the load-carrying capacities of a steel plate-girder bridge obtained from a conventional sensor and from the present system. Further, to measure multiple points simultaneously, a synchronized vision-based system is developed using a master/slave architecture with wireless data communication. For verification, the displacement measured by the synchronized vision-based system was compared with data measured by conventional contact-type sensors, linear variable differential transformers (LVDTs), in a laboratory test.
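A toy version of the underlying idea, tracking a target pattern between frames by correlation and converting the pixel shift to displacement with a calibrated scale, might look like this. The 1-D signals and scale factor are invented for illustration and are not the authors' implementation:

```python
# Hypothetical sketch: find where a target pattern sits in each frame
# by maximizing correlation, then scale the pixel shift to millimetres.

def best_offset(pattern, signal):
    """Return the offset where `pattern` correlates best with `signal`."""
    scores = [
        sum(p * signal[off + i] for i, p in enumerate(pattern))
        for off in range(len(signal) - len(pattern) + 1)
    ]
    return scores.index(max(scores))

frame0 = [0, 1, 3, 1, 0, 0, 0, 0]   # target near the left in frame 0
frame1 = [0, 0, 0, 0, 1, 3, 1, 0]   # target has moved right in frame 1
pattern = frame0[0:5]                # window around the target
mm_per_pixel = 0.8                   # from a calibration target of known size

shift_px = best_offset(pattern, frame1) - best_offset(pattern, frame0)
print(round(shift_px * mm_per_pixel, 1))
```

A real system would do the same search in 2-D with sub-pixel interpolation, but the pixel-shift-times-scale conversion is the core of the measurement.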

Wireless Sensors Module for Remote Room Environment Monitoring

  • Lee, Dae-Seok;Chung, Wan-Young
    • Institute of Control, Robotics and Systems Conference Proceedings
    • /
    • 2005.06a
    • /
    • pp.449-452
    • /
    • 2005
  • For a home networking system with air quality monitoring, a wireless sensor module carrying several air quality sensors was developed for indoor environment monitoring. The module can be extended with various kinds of sensors, such as humidity, temperature, CO2, and airborne dust sensors. The developed wireless module is convenient to install on the wall of a room or office, and the sensors in the module can be easily replaced thanks to the well-designed module structure and RF connection method. To reduce system cost, only one RF transmission block is used, with the sensors' signals transmitted to an 8051 microcontroller board in a time-sharing manner. In this home networking system, various indoor environmental parameters can be monitored in real time from the RF wireless sensor module. Indoor video is transferred to a client PC or PDA from a surveillance camera installed indoors at a desired site. A web server with an Oracle database is used to store the web-camera video and the various data from the wireless sensor module.
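The time-sharing scheme, one RF transmitter serving several sensors in round-robin slots, can be sketched as below; the sensor names and readings are hypothetical:

```python
import itertools

# Hypothetical sketch: a single transmit block cycles through the
# sensors, emitting one (sensor, reading) frame per time slot.

sensor_read = {
    "temperature": lambda: 23.5,
    "humidity":    lambda: 41.0,
    "co2":         lambda: 612.0,
}

def frames(n_slots):
    """Yield (sensor name, value) frames in round-robin slot order."""
    for name in itertools.islice(itertools.cycle(sensor_read), n_slots):
        yield name, sensor_read[name]()

print(list(frames(4)))
```

One transmitter shared this way trades update rate per sensor for hardware cost, which matches the cost-reduction goal stated above.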


Asynchronous Sensor Fusion using Multi-rate Kalman Filter (다중주기 칼만 필터를 이용한 비동기 센서 융합)

  • Son, Young Seop;Kim, Wonhee;Lee, Seung-Hi;Chung, Chung Choo
    • The Transactions of The Korean Institute of Electrical Engineers
    • /
    • v.63 no.11
    • /
    • pp.1551-1558
    • /
    • 2014
  • We propose a multi-rate sensor fusion of vision and radar using a Kalman filter to solve the problems of asynchronous, multi-rate sampling periods in object vehicle tracking. Model-based prediction of the object vehicles is performed with a decentralized multi-rate Kalman filter for each sensor (vision and radar). To improve the position prediction, a different weighting is applied to each sensor's predicted object position from the multi-rate Kalman filter. The proposed method can provide the estimated position of the object vehicles at every ECU sampling time. The Mahalanobis distance is used to make correspondences between the measured and predicted objects. Through experimental results, we validate that the post-processed fusion data give improved tracking performance. The proposed method obtained a twofold improvement in object tracking performance compared to the single-sensor method (camera or radar) in terms of root mean square error.
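The Mahalanobis-distance association step can be sketched as follows; the diagonal innovation covariance, gate value, and coordinates are illustrative assumptions:

```python
import math

# Sketch of the data-association step: match each measurement to the
# predicted object with the smallest Mahalanobis distance, discarding
# anything outside the gate. A diagonal 2x2 covariance keeps it simple.

def mahalanobis(z, x_pred, var):
    """Distance between measurement z and prediction x_pred, both (x, y),
    with per-axis innovation variances var = (vx, vy)."""
    dx, dy = z[0] - x_pred[0], z[1] - x_pred[1]
    return math.sqrt(dx * dx / var[0] + dy * dy / var[1])

def associate(measurements, predictions, var=(0.5, 0.5), gate=3.0):
    """Return (measurement index, prediction index) pairs within the gate."""
    pairs = []
    for i, z in enumerate(measurements):
        d_best, j_best = gate, None
        for j, x in enumerate(predictions):
            d = mahalanobis(z, x, var)
            if d < d_best:
                d_best, j_best = d, j
        if j_best is not None:
            pairs.append((i, j_best))
    return pairs

preds = [(10.0, 2.0), (25.0, -1.0)]
meas = [(24.6, -0.8), (10.3, 2.2), (60.0, 0.0)]  # last one is clutter
print(associate(meas, preds))
```

Scaling each axis by its innovation variance is what lets the gate adapt to how uncertain each prediction is, unlike a plain Euclidean threshold.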

Accurate Range-free Localization Based on Quantum Particle Swarm Optimization in Heterogeneous Wireless Sensor Networks

  • Wu, Wenlan;Wen, Xianbin;Xu, Haixia;Yuan, Liming;Meng, Qingxia
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.12 no.3
    • /
    • pp.1083-1097
    • /
    • 2018
  • This paper presents a novel range-free localization algorithm based on quantum particle swarm optimization. The proposed algorithm is capable of estimating the distance between two non-neighboring sensors in multi-hop heterogeneous wireless sensor networks where the nodes' communication ranges differ. First, we construct a new cumulative distribution function of the expected hop progress for sensor nodes with different transmission capabilities. Then, the distance between any two nodes can be computed accurately and effectively by deriving the mathematical expectation of the cumulative distribution function. Finally, a quantum particle swarm optimization algorithm is used to improve the positioning accuracy. Simulation results show that the proposed algorithm achieves superior localization accuracy and efficiency for both random and uniform placement of nodes in heterogeneous wireless sensor networks.
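A minimal quantum-behaved PSO (QPSO) sketch of the final refinement step, assuming anchor positions and estimated anchor distances are already available. For illustration the exact distances are used in place of hop-based estimates, and the network layout and QPSO parameters are invented, not the paper's:

```python
import math
import random

# Estimate an unknown node's 2-D position by minimizing the squared
# error between candidate-to-anchor distances and the estimated ones,
# using quantum-behaved PSO position updates.

random.seed(0)
anchors = [(0.0, 0.0), (10.0, 0.0), (0.0, 10.0)]
true_pos = (4.0, 6.0)
dists = [math.dist(true_pos, a) for a in anchors]  # hop-estimated in the paper

def cost(p):
    return sum((math.dist(p, a) - d) ** 2 for a, d in zip(anchors, dists))

n, iters, beta = 20, 60, 0.75
swarm = [(random.uniform(0, 10), random.uniform(0, 10)) for _ in range(n)]
pbest = list(swarm)
gbest = min(pbest, key=cost)
for _ in range(iters):
    # Mean-best position over all personal bests (the "quantum" anchor).
    mbest = tuple(sum(p[k] for p in pbest) / n for k in range(2))
    for i, p in enumerate(swarm):
        new = []
        for k in range(2):
            phi = random.random()
            attractor = phi * pbest[i][k] + (1 - phi) * gbest[k]
            u = random.random()
            step = beta * abs(mbest[k] - p[k]) * math.log(1 / u)
            new.append(attractor + step if random.random() < 0.5 else attractor - step)
        swarm[i] = tuple(new)
        if cost(swarm[i]) < cost(pbest[i]):
            pbest[i] = swarm[i]
    gbest = min(pbest, key=cost)
print(tuple(round(v, 2) for v in gbest))
```

With noisy hop-based distance estimates the cost surface is rougher, which is where QPSO's global search pays off over a gradient method.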

Study on Vision based Object Detection Algorithm for Passenger's Safety in Railway Station (철도 승강장 승객안전을 위한 비전기반 물체 검지 알고리즘 연구)

  • Oh, Seh-Chan;Park, Sung-Hyuk;Jeong, Woo-Tae
    • Proceedings of the KSR Conference
    • /
    • 2008.06a
    • /
    • pp.553-558
    • /
    • 2008
  • Advances in information technology have enabled the application of vision sensors, such as CCTV, to railways. CCTV has been widely used in railway applications; however, CCTV is a passive system that provides only a limited capability to maintain safety on the boarding platform. Station employees must continuously watch the CCTV monitors, so immediate recognition of and response to an emergency situation is difficult. Recently, urban transit operators have been pursuing unattended station operation for cost reduction. Therefore, an intelligent monitoring system is needed for passenger safety in railways. This paper proposes a vision-based monitoring system and an object detection algorithm for passenger safety on railway platforms. The proposed system automatically detects accidents on the platform and analyzes the level of danger using image processing technology. The system uses stereo vision with multiple sensors to minimize detection errors under various railway platform conditions.
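The stereo ranging behind the danger analysis reduces to depth from disparity, Z = f·B/d. A toy sketch with invented focal length, baseline, thresholds, and platform-edge geometry:

```python
# Illustrative sketch of the stereo principle behind the detection
# system: recover depth from disparity, then grade the danger by how
# close a detected object is to the platform edge.

def depth_from_disparity(f_px, baseline_m, disparity_px):
    """Depth Z = f * B / d for matched pixels d apart between cameras."""
    return f_px * baseline_m / disparity_px

def danger_level(distance_to_edge_m):
    """Hypothetical danger grading relative to the platform edge."""
    if distance_to_edge_m < 0.0:
        return "alarm"      # object beyond the edge, over the track
    if distance_to_edge_m < 0.5:
        return "warning"
    return "safe"

z = depth_from_disparity(f_px=700.0, baseline_m=0.12, disparity_px=42.0)
edge_m = 2.0                 # assumed camera-to-edge distance
print(round(z, 2), danger_level(z - edge_m))
```

Because depth falls out of disparity alone, the system needs no reference markers on the platform, only a calibrated camera pair.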


A Study on the Development of Multi-User Virtual Reality Moving Platform Based on Hybrid Sensing (하이브리드 센싱 기반 다중참여형 가상현실 이동 플랫폼 개발에 관한 연구)

  • Jang, Yong Hun;Chang, Min Hyuk;Jung, Ha Hyoung
    • Journal of Korea Multimedia Society
    • /
    • v.24 no.3
    • /
    • pp.355-372
    • /
    • 2021
  • Recently, high-performance HMDs (head-mounted displays) are becoming wireless thanks to the growth of virtual reality technology. Accordingly, environmental constraints on hardware usage are reduced, enabling multiple users to experience virtual reality within a single space simultaneously. Existing multi-user virtual reality platforms use location tracking and motion sensing based on vision sensors and active markers. However, immersion decreases because of overlapping markers and frequent matching errors caused by reflected light. The goal of this study is to develop a multi-user virtual reality moving platform for a single space that resolves such sensing errors and the loss of user immersion. To achieve this goal, a hybrid sensing technology was developed that converges vision-sensor position tracking, IMU (inertial measurement unit) motion capture, and gesture recognition based on smart gloves. In addition, an integrated safety operation system was developed that ensures user safety and supports multimodal feedback without decreasing immersion. A 6 m × 6 m × 2.4 m test bed was configured to verify the effectiveness of the multi-user virtual reality moving platform with four users.
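One simple way to realize such vision/IMU convergence is a complementary filter: high-rate IMU dead reckoning smoothed against the drift-free, lower-rate vision fix. The gain and readings below are illustrative, not the paper's parameters:

```python
# Sketch of hybrid sensing along one axis: integrate IMU steps at a
# high rate, then blend in the vision position fix whenever it arrives.

def complementary(x_imu, x_vision, alpha=0.9):
    """Blend the dead-reckoned and vision positions; alpha favours the
    smooth IMU estimate while vision corrects long-term drift."""
    return alpha * x_imu + (1 - alpha) * x_vision

x = 0.0
# (IMU displacement since last fix, vision position fix) per cycle.
for imu_step, vision_fix in [(0.10, 0.11), (0.10, 0.21), (0.10, 0.30)]:
    x = complementary(x + imu_step, vision_fix)
print(round(x, 3))
```

The blend keeps motion smooth between vision updates, which is what lets marker dropouts or reflections cause glitches instead of outright tracking loss.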

Bayesian Sensor Fusion of Monocular Vision and Laser Structured Light Sensor for Robust Localization of a Mobile Robot (이동 로봇의 강인 위치 추정을 위한 단안 비젼 센서와 레이저 구조광 센서의 베이시안 센서융합)

  • Kim, Min-Young;Ahn, Sang-Tae;Cho, Hyung-Suck
    • Journal of Institute of Control, Robotics and Systems
    • /
    • v.16 no.4
    • /
    • pp.381-390
    • /
    • 2010
  • This paper describes a procedure for map-based localization of mobile robots using a sensor fusion technique in structured environments. Combining various sensors with different characteristics and limited sensing capability has advantages in terms of complementariness and cooperation, yielding better information about the environment. In this paper, for robust self-localization of a mobile robot with a monocular camera and a laser structured light sensor, environment information acquired from the two sensors is combined and fused by a Bayesian sensor fusion technique based on a probabilistic reliability function of each sensor, predefined through experiments. For self-localization using monocular vision, the robot utilizes image features consisting of vertical edge lines from the input camera images, which serve as natural landmark points. With the laser structured light sensor, it utilizes geometric features composed of corners and planes as natural landmark shapes, extracted from range data at a constant height above the navigation floor. Although each feature group alone is sometimes sufficient to localize the robot, all features from the two sensors are used and fused simultaneously for reliable localization under various environmental conditions. To verify the advantage of multi-sensor fusion, a series of experiments is performed, and the experimental results are discussed in detail.
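The reliability-weighted fusion can be illustrated with the Gaussian product rule: the fused estimate is the inverse-variance weighted mean of the two sensor estimates. The positions and variances below are made up for illustration:

```python
# Sketch of Bayesian fusion of two independent Gaussian position
# estimates (vision and laser structured light), each weighted by its
# reliability, i.e. the inverse of its experimentally measured variance.

def fuse(x1, var1, x2, var2):
    """Fuse two 1-D Gaussian estimates; the product of Gaussians gives
    an inverse-variance weighted mean and a reduced variance."""
    w1, w2 = 1.0 / var1, 1.0 / var2
    x = (w1 * x1 + w2 * x2) / (w1 + w2)
    return x, 1.0 / (w1 + w2)

# Vision says 2.0 m (less reliable), laser says 2.3 m (more reliable).
x, var = fuse(2.0, 0.09, 2.3, 0.03)
print(round(x, 3), round(var, 4))
```

The fused variance is smaller than either input variance, which is the formal sense in which the two sensors "cooperate" in the abstract above.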

Localization and Autonomous Control of PETASUS System II for Manipulation in Structured Environment (구조화된 수중 환경에서 작업을 위한 PETASUS 시스템 II의 위치 인식 및 자율 제어)

  • Han, Jonghui;Ok, Jinsung;Chung, Wan Kyun
    • The Journal of Korea Robotics Society
    • /
    • v.8 no.1
    • /
    • pp.37-42
    • /
    • 2013
  • In this paper, a localization algorithm and an autonomous controller are proposed for PETASUS system II, an underwater vehicle-manipulator system. To estimate its position and identify manipulation targets in a structured environment, a multi-rate extended Kalman filter is developed, using map information together with data from inertial, sonar, and vision sensors. In addition, a three-layered control structure is proposed as the controller for autonomy. With this controller, PETASUS system II is able to generate waypoints and make decisions on its own behaviors. Experimental results are provided to verify the proposed algorithms.
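A toy rendition of a three-layered controller, where a mission layer supplies waypoints, a behaviour layer selects an action, and an execution layer emits a velocity command; every value and rule here is a hypothetical stand-in for the paper's actual layers:

```python
import math

# Hypothetical sketch of a three-layered autonomy structure.

waypoints = [(2.0, 0.0), (2.0, 3.0)]    # mission layer: planned goals

def behaviour(pos, goal, tol=0.1):
    """Behaviour layer: hold position when close enough, else go to goal."""
    return "hold" if math.dist(pos, goal) < tol else "goto"

def execute(pos, goal, gain=0.5):
    """Execution layer: proportional velocity command toward the goal."""
    return tuple(gain * (g - p) for g, p in zip(goal, pos))

pos = (0.0, 0.0)
goal = waypoints[0]                      # mission layer picks the goal
act = behaviour(pos, goal)               # behaviour layer picks the action
cmd = execute(pos, goal) if act == "goto" else (0.0, 0.0)
print(act, cmd)
```

Separating the layers this way means the mission plan can change without touching the low-level control loop, the usual motivation for layered architectures.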