• Title/Abstract/Keyword: Multi-sensor Fusion

Search results: 201

다중센서 오차특성을 고려한 융합 알고리즘 (A Fusion Algorithm considering Error Characteristics of the Multi-Sensor)

  • 현대환;윤희병
    • 한국정보과학회논문지:시스템및이론 / Vol. 36, No. 4 / pp.274-282 / 2009
  • Various position-tracking sensors such as GPS, INS, radar, and optical equipment are used to track maneuvering objects, and an effective way of fusing these heterogeneous sensors is needed to maintain robust tracking performance. Previous work on improving tracking performance with heterogeneous multi-sensors fused the measurements by treating each sensor as a distinct model according to its error characteristics, but in intervals where one sensor's error grows sharply the error of the other sensors' estimates also increases, and changes in the sensor measurements are not reflected in real time in the Sensor Probability, the probability that each sensor's measurement is the true value. In this paper, the Sensor Probability is obtained by comparing the RMSE (Root Mean Square Error) between each sensor's Kalman-filter updated estimate and its measurements, and the step of feeding the combined estimate back into each sensor's Kalman filter is removed, so that sensor measurements are reflected in real time and tracking performance is improved in intervals where a sensor's performance degrades sharply. The proposed algorithm adds each sensor's error characteristics as a conditional probability and performs track fusion centered on the sensor showing the best performance according to its Sensor Probability, thereby guaranteeing robustness. In the experiments, a UAV maneuvering trajectory is generated, the proposed algorithm is applied, and its performance is analyzed against other fusion algorithms. (An illustrative sketch of the weighting idea follows below.)
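
A minimal sketch of the RMSE-based "Sensor Probability" weighting described above, assuming a sliding window of residuals per sensor; the function names, the inverse-RMSE weighting rule, and all numbers are illustrative assumptions rather than the paper's exact formulation.

```python
import numpy as np

def sensor_rmse(residuals):
    """RMSE of (measurement - updated estimate) residuals for one sensor.
    residuals: array of shape (N, dim); a smaller RMSE means the sensor is
    currently tracking well, so it should dominate the fused track."""
    return np.sqrt(np.mean(np.sum(residuals ** 2, axis=1)))

def fuse_tracks(estimates, residual_windows, eps=1e-9):
    """Weight each sensor's state estimate by a 'sensor probability' taken
    here (as an assumption) to be proportional to 1/RMSE, then normalize."""
    rmse = np.array([sensor_rmse(r) for r in residual_windows])
    prob = 1.0 / (rmse + eps)
    prob /= prob.sum()
    fused = np.einsum("s,sd->d", prob, np.stack(estimates))
    return fused, prob

# Hypothetical two-sensor example: the radar-like sensor has smaller residuals,
# so the fused track leans toward it.
est_gps, est_radar = np.array([10.2, 5.1]), np.array([10.0, 5.0])
res_gps = np.random.normal(0.0, 2.0, size=(20, 2))
res_radar = np.random.normal(0.0, 0.3, size=(20, 2))
fused, prob = fuse_tracks([est_gps, est_radar], [res_gps, res_radar])
print("sensor probabilities:", prob, "fused estimate:", fused)
```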

이동 로봇의 강인 위치 추정을 위한 단안 비젼 센서와 레이저 구조광 센서의 베이시안 센서융합 (Bayesian Sensor Fusion of Monocular Vision and Laser Structured Light Sensor for Robust Localization of a Mobile Robot)

  • 김민영;안상태;조형석
    • 제어로봇시스템학회논문지 / Vol. 16, No. 4 / pp.381-390 / 2010
  • This paper describes a procedure for map-based localization of mobile robots using a sensor fusion technique in structured environments. Combining various sensors with different characteristics and limited sensing capability is advantageous because the sensors complement and cooperate with one another to obtain better information about the environment. For robust self-localization of a mobile robot with a monocular camera and a laser structured light sensor, environment information acquired from the two sensors is combined and fused by a Bayesian sensor fusion technique based on a probabilistic reliability function of each sensor predefined through experiments. For self-localization with monocular vision, the robot uses image features consisting of vertical edge lines from input camera images as natural landmark points. With the laser structured light sensor, it uses geometrical features composed of corners and planes, extracted from range data at a constant height above the navigation floor, as natural landmark shapes. Although each feature group alone is sometimes sufficient to localize the robot, all features from the two sensors are used and fused simultaneously in terms of information for reliable localization under various environmental conditions. To verify the advantage of multi-sensor fusion, a series of experiments is performed, and the experimental results are discussed in detail. (A small sketch of the Bayesian fusion step is given below.)
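
A minimal illustrative sketch of the Bayesian fusion step, assuming a 1-D pose grid and Gaussian observation likelihoods whose widths stand in for the experimentally determined reliability of each sensor; all readings and sigmas below are hypothetical.

```python
import numpy as np

def likelihood(grid, reading, sigma):
    """Gaussian likelihood of each candidate pose given one sensor reading."""
    lik = np.exp(-0.5 * ((grid - reading) / sigma) ** 2)
    return lik / lik.sum()

poses = np.linspace(0.0, 10.0, 501)              # candidate robot positions [m]
prior = np.full_like(poses, 1.0 / poses.size)    # uniform prior over the map

# Hypothetical readings: the laser structured-light sensor is modeled as more
# reliable (smaller sigma) than the monocular vision landmarks.
lik_vision = likelihood(poses, reading=4.2, sigma=0.8)
lik_laser = likelihood(poses, reading=4.0, sigma=0.3)

posterior = prior * lik_vision * lik_laser       # Bayes rule, independent sensors
posterior /= posterior.sum()
print("fused pose estimate: %.2f m" % poses[np.argmax(posterior)])
```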

협업기반 상황인지를 위한 u-Surveillance 다중센서 스테이션 개발 (Development of Multi-Sensor Station for u-Surveillance to Collaboration-Based Context Awareness)

  • 유준혁;김희철
    • 제어로봇시스템학회논문지 / Vol. 18, No. 8 / pp.780-786 / 2012
  • Surveillance has become one of the promising application areas of wireless sensor networks, which allow pervasive monitoring of environmental phenomena of interest by facilitating context awareness through sensor fusion. Existing systems that depend on a postmortem context analysis of sensor data on a centralized server expose several shortcomings, including a single point of failure, wasteful energy consumption due to unnecessary data transfer, and a lack of scalability. In contrast, this paper proposes energy-efficient, distributed, context-aware surveillance in which sensor nodes in the wireless sensor network collaborate with their neighbors in a distributed manner to analyze and recognize the surrounding context. We design and implement multi-modal sensor stations for use as sensor nodes in our wireless sensor network implementing this distributed context awareness. This paper presents initial experimental performance results for the proposed system. The results show that the multi-modal sensor performance of our sensor station, a key enabling factor for distributed context awareness, is comparable to that of each sensor operating independently. They also show that its initial context-awareness performance is satisfactory for a set of introductory surveillance scenarios in the current interim stage of our ongoing research.

다중 센서 및 다중 전술데이터링크 환경 하에서의 표적정보 처리 기법 (Multi Sources Track Management Method for Naval Combat Systems)

  • 이호철;김태수;신형조
    • 제어로봇시스템학회논문지 / Vol. 20, No. 2 / pp.126-131 / 2014
  • This paper is concerned with a track management method for a naval combat system that receives track information from multiple sensors and multiple tactical datalinks. Since track management, which processes track information from diverse sources, can be formulated as a data fusion problem, this paper deals with the data fusion architecture, track association, and track information determination algorithms for track management in naval combat systems. (A minimal sketch of one association step follows below.)
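
The abstract names track association as one building block; below is a minimal sketch of a common way to do it, a chi-square gate on the Mahalanobis distance between two tracks. The positions, covariances, and gate value are illustrative, not taken from the paper.

```python
import numpy as np

def same_target(x_a, P_a, x_b, P_b, gate=9.21):
    """Associate two tracks if their squared Mahalanobis distance falls inside
    a chi-square gate (9.21 is roughly the 99% point for 2 degrees of freedom)."""
    d = x_a - x_b
    S = P_a + P_b                      # combined uncertainty of the two tracks
    d2 = float(d @ np.linalg.solve(S, d))
    return d2 <= gate

# Hypothetical radar track vs. tactical-datalink track (x, y positions in m)
x_radar, P_radar = np.array([100.0, 200.0]), np.diag([25.0, 25.0])
x_link, P_link = np.array([103.0, 198.0]), np.diag([100.0, 100.0])
print(same_target(x_radar, P_radar, x_link, P_link))   # True -> treat as one target
```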

A study on aerial triangulation from multi-sensor imagery

  • Lee, Young-ran;Habib, Ayman;Kim, Kyung-Ok
    • 대한원격탐사학회:학술대회논문집 / 대한원격탐사학회 2002 Proceedings of the International Symposium on Remote Sensing / pp.400-406 / 2002
  • Recently, an enormous volume of remotely sensed data has been acquired by an ever-growing number of earth observation satellites. Combining imagery from diverse sources is an important requirement in many applications, such as data fusion, city modeling, and object recognition. Aerial triangulation is a procedure for reconstructing object space from imagery. However, since different kinds of imagery have their own sensor models, characteristics, and resolutions, previous approaches to aerial triangulation (or georeferencing) process each sensor model separately. This study evaluated the advantages of triangulating a large number of images from multiple sensors simultaneously. The incorporated sensors are frame, push-broom, and whisk-broom cameras. The limits and problems of push-broom or whisk-broom sensor models can be compensated for by combined triangulation with frame imagery, and vice versa. The object space reconstructed from multi-sensor triangulation is more accurate than that from a single model, as the experiments conducted in this study show. (A rough sketch of the joint-adjustment idea is given below.)
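
As a rough sketch of the "combined" idea (not the authors' adjustment model), the snippet below stacks hypothetical linearized observation equations from a frame camera and a push-broom sensor into one least-squares system, so that all sensors constrain the same object-space parameters simultaneously.

```python
import numpy as np

# Hypothetical linearized observation equations, A_i @ dx = b_i, one block per
# sensor type; the numbers are placeholders, not real calibration data.
A_frame = np.array([[1.0, 0.2, 0.0],
                    [0.0, 1.0, 0.1]])
b_frame = np.array([0.03, -0.01])
A_push = np.array([[0.9, 0.0, 0.3],
                   [0.1, 1.1, 0.0]])
b_push = np.array([0.02, 0.00])

# Combined triangulation: every sensor contributes rows to one joint solve,
# instead of adjusting each sensor model separately.
A = np.vstack([A_frame, A_push])
b = np.concatenate([b_frame, b_push])
dx, *_ = np.linalg.lstsq(A, b, rcond=None)
print("joint correction to object-space parameters:", dx)
```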


A Study on Aerial Triangulation from Multi-Sensor Imagery

  • Lee, Young-Ran;Habib, Ayman;Kim, Kyung-Ok
    • 대한원격탐사학회지 / Vol. 19, No. 3 / pp.255-261 / 2003
  • Recently, an enormous volume of remotely sensed data has been acquired by an ever-growing number of earth observation satellites. Combining imagery from diverse sources is an important requirement in many applications, such as data fusion, city modeling, and object recognition. Aerial triangulation is a procedure for reconstructing object space from imagery. However, since different kinds of imagery have their own sensor models, characteristics, and resolutions, previous approaches to aerial triangulation (or georeferencing) process each sensor model separately. This study evaluated the advantages of triangulating a large number of images from multiple sensors simultaneously. The incorporated sensors are frame, push-broom, and whisk-broom cameras. The limits and problems of push-broom or whisk-broom sensor models can be compensated for by combined triangulation with other sensors. The object space reconstructed from multi-sensor triangulation is more accurate than that from a single model, as the experiments conducted in this study show.

스마트팩토리 실현을 위한 다중센서기반 모바일로봇의 위치 및 자세제어에 관한 연구 (A Study on Orientation and Position Control of Mobile Robot Based on Multi-Sensors Fusion for Implementation of Smart FA)

  • 동근한;김희진;배호영;김상현;백영태;한성현
    • 한국산업융합학회 논문집 / Vol. 22, No. 2 / pp.209-218 / 2019
  • This study proposes a new approach to orientation and position control of a mobile robot system, based on obstacle avoidance through multi-sensor fusion and autonomous travelling control, for the implementation of smart factory automation. The focus is on controlling the mobile robot through a multiple-sensor module for autonomous travelling and obstacle avoidance; the module consists of sonar sensors, PSD sensors, color recognition sensors, and position recognition sensors. Two points are emphasized for real-time implementation of autonomous travelling control in constrained manufacturing environments. One is the development of a travelling trajectory control algorithm that is accurate and fast while taking into account constraints such as uncertain nonlinear dynamic effects. The other is the real-time implementation of obstacle avoidance and autonomous travelling control based on the multiple sensors. The reliability of this study is illustrated by computer simulations and by experiments on autonomous travelling control and obstacle avoidance.

다중 감지기 시스템 하에서의 입력 추정 필터 구현 (Input Estimation in Multi-Sensor Environment)

  • 박용환;황익호;윤장현;서진헌
    • 대한전기학회:학술대회논문집 / 대한전기학회 1995 Summer Conference Proceedings B / pp.699-701 / 1995
  • An input estimation technique is derived for a multi-sensor environment. The proposed approach distributes the computational burden of input estimation across the local sensors and the fusion center without loss of optimality. The performance of the proposed method in a two-sensor system is compared with that in a single-sensor system. Simulation results show that a reliable maneuvering-target tracking system can be constructed in a multi-sensor environment via the proposed approach. (A minimal sketch of the fusion-center step follows below.)
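
A minimal sketch of the fusion-center side under a common assumption (uncorrelated local estimation errors): each sensor runs its own estimator locally, and the center combines the local estimates with inverse-covariance weighting. This illustrates the distributed division of labor the abstract describes, not the paper's specific input-estimation filter.

```python
import numpy as np

def fuse_local_estimates(estimates, covariances):
    """Combine per-sensor estimates x_i with covariances P_i using
    information (inverse-covariance) weighting; assumes uncorrelated errors."""
    infos = [np.linalg.inv(P) for P in covariances]
    P_fused = np.linalg.inv(sum(infos))
    x_fused = P_fused @ sum(I @ x for I, x in zip(infos, estimates))
    return x_fused, P_fused

# Hypothetical two-sensor case for a 1-D position/velocity target state
x1, P1 = np.array([50.0, 3.0]), np.diag([4.0, 1.0])
x2, P2 = np.array([52.0, 2.5]), np.diag([1.0, 0.5])
x_f, P_f = fuse_local_estimates([x1, x2], [P1, P2])
print("fused state:", x_f, "fused variances:", np.diag(P_f))
```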


재실 감지 센서를 이용한 다용도 스마트 센서 개발 (Development of Multi-purpose Smart Sensor Using Presence Sensor)

  • 차주헌;용흥
    • 한국생산제조학회지 / Vol. 24, No. 1 / pp.103-109 / 2015
  • This paper introduces a multi-purpose smart fusion sensor. Normally, this type of sensor can contribute to energy savings specifically related to lighting and heating/air conditioning systems by detecting individuals in an office building. If a fire occurs, the sensor can provide information regarding the presence and location of residents in the building to a management center. The system consists of four sensors: a thermopile sensor for detecting heat energy, an ultrasonic sensor for measuring the distance of objects from the sensor, a fire detection sensor, and a passive infrared sensor for detecting temperature change. The system has a wireless communication module to provide the management center with control information for lighting and heating/air conditioning systems. We have also demonstrated the usefulness of the proposed system by applying it to a real environment.