• Title/Summary/Keyword: fusion of sensor information

Search Results: 410

Design and Manufacture of Fusion Sensor Mechanism (융합센서 기구 설계 및 제작)

  • Shin, Seong-Yoon;Cho, Gwang-Hyun;Cho, Seung-Pyo;Shin, Kwang-Seong
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference / 2022.10a / pp.418-419 / 2022
  • In this paper, the design of the fusion sensor mechanism was divided into a primary and a secondary stage, and the device design and the case design were separated to ensure sensor safety, sensor coupling, and the power connection.


Neural Network Approach to Sensor Fusion System for Improving the Recognition Performance of 3D Objects (3차원 물체의 인식 성능 향상을 위한 감각 융합 신경망 시스템)

  • Dong Sung Soo;Lee Chong Ho;Kim Ji Kyoung
    • The Transactions of the Korean Institute of Electrical Engineers D / v.54 no.3 / pp.156-165 / 2005
  • Human beings recognize the physical world by integrating a great variety of sensory inputs, the information acquired by their own actions, and their knowledge of the world, using a hierarchically parallel-distributed mechanism. In this paper, the authors propose a sensor fusion system that can recognize multiple 3D objects from 2D projection images and tactile information. The proposed system focuses on improving the recognition performance for 3D objects. Unlike conventional object recognition systems that use an image sensor alone, the proposed method uses tactile sensors in addition to the visual sensor. A neural network is used to fuse the two sensory signals. Tactile signals are obtained from the reaction force of the pressure sensors at the fingertips when unknown objects are grasped by a four-fingered robot hand. The experiment evaluates the recognition rate and the number of learning iterations for various objects. The merits of the proposed system are not only its high learning performance but also its reliability: with tactile information, various objects can be recognized even when the visual sensory signals have defects. The experimental results show that the proposed system can improve the recognition rate and reduce learning time. These results verify the effectiveness of the proposed sensor fusion system as a recognition scheme for 3D objects.
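The paper's neural-network fusion is not reproduced here, but the underlying intuition — that a clean tactile channel keeps recognition reliable when the visual channel is defective — can be sketched with a precision-weighted nearest-prototype classifier on synthetic data (all dimensions, noise levels, and the weighting rule are illustrative assumptions, not the authors' method):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical object classes, each with a visual and a tactile prototype
n_classes, n_vis, n_tac = 3, 8, 4
vis_proto = rng.normal(size=(n_classes, n_vis))
tac_proto = rng.normal(size=(n_classes, n_tac))

sigma_vis, sigma_tac = 2.5, 0.3   # vision heavily corrupted, touch clean

def sq_dist(x, protos):
    """Squared distance from each sample to each class prototype."""
    return ((x[:, None, :] - protos[None, :, :]) ** 2).sum(-1)

acc_vis = acc_fused = 0.0
n = 50
for cls in range(n_classes):
    v = vis_proto[cls] + sigma_vis * rng.normal(size=(n, n_vis))
    t = tac_proto[cls] + sigma_tac * rng.normal(size=(n, n_tac))
    d_vis = sq_dist(v, vis_proto)
    d_tac = sq_dist(t, tac_proto)
    # precision-weighted fusion: trust each modality by its inverse noise variance
    d_fused = d_vis / sigma_vis**2 + d_tac / sigma_tac**2
    acc_vis += (d_vis.argmin(1) == cls).mean() / n_classes
    acc_fused += (d_fused.argmin(1) == cls).mean() / n_classes

print(f"vision only: {acc_vis:.2f}  vision+touch fused: {acc_fused:.2f}")
```

Even with the visual channel near-useless, the fused classifier stays accurate because the tactile channel dominates the weighted distance.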

Information-Theoretic Approaches for Sensor Selection and Placement in Sensor Networks for Target Localization and Tracking

  • Wang Hanbiao;Yao Kung;Estrin Deborah
    • Journal of Communications and Networks / v.7 no.4 / pp.438-449 / 2005
  • In this paper, we describe information-theoretic approaches to sensor selection and sensor placement in sensor networks for target localization and tracking. We have developed a sensor selection heuristic to activate the most informative candidate sensor for collaborative target localization and tracking. Fusing the observation of the selected sensor with the prior target location distribution yields nearly the greatest reduction in the entropy of the expected posterior target location distribution. Our sensor selection heuristic is computationally less complex, and thus more suitable for sensor networks with moderate computing power, than the mutual-information sensor selection criteria. We have also developed a method to compute the posterior target location distribution with the minimum entropy that could be achieved by fusing the observations of a sensor network with a given deployment geometry. We have found that the covariance matrix of this minimum-entropy posterior target location distribution is consistent with the Cramer-Rao lower bound (CRB) of the target location estimate. Using the minimum entropy of the posterior target location distribution, we have characterized the effect of the sensor placement geometry on localization accuracy.
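The selection criterion can be illustrated with a brute-force version of the expectation the paper's heuristic approximates: pick the sensor whose observation minimizes the expected entropy of the posterior location distribution. The 1-D grid, sensor positions, and Gaussian range-noise model below are invented for illustration:

```python
import numpy as np

def entropy(p):
    p = p[p > 0]
    return -(p * np.log(p)).sum()

# Discretized 1-D target locations with a uniform prior belief (assumed setup)
x = np.arange(10.0)
prior = np.ones(10) / 10

sensors = [0.0, 4.5, 9.0]            # candidate sensor positions
sigma = 1.0                          # range-measurement noise std
z_grid = np.arange(0.0, 12.0, 0.5)   # discretized measurement space

def expected_posterior_entropy(s):
    # p(z | x): Gaussian range likelihood, normalized over z for each x
    lik = np.exp(-0.5 * ((z_grid[:, None] - np.abs(x[None, :] - s)) / sigma) ** 2)
    lik /= lik.sum(0, keepdims=True)
    pz = lik @ prior                 # predictive distribution of the measurement
    H = 0.0
    for i in range(len(z_grid)):
        post = lik[i] * prior        # unnormalized posterior given z_i
        post /= post.sum()
        H += pz[i] * entropy(post)   # expectation over possible measurements
    return H

best = min(sensors, key=expected_posterior_entropy)
print("most informative sensor at", best)
```

A sensor at the end of the grid is selected: a mid-field sensor at 4.5 sees the same range for two mirror-image positions, so its measurements leave a twofold ambiguity and a higher posterior entropy.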

Motion Estimation of 3D Planar Objects using Multi-Sensor Data Fusion (센서 융합을 이용한 움직이는 물체의 동작예측에 관한 연구)

  • Yang, Woo-Suk
    • Journal of Sensor Science and Technology / v.5 no.4 / pp.57-70 / 1996
  • Motion can be estimated continuously from each sensor through the analysis of the instantaneous states of an object. This paper introduces a method to estimate the general 3D motion of a planar object from its instantaneous states using multi-sensor data fusion. The instantaneous states of an object are estimated using a linear feedback estimation algorithm. The motion estimates from the individual sensors are fused to provide more accurate and reliable information about the motion of an unknown planar object. We present a fusion algorithm that combines averaging and deciding. Under the assumption that the motion is smooth, the approach can handle data sequences from multiple sensors with different sampling times. Simulation results show that the proposed algorithm is advantageous in terms of accuracy, speed, and versatility.
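An averaging-and-deciding fusion step of this general kind can be sketched as an inverse-variance weighted mean plus a simple consistency gate; the estimates, variances, and 3-sigma threshold below are assumed values, not taken from the paper:

```python
import numpy as np

# Hypothetical per-sensor velocity estimates (vx, vy) with their error variances
estimates = np.array([[1.02, 0.48],
                      [0.95, 0.55],
                      [1.10, 0.40]])
variances = np.array([0.04, 0.01, 0.09])   # the second sensor is the most trusted

# "Averaging" step: inverse-variance weighted mean of the sensor estimates
w = 1.0 / variances
w /= w.sum()
fused = w @ estimates
fused_var = 1.0 / (1.0 / variances).sum()  # variance of the fused estimate

# "Deciding" step: flag any sensor whose estimate deviates too far from the fused one
residual = np.linalg.norm(estimates - fused, axis=1)
accepted = residual < 3.0 * np.sqrt(variances)

print("fused velocity:", fused.round(3), "variance:", round(fused_var, 4))
print("sensors accepted:", accepted)
```

The fused variance is always smaller than the best single-sensor variance, which is the usual motivation for this weighting.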


Radar and Vision Sensor Fusion for Primary Vehicle Detection (레이더와 비전센서 융합을 통한 전방 차량 인식 알고리즘 개발)

  • Yang, Seung-Han;Song, Bong-Sob;Um, Jae-Young
    • Journal of Institute of Control, Robotics and Systems / v.16 no.7 / pp.639-645 / 2010
  • This paper presents a sensor fusion algorithm that recognizes the primary vehicle by fusing radar and monocular vision data. In general, most commercial radars may lose track of the primary vehicle, i.e., the closest preceding vehicle in the same lane, when it stops or moves alongside other preceding vehicles in the adjacent lane with similar velocity and range. To mitigate this performance degradation of the radar, vehicle detection information from the vision sensor and the path predicted from the ego vehicle's sensors are combined for target classification. The target classification then works with probabilistic association filters to track the primary vehicle. Finally, the performance of the proposed sensor fusion algorithm is validated using field test data collected on a highway.
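A toy version of the gating-and-selection step, assuming radar tracks and vision detections have already been projected into common road coordinates (the track values, lane width, and association gate are hypothetical, and the paper's probabilistic association filters are replaced by a hard gate):

```python
# Hypothetical radar tracks: longitudinal range and lateral offset in road coordinates
radar_tracks = [
    {"id": 1, "range": 35.0, "lateral": 0.4},   # in ego lane
    {"id": 2, "range": 28.0, "lateral": 3.6},   # adjacent lane
    {"id": 3, "range": 55.0, "lateral": -0.2},  # in ego lane, farther away
]
# Lateral positions of vehicles confirmed by the (assumed) monocular detector
vision_lateral = [0.5, 3.4]

LANE_HALF_WIDTH = 1.75   # m, assumed lane half-width along the predicted path
GATE = 0.8               # m, association gate between radar and vision lateral positions

def primary_vehicle(tracks, vision):
    candidates = []
    for t in tracks:
        in_lane = abs(t["lateral"]) < LANE_HALF_WIDTH
        # keep only radar tracks that a vision detection confirms within the gate
        confirmed = any(abs(t["lateral"] - v) < GATE for v in vision)
        if in_lane and confirmed:
            candidates.append(t)
    # primary vehicle = closest confirmed in-lane track
    return min(candidates, key=lambda t: t["range"]) if candidates else None

print("primary vehicle id:", primary_vehicle(radar_tracks, vision_lateral)["id"])
```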

Study on Multiple Ground Target Tracking Algorithm Using Geographic Information (지형 정보를 사용한 다중 지상 표적 추적 알고리즘의 연구)

  • Kim, In-Taek;Lee, Eung-Gi
    • Journal of Institute of Control, Robotics and Systems / v.6 no.2 / pp.173-180 / 2000
  • During the last decade, many researchers have worked on the multiple-target tracking problem in the area of radar applications. Various approaches have been proposed to solve the tracking problem, and the concept of sensor fusion was established as part of that effort. In this paper, the utilization of geographic information for ground target tracking is investigated, and its performance is compared with the results of applying sensor fusion. Geographic information is used in three ways: association, masking of target measurements, and restriction on removing true targets. Simulation results indicate that using two sensors gives better tracking performance, but a single sensor with geographic information is the winner in reducing the number of false tracks.


ACCOUNTING FOR IMPORTANCE OF VARIABLES IN MULTI-SENSOR DATA FUSION USING RANDOM FORESTS

  • Park No-Wook;Chi Kwang-Hoon
    • Proceedings of the KSRS Conference / 2005.10a / pp.283-285 / 2005
  • To account for the importance of variables in multi-sensor data fusion, random forests are applied to supervised land-cover classification. The random forests approach is a non-parametric ensemble classifier based on CART-like trees. Its distinguishing feature is that the importance of a variable can be estimated by randomly permuting the variable of interest in all the out-of-bag samples for each classifier. Supervised classification with a multi-sensor remote sensing data set including optical and polarimetric SAR data was carried out to illustrate the applicability of random forests. The experimental results show that the random forests approach could extract the variables or bands important for land-cover discrimination, and it performed well compared with other non-parametric data fusion algorithms.
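The permutation-importance idea behind the random-forests variable ranking can be sketched with a deliberately simple stand-in classifier: shuffle one variable at a time and measure the accuracy drop. The synthetic "bands", the nearest-centroid model, and in-sample evaluation are simplifying assumptions; random forests instead permute the out-of-bag samples of each tree:

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic "multi-sensor" stack: band 0 carries the class signal, bands 1-4 are noise
n, n_bands = 400, 5
y = rng.integers(0, 2, size=n)
X = rng.normal(size=(n, n_bands))
X[:, 0] += 2.0 * y                      # only band 0 separates the two classes

def centroid_classifier(X, y):
    """Fit a two-class nearest-centroid classifier (stand-in for a forest)."""
    c0, c1 = X[y == 0].mean(0), X[y == 1].mean(0)
    def predict(Z):
        d0 = ((Z - c0) ** 2).sum(1)
        d1 = ((Z - c1) ** 2).sum(1)
        return (d1 < d0).astype(int)
    return predict

predict = centroid_classifier(X, y)
baseline = (predict(X) == y).mean()

# Permutation importance: shuffle one band at a time, record the accuracy drop
importance = []
for b in range(n_bands):
    Xp = X.copy()
    Xp[:, b] = rng.permutation(Xp[:, b])
    importance.append(baseline - (predict(Xp) == y).mean())

print("most important band:", int(np.argmax(importance)))
```

Only permuting the informative band hurts accuracy, so it receives the largest importance score.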


A study on the alignment of different sensor data with aerial images and lidar data (항공영상과 라이다 자료를 이용한 이종센서 자료간의 alignment에 관한 연구)

  • 곽태석;이재빈;조현기;김용일
    • Proceedings of the Korean Society of Surveying, Geodesy, Photogrammetry, and Cartography Conference / 2004.11a / pp.257-262 / 2004
  • The purpose of data fusion is to collect the maximum information by combining the data obtained from two or more sensor systems of the same or different kinds. Data fusion of same-kind sensor systems, such as optical imagery, has been the main focus, but LIDAR has recently emerged as a new technology for rapidly capturing data on physical surfaces, and high-accuracy results can be derived from LIDAR data. Considering the nature of aerial imagery and LIDAR data, it is clear that the two systems provide complementary information. Data fusion consists of two steps, alignment and matching; the complementary information can be fully utilized only after successful alignment of the aerial imagery and LIDAR data. In this research, the centroids of buildings extracted from LIDAR data are used as control information for estimating the exterior orientation parameters of aerial imagery relative to the LIDAR reference frame.


A Novel Clustering Method with Time Interval for Context Inference based on the Multi-sensor Data Fusion (다중센서 데이터융합 기반 상황추론에서 시간경과를 고려한 클러스터링 기법)

  • Ryu, Chang-Keun;Park, Chan-Bong
    • The Journal of the Korea Institute of Electronic Communication Sciences / v.8 no.3 / pp.397-402 / 2013
  • Time variation is an essential component of context awareness. For context inference using information from sensor motes, it is beneficial not only to account for time lapse but also to cluster over time intervals. In this study, we propose a novel clustering-based multi-sensor data fusion method for context inference. Within each time interval, we fused the sensed signals of each time slot and then fused the results of this first fusion again. By assessing the signals segmented according to time interval with Dempster-Shafer evidence theory based multi-sensor data fusion, we achieved enhanced context inference.
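Dempster's rule of combination, which underlies the evidence-fusion step described above, can be sketched as follows; the frame of discernment and the mass values are illustrative, not taken from the paper:

```python
def dempster_combine(m1, m2):
    """Combine two mass functions (dicts keyed by frozenset focal elements)."""
    combined = {}
    conflict = 0.0
    for a, w1 in m1.items():
        for b, w2 in m2.items():
            inter = a & b
            if inter:
                combined[inter] = combined.get(inter, 0.0) + w1 * w2
            else:
                conflict += w1 * w2          # mass assigned to the empty set
    if conflict >= 1.0:
        raise ValueError("sources are in total conflict")
    k = 1.0 - conflict                        # normalization factor
    return {s: w / k for s, w in combined.items()}

# Hypothetical frame of discernment for a context state: walking / sitting / running
W, S, R = frozenset("w"), frozenset("s"), frozenset("r")
theta = W | S | R

# Masses from two sensor slots within one time interval (illustrative numbers)
m_accel = {W: 0.6, W | R: 0.3, theta: 0.1}
m_gyro  = {W: 0.5, S: 0.2, theta: 0.3}

m = dempster_combine(m_accel, m_gyro)
print({"".join(sorted(s)): round(v, 3) for s, v in m.items()})
```

After combination the mass concentrates on "walking", since both sources support it and their conflict is renormalized away.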

An Effective Mapping for a Mobile Robot using Error Backpropagation based Sensor Fusion (오류 역전파 신경망 기반의 센서융합을 이용한 이동로봇의 효율적인 지도 작성)

  • Kim, Kyoung-Dong;Qu, Xiao-Chuan;Choi, Kyung-Sik;Lee, Suk-Gyu
    • Journal of the Korean Society for Precision Engineering / v.28 no.9 / pp.1040-1047 / 2011
  • This paper proposes a novel method based on error backpropagation neural networks to fuse laser sensor data and ultrasonic sensor data for enhancing the accuracy of mapping. For navigation, a single robot has to know its initial position and accurate information about the environment around it. However, due to the inherent properties of sensors, each sensor has its own advantages and drawbacks. In our system, a robot equipped with seven ultrasonic sensors and a laser sensor navigates two different corridor environments to map them. The experimental results show the effectiveness of heterogeneous sensor fusion using an error backpropagation algorithm for mapping.
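A minimal sketch of the idea: train a small network by error backpropagation to fuse a precise laser reading and a noisy ultrasonic reading into an occupancy decision for a map cell. The synthetic ranges, network size, and occupancy rule are assumptions for illustration, not the paper's setup:

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic training data: laser is precise, the ultrasonic (sonar) reading is noisy
n = 600
true_dist = rng.uniform(0.2, 4.0, n)
laser = true_dist + 0.02 * rng.normal(size=n)
sonar = true_dist + 0.40 * rng.normal(size=n)
occupied = (true_dist < 1.5).astype(float)   # cell counted occupied if obstacle is near

X = np.stack([laser, sonar], axis=1)
y = occupied[:, None]

# Two-layer sigmoid network trained with error backpropagation
W1 = rng.normal(scale=0.5, size=(2, 6)); b1 = np.zeros(6)
W2 = rng.normal(scale=0.5, size=(6, 1)); b2 = np.zeros(1)
sig = lambda z: 1.0 / (1.0 + np.exp(-z))

lr = 1.0
for _ in range(4000):
    h = sig(X @ W1 + b1)                 # forward pass
    out = sig(h @ W2 + b2)
    d_out = (out - y) / n                # gradient of mean cross-entropy w.r.t. output logits
    d_h = (d_out @ W2.T) * h * (1 - h)   # backpropagate through the hidden layer
    W2 -= lr * (h.T @ d_out); b2 -= lr * d_out.sum(0)
    W1 -= lr * (X.T @ d_h);   b1 -= lr * d_h.sum(0)

acc = ((sig(sig(X @ W1 + b1) @ W2 + b2) > 0.5) == (y > 0.5)).mean()
print(f"training accuracy: {acc:.2f}")
```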