• Title/Summary/Keyword: multi-sensor information fusion

116 search results

Multi-Sensor Image Alignment By Statistical Correlation (통계적 Correlation을 이용한 다중센서 영상 정합)

  • 고진신;박영태
    • Proceedings of the Korean Information Science Society Conference
    • /
    • 2003.10b
    • /
    • pp.586-588
    • /
    • 2003
  • Image fusion, an area of active research, can be performed only after the two input images have been aligned. Because images obtained from sensors with different characteristics (EO, IR, radar, etc.) carry different feature-point information, feature-based registration requires very complex and demanding preprocessing. In this paper, we use a statistical correlation measure so that very robust image registration is achieved even with simple preprocessing. We also propose a method that reduces the computational load through an efficient preprocessing step suited to the statistical technique.

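A minimal sketch of the general idea described above (image alignment by maximizing a statistical correlation score over candidate shifts), written in NumPy. The exhaustive integer-shift search and the normalized cross-correlation score are illustrative assumptions; the paper's preprocessing and correlation measure may differ.

```python
# Hypothetical sketch: align a sensed image to a reference image by searching
# integer (dy, dx) shifts that maximize normalized cross-correlation (NCC).
# Generic statistical-correlation alignment, not the paper's exact method.
import numpy as np

def ncc(a, b):
    """Normalized cross-correlation between two equally sized arrays."""
    a = a - a.mean()
    b = b - b.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum())
    return 0.0 if denom == 0 else float((a * b).sum() / denom)

def align_by_correlation(reference, sensed, max_shift=8):
    """Exhaustively search small translations and return the best (dy, dx)."""
    best_score, best_shift = -1.0, (0, 0)
    for dy in range(-max_shift, max_shift + 1):
        for dx in range(-max_shift, max_shift + 1):
            shifted = np.roll(np.roll(sensed, dy, axis=0), dx, axis=1)
            score = ncc(reference, shifted)
            if score > best_score:
                best_score, best_shift = score, (dy, dx)
    return best_shift, best_score

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    ref = rng.random((64, 64))
    moved = np.roll(np.roll(ref, 3, axis=0), -2, axis=1)  # simulated misalignment
    # The search recovers the shift that re-aligns the sensed image: (-3, 2), score ~1.0
    print(align_by_correlation(ref, moved))
```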

Design and Implementation of Multi-Sensor-based Vehicle Localization and Tracking System (멀티센서 기반 차량 위치인식 시스템의 설계 및 구현)

  • Jang, Yoon-Ho;Nam, Sang-Kyoon;Bae, Sang-Jun;Sung, Tae-Kyung;Kwak, Kyung-Sup
    • The Journal of The Korea Institute of Intelligent Transport Systems
    • /
    • v.8 no.6
    • /
    • pp.121-130
    • /
    • 2009
  • In this paper, a Gaussian probability distribution model-based multi-sensor data fusion algorithm is proposed for a vehicular location awareness system. Conventional vehicular location awareness systems operate on GPS (Global Positioning System). However, such systems do not work inside buildings or in urban areas where the receiver has difficulty receiving satellite signals. A method combining GPS and UWB (Ultra-Wideband) has been developed to address this problem; however, the vehicle still has difficulty obtaining seamless location information, since the GPS and UWB measurement systems each convert the vehicle's movement information separately. In this paper, a normalized probability distribution model-based hybrid UWB/GPS approach is proposed that utilizes both GPS location data and UWB sensor data, so the proposed system provides seamless and flexible location information. The proposed system was tested with Ubisense and Asen GPS in a 12 m × 8 m outdoor environment. As a result, the proposed system shows improved accuracy and inter-device connectivity to support various CNS (Car Navigation System) applications.

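A minimal sketch of Gaussian-model sensor fusion in the spirit of the entry above: two independent position estimates (GPS and UWB) are combined by inverse-variance weighting. The sensor variances, values, and function names are assumptions, not the paper's normalized probability model.

```python
# Illustrative sketch only: fuse a GPS fix and a UWB fix by inverse-variance
# (Gaussian) weighting. Variances are assumed, not taken from the paper.
import numpy as np

def fuse_gaussian(mean_gps, var_gps, mean_uwb, var_uwb):
    """Combine two independent Gaussian position estimates per axis."""
    w_gps = 1.0 / var_gps
    w_uwb = 1.0 / var_uwb
    fused_mean = (w_gps * mean_gps + w_uwb * mean_uwb) / (w_gps + w_uwb)
    fused_var = 1.0 / (w_gps + w_uwb)
    return fused_mean, fused_var

if __name__ == "__main__":
    gps_xy = np.array([10.2, 4.1])   # metres, hypothetical GPS fix
    uwb_xy = np.array([9.8, 4.4])    # metres, hypothetical UWB fix
    xy, var = fuse_gaussian(gps_xy, 2.5, uwb_xy, 0.3)
    print(xy, var)                   # fused estimate leans toward the lower-variance UWB fix
```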

Performance Enhancement of Attitude Estimation using Adaptive Fuzzy-Kalman Filter (적응형 퍼지-칼만 필터를 이용한 자세추정 성능향상)

  • Kim, Su-Dae;Baek, Gyeong-Dong;Kim, Tae-Rim;Kim, Sung-Shin
    • Journal of the Korea Institute of Information and Communication Engineering
    • /
    • v.15 no.12
    • /
    • pp.2511-2520
    • /
    • 2011
  • This paper describes a method for adjusting the parameters of fuzzy membership functions to improve the performance of a multi-sensor fusion system using an adaptive fuzzy-Kalman filter and cross-validation. The adaptive fuzzy-Kalman filter takes two inputs, the variation of the accelerometer measurements and the residual error of the Kalman filter; it estimates the system and measurement noise covariances and adjusts the Kalman gain accordingly. To evaluate the proposed adaptive fuzzy-Kalman filter, we built a two-axis AHRS (Attitude Heading Reference System) by fusing an accelerometer and a gyro sensor, and verified its performance by comparison with the NAV420CA-100, which is used in various airborne, marine, and land applications.
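A minimal sketch of the adaptive idea described above: a one-dimensional Kalman update whose measurement-noise term is rescaled by a simple fuzzy membership of the residual, so that large residuals are trusted less. The membership shape, constants, and scalar state are assumptions; the paper's two-input fuzzy system and AHRS model are not reproduced.

```python
# Minimal sketch, not the authors' filter: a 1-D Kalman update whose measurement
# noise R is inflated via a crude triangular fuzzy membership of the residual.
def triangular_membership(x, centre=0.0, width=1.0):
    """1 at the centre, falling linearly to 0 at |x - centre| >= width."""
    return max(0.0, 1.0 - abs(x - centre) / width)

def adaptive_kalman_step(x, p, z, q=1e-3, r_base=0.1):
    # Predict (constant-state model).
    x_pred, p_pred = x, p + q
    # Fuzzy adaptation: large residual -> low trust -> large effective R.
    residual = z - x_pred
    trust = triangular_membership(residual, width=0.5)
    r = r_base / max(trust, 1e-3)
    # Update.
    k = p_pred / (p_pred + r)
    return x_pred + k * residual, (1.0 - k) * p_pred

if __name__ == "__main__":
    x, p = 0.0, 1.0
    for z in [0.1, 0.12, 0.11, 2.0, 0.13]:   # 2.0 acts as an outlier
        x, p = adaptive_kalman_step(x, p, z)
        print(round(x, 3), round(p, 4))       # the outlier barely moves the estimate
```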

Intelligent Traffic Prediction by Multi-sensor Fusion using Multi-threaded Machine Learning

  • Aung, Swe Sw;Nagayama, Itaru;Tamaki, Shiro
    • IEIE Transactions on Smart Processing and Computing
    • /
    • v.5 no.6
    • /
    • pp.430-439
    • /
    • 2016
  • Estimation and analysis of traffic jams play a vital role in intelligent transportation systems, advancing safety as well as mobility and the optimization of environmental impact. For these reasons, many researchers currently focus on machine learning-based approaches for traffic prediction systems. This paper primarily addresses the analysis and comparison of prediction accuracy between two machine learning algorithms: Naïve Bayes and K-Nearest Neighbor (K-NN). Because the estimation accuracy of these methods depends mainly on a large amount of recorded data, and because they repeatedly compute the same function heuristically for each query, we propose applying multi-threading to these heuristic methods. The greater the amount of historical data, the more processing time is required; for a real-time system, response time is vital, so the proposed system addresses time-complexity cost as well as computational complexity. It is experimentally confirmed that K-NN does much better than Naïve Bayes, not only in prediction accuracy but also in processing time: multi-threaded K-NN computed four times faster than classical K-NN, whereas multi-threaded Naïve Bayes processed only twice as fast as the classical version.
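A minimal sketch of multi-threaded K-NN as described above: the distance computation is split across a thread pool and the k nearest labels are voted. The toy traffic features, k, and worker count are assumptions; actual speedups in CPython depend on how much of the NumPy work releases the GIL.

```python
# Illustrative sketch of a multi-threaded K-NN classifier: the distance
# computation is chunked over a thread pool, then the k nearest labels vote.
from concurrent.futures import ThreadPoolExecutor
from collections import Counter
import numpy as np

def knn_predict(train_x, train_y, query, k=5, workers=4):
    chunks = np.array_split(np.arange(len(train_x)), workers)

    def chunk_distances(idx):
        diff = train_x[idx] - query
        return idx, np.sqrt((diff * diff).sum(axis=1))

    with ThreadPoolExecutor(max_workers=workers) as pool:
        results = list(pool.map(chunk_distances, chunks))

    idx = np.concatenate([i for i, _ in results])
    dist = np.concatenate([d for _, d in results])
    nearest = idx[np.argsort(dist)[:k]]
    return Counter(train_y[nearest].tolist()).most_common(1)[0][0]

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    x = rng.random((10000, 4))                 # e.g. speed, flow, occupancy, hour (toy data)
    y = (x[:, 0] + x[:, 1] > 1.0).astype(int)  # 1 = congested (toy label)
    print(knn_predict(x, y, rng.random(4)))
```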

A Study on Developmental Direction of Interface Design for Gesture Recognition Technology

  • Lee, Dong-Min;Lee, Jeong-Ju
    • Journal of the Ergonomics Society of Korea
    • /
    • v.31 no.4
    • /
    • pp.499-505
    • /
    • 2012
  • Objective: To study how interaction between mobile devices and users is changing, through an analysis of current trends in gesture interface technology. Background: For smooth interaction between machines and users, interface technology has evolved from the command line to the mouse, and now touch and gesture recognition are being researched and used. In the future, the technology is expected to evolve into multi-modal interfaces, fusing the visual and auditory senses, and 3D multi-modal interfaces that use three-dimensional virtual worlds and brain waves. Method: Gesture interfaces and related technologies, whose development follows the evolution of mobile devices, are surveyed comprehensively. Based on how gesture information is gathered, they are divided into four categories: sensor, touch, visual, and multi-modal gesture interfaces; each category is examined through technology trends and existing examples. Through these methods, the transformation of human-device interaction is studied. Conclusion: Gesture-based interface technology enables intelligent communication between users and machines that were previously static, and is thus a key enabling technology for making human-machine interaction more dynamic. Application: The results of this study may help improve gesture interface designs currently in use.

Object Region Detection using Multi-Sensor Fusion and Background Estimation (다중센서 융합과 배경 추정을 이용한 물체 영역 검출)

  • 조주현;최해철;이진성;신호철;김성대
    • Proceedings of the IEEK Conference
    • /
    • 2001.09a
    • /
    • pp.443-446
    • /
    • 2001
  • This paper proposes a technique for detecting object regions in image sequences using sensor fusion and background estimation. After the input images obtained from the IR and CCD cameras are aligned and fused, a per-pixel background model is estimated and updated over time, allowing object regions to be detected effectively. Experiments were conducted on vehicles, and good results were obtained even with a moving camera and in relatively complex environments.

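A minimal sketch of the detection idea described above: already-aligned IR and CCD frames are fused pixel-wise, a per-pixel running-average background model is updated over time, and pixels far from the background are marked as object regions. The fusion weights, learning rate, and threshold are assumptions.

```python
# Hypothetical sketch of per-pixel background estimation on fused frames.
import numpy as np

def fuse_frames(ir, ccd, w_ir=0.5):
    """Naive pixel-wise fusion of two already-aligned frames."""
    return w_ir * ir + (1.0 - w_ir) * ccd

def detect_objects(frames, alpha=0.05, threshold=0.2):
    """Yield a boolean object mask for each fused frame in a sequence."""
    background = None
    for frame in frames:
        if background is None:
            background = frame.astype(float)
        mask = np.abs(frame - background) > threshold
        # Update the background only where no object was detected.
        background = np.where(mask, background, (1 - alpha) * background + alpha * frame)
        yield mask

if __name__ == "__main__":
    rng = np.random.default_rng(2)
    ir_seq = [rng.random((48, 64)) * 0.05 for _ in range(10)]
    ccd_seq = [rng.random((48, 64)) * 0.05 for _ in range(10)]
    ir_seq[5][20:30, 30:40] += 0.8          # simulate a warm object appearing in frame 5
    fused = [fuse_frames(ir, ccd) for ir, ccd in zip(ir_seq, ccd_seq)]
    for i, mask in enumerate(detect_objects(fused)):
        print(i, int(mask.sum()))            # frame 5 should show many object pixels
```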

Visible and SWIR Satellite Image Fusion Using Multi-Resolution Transform Method Based on Haze-Guided Weight Map (Haze-Guided Weight Map 기반 다중해상도 변환 기법을 활용한 가시광 및 SWIR 위성영상 융합)

  • Taehong Kwak;Yongil Kim
    • Korean Journal of Remote Sensing
    • /
    • v.39 no.3
    • /
    • pp.283-295
    • /
    • 2023
  • With the development of sensor and satellite technology, numerous high-resolution, multi-spectral satellite images have become available. Due to their wavelength-dependent reflection, transmission, and scattering characteristics, multi-spectral satellite images can provide complementary information for earth observation. In particular, the short-wave infrared (SWIR) band can penetrate certain types of atmospheric aerosols owing to its reduced Rayleigh scattering, which allows a clearer view and more detailed information to be captured over hazy surfaces than with the visible band. In this study, we proposed a multi-resolution transform-based image fusion method to combine visible and SWIR satellite images. The purpose of the fusion method is to generate a single integrated image that incorporates complementary information, such as detailed background information from the visible band and land-cover information in the haze region from the SWIR band. For this purpose, this study applied the Laplacian pyramid-based multi-resolution transform method, a representative image decomposition approach for image fusion. Additionally, we modified the multi-resolution fusion method by incorporating a haze-guided weight map, based on the prior knowledge that the SWIR band carries more information in pixels from the haze region. The proposed method was validated using very high-resolution satellite images from Worldview-3 containing multi-spectral visible and SWIR bands. The experimental data, which include hazy areas with limited visibility caused by wildfire smoke, were used to validate the penetration properties of the proposed fusion method. Both quantitative and visual evaluations were conducted using image quality assessment indices. The results showed that the bright features from the SWIR band in the hazy areas were successfully fused into the integrated feature maps without any loss of detailed information from the visible band.
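A compact sketch of weight-map-guided Laplacian-pyramid fusion as outlined above, in NumPy: both bands are decomposed, blended level by level with a weight map, and reconstructed. The placeholder weight map (normalized SWIR brightness) stands in for the paper's haze-guided weight map, and the crude nearest-neighbor pyramid operators are simplifications.

```python
# Hypothetical sketch of weight-map-guided Laplacian-pyramid image fusion.
import numpy as np

def downsample(img):
    return img[::2, ::2]

def upsample(img, shape):
    up = np.repeat(np.repeat(img, 2, axis=0), 2, axis=1)
    return up[:shape[0], :shape[1]]

def laplacian_pyramid(img, levels=3):
    pyr, cur = [], img.astype(float)
    for _ in range(levels):
        down = downsample(cur)
        pyr.append(cur - upsample(down, cur.shape))
        cur = down
    pyr.append(cur)                        # coarsest residual
    return pyr

def fuse(visible, swir, levels=3):
    weight = swir / (swir.max() + 1e-9)    # placeholder "haze-guided" weight map
    w_pyr = [weight]
    for _ in range(levels):
        w_pyr.append(downsample(w_pyr[-1]))
    pv, ps = laplacian_pyramid(visible, levels), laplacian_pyramid(swir, levels)
    fused_pyr = [w * s + (1 - w) * v for v, s, w in zip(pv, ps, w_pyr)]
    out = fused_pyr[-1]
    for lap in reversed(fused_pyr[:-1]):   # collapse the pyramid
        out = upsample(out, lap.shape) + lap
    return out

if __name__ == "__main__":
    rng = np.random.default_rng(3)
    vis, swir = rng.random((64, 64)), rng.random((64, 64))
    print(fuse(vis, swir).shape)           # (64, 64)
```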

Analysis of 3D Reconstruction Accuracy by ToF-Stereo Fusion (ToF와 스테레오 융합을 이용한 3차원 복원 데이터 정밀도 분석 기법)

  • Jung, Sukwoo;Lee, Youn-Sung;Lee, KyungTaek
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference
    • /
    • 2022.10a
    • /
    • pp.466-468
    • /
    • 2022
  • 3D reconstruction is an important issue in many applications such as augmented reality (AR), extended reality (XR), and the metaverse. For 3D reconstruction, a depth map can be acquired with a stereo camera and a time-of-flight (ToF) sensor. We used both sensors complementarily to improve the accuracy of the 3D data. First, we applied a general multi-camera calibration technique that uses both color and depth information. Next, the depth maps of the two sensors were fused by a 3D registration and reprojection approach. The fused data were compared with ground-truth data reconstructed using an RTC360 sensor, and we used Geomagic Wrap to analyze the average RMSE between the two data sets. The proposed procedure was implemented and tested with real-world data.

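A minimal sketch in the spirit of the evaluation described above: two co-registered depth maps (stereo and ToF) are fused by confidence-weighted averaging and scored against a ground-truth depth map with an average RMSE. The confidence values and synthetic data are assumptions; the paper's calibration, 3D registration, and Geomagic Wrap analysis are not reproduced.

```python
# Illustrative sketch: confidence-weighted depth fusion plus an RMSE score.
import numpy as np

def fuse_depth(stereo, tof, conf_stereo, conf_tof):
    """Confidence-weighted fusion of two co-registered depth maps."""
    total = conf_stereo + conf_tof + 1e-9
    return (conf_stereo * stereo + conf_tof * tof) / total

def rmse(depth, ground_truth):
    """Average root-mean-square error against the ground-truth depth."""
    diff = depth - ground_truth
    return float(np.sqrt(np.mean(diff ** 2)))

if __name__ == "__main__":
    rng = np.random.default_rng(4)
    gt = 2.0 + rng.random((120, 160))             # metres, synthetic ground truth
    stereo = gt + rng.normal(0, 0.05, gt.shape)   # noisier stereo depth
    tof = gt + rng.normal(0, 0.02, gt.shape)      # less noisy ToF depth
    fused = fuse_depth(stereo, tof, conf_stereo=0.3, conf_tof=0.7)
    print(rmse(stereo, gt), rmse(tof, gt), rmse(fused, gt))
```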

High-resolution Depth Generation using Multi-view Camera and Time-of-Flight Depth Camera (다시점 카메라와 깊이 카메라를 이용한 고화질 깊이 맵 제작 기술)

  • Kang, Yun-Suk;Ho, Yo-Sung
    • Journal of the Institute of Electronics Engineers of Korea SP
    • /
    • v.48 no.6
    • /
    • pp.1-7
    • /
    • 2011
  • The depth camera measures range information of the scene in real time using time-of-flight (TOF) technology. The measured depth data are then regularized and provided as a depth image, which is used together with the stereo or multi-view image to generate a high-resolution depth map of the scene. However, the noise and distortion of the TOF depth image must be corrected because of the technical limitations of the TOF depth camera. The corrected depth image is combined with the color image in various ways to obtain the high-resolution depth of the scene. In this paper, we introduce the principle of sensor fusion for high-quality depth generation using multiple cameras together with depth cameras, and survey the various techniques involved.
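One common fusion idea from the area surveyed above, sketched for illustration: a low-resolution ToF depth map is upsampled to the color-image resolution with a joint bilateral filter, weighting neighbors by spatial distance and color similarity. The kernel radius and sigmas are assumptions, and this is far simpler than the techniques the paper reviews.

```python
# Hypothetical sketch: joint bilateral upsampling of a low-resolution depth map
# guided by a high-resolution (greyscale) colour image.
import numpy as np

def joint_bilateral_upsample(depth_lr, color_hr, radius=2, sigma_s=2.0, sigma_c=0.1):
    h, w = color_hr.shape
    scale_y, scale_x = depth_lr.shape[0] / h, depth_lr.shape[1] / w
    out = np.zeros((h, w))
    for y in range(h):
        for x in range(w):
            num = den = 0.0
            for dy in range(-radius, radius + 1):
                for dx in range(-radius, radius + 1):
                    yy = min(max(y + dy, 0), h - 1)
                    xx = min(max(x + dx, 0), w - 1)
                    ly, lx = int(yy * scale_y), int(xx * scale_x)
                    w_s = np.exp(-(dy * dy + dx * dx) / (2 * sigma_s ** 2))          # spatial weight
                    w_c = np.exp(-((color_hr[y, x] - color_hr[yy, xx]) ** 2)
                                 / (2 * sigma_c ** 2))                                # colour-similarity weight
                    num += w_s * w_c * depth_lr[ly, lx]
                    den += w_s * w_c
            out[y, x] = num / den
    return out

if __name__ == "__main__":
    rng = np.random.default_rng(5)
    color = rng.random((32, 32))           # grey "colour" guide image
    depth = rng.random((8, 8)) + 1.0       # low-resolution ToF depth
    print(joint_bilateral_upsample(depth, color).shape)   # (32, 32)
```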

AUTOMATIC ROAD NETWORK EXTRACTION USING LIDAR RANGE AND INTENSITY DATA

  • Kim, Moon-Gie;Cho, Woo-Sug
    • Proceedings of the KSRS Conference
    • /
    • 2005.10a
    • /
    • pp.79-82
    • /
    • 2005
  • The need for road data continues to increase in industrial society, and many roads are being repaired or newly constructed in many areas. As national, city, and regional development proceeds, updating and acquiring road data for GIS (Geographical Information Systems) is essential. In this study, range data (3D ground-coordinate data) and intensity data from a single stand-alone LiDAR dataset are fused for road extraction, after which digital image processing methods are applied. LiDAR intensity data are still being studied; this study shows that road extraction using intensity data is feasible. Because intensity and range data are acquired at the same time, LiDAR avoids the registration problems of multi-sensor data fusion. A further advantage is that the intensity data are already geocoded, share the scale of the real world, and can be used to produce ortho-photos. Finally, quantitative and qualitative analyses are presented by comparing the extracted road image with a 1:1,000 digital map.

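An illustrative sketch loosely following the entry above: road candidates are picked on a rasterized LiDAR grid by combining an intensity threshold with a local flatness test on the range-derived heights. The thresholds, the toy grid, and the specific rules are assumptions, not values or steps from the study.

```python
# Illustrative sketch only: road-candidate classification on a LiDAR raster
# using intensity (asphalt tends to return low intensity) and height flatness.
import numpy as np

def road_candidates(intensity, height, int_max=0.3, slope_max=0.15):
    """Boolean mask of cells that look road-like by intensity and flatness."""
    gy, gx = np.gradient(height)
    slope = np.hypot(gy, gx)
    return (intensity < int_max) & (slope < slope_max)

if __name__ == "__main__":
    rng = np.random.default_rng(6)
    intensity = rng.random((100, 100))
    height = rng.random((100, 100)) * 0.05        # mostly flat terrain
    intensity[40:60, :] = 0.1                     # a dark, flat strip: the toy "road"
    mask = road_candidates(intensity, height)
    print(mask[40:60, :].mean(), mask.mean())     # the road rows should score higher
```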