• Title/Summary/Keyword: heterogeneous data fusion

27 search results

A Study on a Multi-sensor Information Fusion Architecture for Avionics (항공전자 멀티센서 정보 융합 구조 연구)

  • Kang, Shin-Woo;Lee, Seoung-Pil;Park, Jun-Hyeon
    • Journal of Advanced Navigation Technology, v.17 no.6, pp.777-784, 2013
  • Multi-sensor data fusion, the process of synthesizing the data produced by different types of sensors into a single body of information, is being studied and applied on a variety of platforms. Heterogeneous sensors have been integrated into various aircraft, and modern avionic systems manage them. As the performance of onboard sensors increases, the integration of sensor information is increasingly required from the avionics point of view. Information fusion has not been widely studied from the software perspective, that is, software that provides a pilot with fused information, derived from sensor data, in the form of symbology on a display device. The purpose of information fusion is to assist pilots in making mission decisions by providing a correct picture of the combat situation from the aircraft's avionics, and consequently to minimize their workload. This paper presents a software architecture for aircraft avionics equipped with different types of sensors that delivers comprehensive information to the user by processing sensor data through multi-sensor data fusion.

Online correction of drift in structural identification using artificial white noise observations and an unscented Kalman Filter

  • Chatzi, Eleni N.;Fuggini, Clemente
    • Smart Structures and Systems, v.16 no.2, pp.295-328, 2015
  • In recent years, the monitoring of structural behavior through acquisition of vibrational data has become common practice. In addition, recent advances in sensor development have made the collection of diverse dynamic information feasible. Beyond the commonly collected acceleration information, Global Positioning System (GPS) receivers and non-contact, optical techniques have also allowed the synchronous collection of highly accurate displacement data. The fusion of this heterogeneous information is crucial for the successful monitoring and control of structural systems, especially when aiming at real-time estimation. This task is not a straightforward one, as measurements are inevitably corrupted with some percentage of noise, often leading to imprecise estimation. Quite commonly, the presence of noise in acceleration signals results in drifting estimates of displacement states as a result of numerical integration. In this study, a new approach based on a time-domain identification method, namely the Unscented Kalman Filter (UKF), is proposed for correcting the "drift effect" in displacement or rotation estimates in an online manner, i.e., on the fly as data arrives. The method relies on the introduction of artificial white noise (WN) observations into the filter equations, which is shown to achieve an online correction of the drift issue, thus yielding highly accurate motion data. The proposed approach is demonstrated for two cases: first, the illustrative example of a single-degree-of-freedom linear oscillator is examined, where only acceleration measurements are assumed available; second, a field-inspired implementation is presented for the torsional identification of a tall tower structure, where acceleration measurements are obtained at a high sampling rate and non-collocated GPS displacement measurements are assumed available at a lower sampling rate. A multi-rate Kalman Filter is incorporated into the analysis in order to successfully fuse data sampled at different rates.
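The drift-correction idea can be illustrated with a toy linear Kalman filter (standing in for the paper's UKF): double-integrating a slightly biased acceleration signal drifts without bound, while an artificial zero-mean white-noise displacement observation keeps the estimate bounded. All signals, noise levels, and the observation variance below are invented for this sketch.

```python
import numpy as np

# Toy sketch of drift correction via an artificial white-noise observation.
rng = np.random.default_rng(0)
dt, n = 0.01, 5000
t = np.arange(n) * dt
disp_true = np.sin(2 * np.pi * 0.5 * t)               # true displacement
acc_true = -(2 * np.pi * 0.5) ** 2 * disp_true        # its 2nd derivative
acc_meas = acc_true + 0.05 * rng.standard_normal(n) + 0.02   # noise + bias

F = np.array([[1.0, dt], [0.0, 1.0]])   # state: [displacement, velocity]
B = np.array([0.5 * dt ** 2, dt])       # measured acceleration as input
Q = 1e-6 * np.eye(2)
H = np.array([[1.0, 0.0]])              # artificial displacement observation
R = np.array([[1.0]])                   # large variance: a weak anchor only

def estimate(use_wn_observation):
    x, P = np.zeros(2), np.eye(2)
    out = np.empty(n)
    for k in range(n):
        x = F @ x + B * acc_meas[k]                    # predict
        P = F @ P @ F.T + Q
        if use_wn_observation:
            # Artificial WN observation: displacement "measured" as 0
            y = 0.0 - H @ x
            S = H @ P @ H.T + R
            K = P @ H.T / S
            x = x + (K * y).ravel()
            P = (np.eye(2) - K @ H) @ P
        out[k] = x[0]
    return out

drift_raw = abs(estimate(False)[-1] - disp_true[-1])   # grows without bound
drift_cor = abs(estimate(True)[-1] - disp_true[-1])    # stays bounded
```

Because the artificial observation has a large variance, it barely disturbs the in-band motion estimate while preventing the integration bias from accumulating.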

DCNN Optimization Using Multi-Resolution Image Fusion

  • Alshehri, Abdullah A.;Lutz, Adam;Ezekiel, Soundararajan;Pearlstein, Larry;Conlen, John
    • KSII Transactions on Internet and Information Systems (TIIS), v.14 no.11, pp.4290-4309, 2020
  • In recent years, advancements in machine learning capabilities have allowed it to see widespread adoption for tasks such as object detection, image classification, and anomaly detection. However, despite their promise, a network's performance is ultimately limited by the quality of the data it receives. A well-trained network will still perform poorly if the data supplied to it contains artifacts, out-of-focus regions, or other visual distortions. Under normal circumstances, images of the same scene captured from differing points of focus, angles, or modalities must be analysed separately by the network, despite possibly containing overlapping information, as in the case of images of the same scene captured from different angles, or irrelevant information, as with infrared sensors that capture thermal information well but not topographical details. This can add significantly to the computational time and resources required to utilize the network without providing any additional benefit. In this study, we explore image fusion techniques to assemble multiple images of the same scene into a single image that retains the most salient key features of the individual source images while discarding overlapping or irrelevant data that provides no benefit to the network. By applying this image fusion step before inputting a dataset into the network, the number of images is significantly reduced, with the potential to improve classification accuracy by enhancing images while discarding irrelevant and overlapping regions.
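The kind of fusion step described above can be sketched as a simple pixel-level multi-focus fusion: at each pixel, keep the source image with the highest local Laplacian energy, a common focus measure. The data is synthetic and the rule is generic, not the authors' method.

```python
import numpy as np

def focus_measure(img):
    # Local energy of the discrete Laplacian: high where the image is sharp
    lap = (-4 * img
           + np.roll(img, 1, 0) + np.roll(img, -1, 0)
           + np.roll(img, 1, 1) + np.roll(img, -1, 1))
    return lap ** 2

def fuse(images):
    # Pixel-wise selection: keep each pixel from the sharpest source image
    fm = np.stack([focus_measure(im) for im in images])
    pick = np.argmax(fm, axis=0)
    return np.take_along_axis(np.stack(images), pick[None], axis=0)[0]

# Two synthetic "captures" of one scene, each blurred in a different half
rng = np.random.default_rng(1)
scene = rng.random((32, 32))

def blur(im):
    return (im + np.roll(im, 1, 0) + np.roll(im, -1, 0)
            + np.roll(im, 1, 1) + np.roll(im, -1, 1)) / 5

a, b = scene.copy(), scene.copy()
a[:, 16:] = blur(scene)[:, 16:]    # right half out of focus
b[:, :16] = blur(scene)[:, :16]    # left half out of focus
fused = fuse([a, b])
err_fused = np.abs(fused - scene).mean()
err_a = np.abs(a - scene).mean()
err_b = np.abs(b - scene).mean()
```

The fused image recovers most of the sharp content of both sources, so its reconstruction error is lower than either partially blurred input.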

Robust Data, Event, and Privacy Services in Real-Time Embedded Sensor Network Systems (실시간 임베디드 센서 네트워크 시스템에서 강건한 데이터, 이벤트 및 프라이버시 서비스 기술)

  • Jung, Kang-Soo;Kapitanova, Krasimira;Son, Sang-H.;Park, Seog
    • Journal of KIISE:Databases, v.37 no.6, pp.324-332, 2010
  • The majority of event detection in real-time embedded sensor network systems is based on data fusion that uses noisy sensor data collected from complicated real-world environments. Current research has produced several excellent low-level mechanisms to collect sensor data and perform aggregation. However, solutions that enable these systems to process readings from heterogeneous sensors in real time and subsequently detect complex events of interest in a real-time fashion need further research. We are developing real-time event detection approaches that allow lightweight data fusion and do not require significant computing resources. Underlying the event detection framework is a collection of real-time monitoring and fusion mechanisms that are invoked upon the arrival of sensor data. The combination of these mechanisms and the framework has the potential to significantly improve the timeliness and reduce the resource requirements of embedded sensor networks. In addition, we discuss privacy, a foundational requirement for trusted embedded sensor network systems, and explain an anonymization technique to ensure it.

Multimodality Image Registration and Fusion using Feature Extraction (특징 추출을 이용한 다중 영상 정합 및 융합 연구)

  • Woo, Sang-Keun;Kim, Jee-Hyun
    • Journal of the Korea Society of Computer and Information, v.12 no.2 s.46, pp.123-130, 2007
  • The aim of this study was to propose a registration and fusion method for heterogeneous small-animal acquisition systems in in-vivo small-animal studies. After an intravenous injection of $^{18}F$-FDG through the tail vein and a 60 min delay for uptake, a mouse was placed on an acrylic plate with fiducial markers made for fusion between small-animal PET (microPET R4, Concorde Microsystems, Knoxville, TN) and Discovery LS CT images. The acquired emission list-mode data were sorted into temporally framed sinograms and reconstructed using FORE rebinning and 2D-OSEM algorithms, without correction for attenuation and scatter. After PET imaging, CT images were acquired by means of a clinical PET/CT in high-resolution mode. The microPET and CT images were fused and co-registered using the fiducial markers and the segmented lung region in both data sets to perform a point-based rigid co-registration. This method improves quantitative accuracy and the interpretation of the tracer.
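The point-based rigid co-registration that the fiducial markers enable can be sketched with the standard Kabsch/SVD solution for the least-squares rotation and translation between two corresponding point sets; the marker coordinates here are synthetic, and the paper's actual registration pipeline is not reproduced.

```python
import numpy as np

def rigid_register(src, dst):
    """Least-squares rigid transform (R, t) with dst ≈ R @ src + t,
    via the Kabsch/SVD method used for fiducial-based registration."""
    sc, dc = src.mean(axis=0), dst.mean(axis=0)
    Hm = (src - sc).T @ (dst - dc)          # cross-covariance of markers
    U, _, Vt = np.linalg.svd(Hm)
    d = np.sign(np.linalg.det(Vt.T @ U.T))  # guard against a reflection
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = dc - R @ sc
    return R, t

# Synthetic fiducial markers in "PET" space and their "CT" positions
rng = np.random.default_rng(2)
pet = rng.random((4, 3)) * 100.0
th = 0.3
R_true = np.array([[np.cos(th), -np.sin(th), 0.0],
                   [np.sin(th),  np.cos(th), 0.0],
                   [0.0, 0.0, 1.0]])
ct = pet @ R_true.T + np.array([5.0, -2.0, 10.0])
R, t = rigid_register(pet, ct)
residual = np.abs(pet @ R.T + t - ct).max()   # fiducial registration error
```

With noise-free correspondences the residual is numerically zero; in practice the residual over the markers is the usual check of registration quality.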


Depthmap Generation with Registration of LIDAR and Color Images with Different Field-of-View (다른 화각을 가진 라이다와 칼라 영상 정보의 정합 및 깊이맵 생성)

  • Choi, Jaehoon;Lee, Deokwoo
    • Journal of the Korea Academia-Industrial cooperation Society, v.21 no.6, pp.28-34, 2020
  • This paper proposes an approach to the fusion of two heterogeneous sensors with different fields-of-view (FOV): a LIDAR and an RGB camera. Registration between the data captured by the LIDAR and the RGB camera provides the fusion results, and registration is complete once a depthmap corresponding to the 2-dimensional RGB image has been generated. For this fusion, an RPLIDAR-A3 (manufactured by Slamtec) and a general digital camera were used to acquire depth and image data, respectively. The LIDAR sensor provides distance information between the sensor and objects in the nearby scene, and the RGB camera provides a 2-dimensional image with color information. Fusing the 2D image with depth information enables better performance in applications such as object detection and tracking; for instance, driver assistance systems, robotics, and other systems that require visual information processing may find this work useful. Since the LIDAR provides only depth values, a depthmap that corresponds to the RGB image must be processed and generated. Experimental results are provided to validate the proposed approach.
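A minimal sketch of the registration described above: planar LIDAR returns are projected through assumed extrinsics and a pinhole camera model into the RGB image plane, producing a sparse depthmap. All intrinsics, the mounting offset, and the scan geometry are invented for illustration and are not the paper's calibration.

```python
import numpy as np

# Hypothetical pinhole intrinsics and LIDAR-to-camera extrinsics
fx, fy, cx, cy = 400.0, 400.0, 320.0, 240.0
W, H = 640, 480
R = np.eye(3)                     # assume aligned axes
t = np.array([0.0, 0.05, 0.0])    # LIDAR mounted 5 cm above the camera

def lidar_to_depthmap(angles, ranges):
    """Project planar LIDAR returns (angle, range) into the image plane,
    writing their depths into a sparse depthmap registered to the RGB frame."""
    depth = np.zeros((H, W))
    # LIDAR points in its own frame: x right, y down, z forward
    pts = np.stack([ranges * np.sin(angles),
                    np.zeros_like(ranges),
                    ranges * np.cos(angles)], axis=1)
    cam = pts @ R.T + t
    ok = cam[:, 2] > 0.1                            # in front of the camera
    u = (fx * cam[ok, 0] / cam[ok, 2] + cx).astype(int)
    v = (fy * cam[ok, 1] / cam[ok, 2] + cy).astype(int)
    inside = (0 <= u) & (u < W) & (0 <= v) & (v < H)
    depth[v[inside], u[inside]] = cam[ok, 2][inside]
    return depth

angles = np.linspace(-0.5, 0.5, 200)     # forward-facing part of the scan
depth = lidar_to_depthmap(angles, np.full(200, 2.0))   # wall at 2 m
```

The resulting depthmap is sparse (one scan line's worth of pixels); dense depth would require interpolation or multiple scans.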

Adaptive boosting in ensembles for outlier detection: Base learner selection and fusion via local domain competence

  • Bii, Joash Kiprotich;Rimiru, Richard;Mwangi, Ronald Waweru
    • ETRI Journal, v.42 no.6, pp.886-898, 2020
  • Unusual data patterns, or outliers, can be generated by human errors, incorrect measurements, or malicious activities. Detecting outliers is a difficult task that requires complex ensembles. An ideal outlier detection ensemble should consider the strengths of individual base detectors while carefully combining their outputs to create a strong overall ensemble and achieve unbiased accuracy with minimal variance. Selecting and combining the outputs of dissimilar base learners is a challenging task. This paper proposes a model that utilizes heterogeneous base learners. In the first phase, it adaptively boosts the outcomes of preceding learners by assigning weights and identifying high-performing learners based on their local domains; in the second phase, it carefully fuses their outcomes to improve overall accuracy. Ten benchmark datasets are used to train and test the proposed model. To investigate its accuracy in separating outliers from inliers, the model is evaluated using accuracy metrics. The analyzed data are presented as crosstabs and percentages, followed by a descriptive method for synthesis and interpretation.
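A toy version of the two-phase idea can be sketched with AdaBoost-style weighting over two dissimilar base detectors on synthetic 1-D data. Labels are assumed known here purely to illustrate the weight updates; the paper's local-domain competence measure is not reproduced.

```python
import numpy as np

rng = np.random.default_rng(3)
# Toy 1-D data: inliers near 0, a few outliers far away
X = np.concatenate([rng.normal(0, 1, 95), rng.normal(8, 1, 5)])
y = np.array([0] * 95 + [1] * 5)      # 1 = outlier (known only for the demo)

# Two dissimilar base detectors emitting outlier scores scaled to [0, 1]
def zscore_detector(x):
    z = np.abs(x - x.mean()) / x.std()
    return z / z.max()

def distance_detector(x):
    d = np.abs(x - np.median(x))
    return d / d.max()

scores = np.stack([zscore_detector(X), distance_detector(X)])

# Phase 1: AdaBoost-style weighting of learners and samples
sample_w = np.full(len(X), 1.0 / len(X))
alphas = []
for s in scores:
    pred = (s > 0.5).astype(int)
    err = np.clip((sample_w * (pred != y)).sum(), 1e-10, 1 - 1e-10)
    alpha = 0.5 * np.log((1 - err) / err)   # accurate learners weigh more
    alphas.append(alpha)
    sample_w *= np.exp(alpha * (2 * (pred != y) - 1))  # up-weight mistakes
    sample_w /= sample_w.sum()

# Phase 2: fuse the score vectors using the learned learner weights
fused = np.average(scores, axis=0, weights=np.maximum(alphas, 0.0))
acc = ((fused > 0.5) == y.astype(bool)).mean()
```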

RESOURCE ORIENTED ARCHITECTURE FOR MULTIMEDIA SENSOR NETWORKS (IWAIT2009)

  • Iwatani, Hiroshi;Nakatsuka, Masayuki;Takayanagi, Yutaro;Katto, Jiro
    • Proceedings of the Korean Society of Broadcast Engineers Conference, 2009.01a, pp.456-459, 2009
  • Sensor networks have been a hot research topic for the past decade, and they have moved into a phase of using multimedia sensors such as cameras and microphones [1]. Combining many types of sensor data leads to more accurate and precise information about the environment. However, the use of sensor network data is still limited to closed circumstances. Thus, in this paper, we propose a web-service-based framework for deploying multimedia sensor networks. In order to unify different types of sensor data and to support heterogeneous client applications, we use ROA (Resource Oriented Architecture) [2].
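The ROA idea, i.e., every sensor reading addressable by a URI and handled through a uniform interface, can be caricatured in a few lines. Resource names and payloads are invented; the paper's actual web-service framework is not reproduced.

```python
# Minimal flavor of ROA: sensor readings are resources addressed by URIs
# and manipulated only through a uniform get/put interface.
class ResourceStore:
    def __init__(self):
        self._resources = {}

    def put(self, uri, representation):
        self._resources[uri] = representation

    def get(self, uri):
        return self._resources.get(uri)

store = ResourceStore()
store.put("/sensors/cam1/latest", {"type": "image/jpeg", "data": b"\xff\xd8"})
store.put("/sensors/mic2/latest", {"type": "audio/wav", "data": b"RIFF"})
latest = store.get("/sensors/cam1/latest")
```

Because clients see only URIs and representations, heterogeneous sensors (cameras, microphones) and heterogeneous clients interoperate without sensor-specific protocols.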


Experimental Research on Radar and ESM Measurement Fusion Technique Using Probabilistic Data Association for Cooperative Target Tracking (협동 표적 추적을 위한 확률적 데이터 연관 기반 레이더 및 ESM 센서 측정치 융합 기법의 실험적 연구)

  • Lee, Sae-Woom;Kim, Eun-Chan;Jung, Hyo-Young;Kim, Gi-Sung;Kim, Ki-Seon
    • The Journal of Korean Institute of Communications and Information Sciences, v.37 no.5C, pp.355-364, 2012
  • Target processing mechanisms are necessary for collecting target information, real-time data fusion, and tactical environment recognition in cooperative engagement. Among these mechanisms, target tracking starts from predicting the target's speed, acceleration, and location using sensor measurements. However, the reliability of the estimates can be a problem because the measurements carry a certain amount of uncertainty. Thus, a technique that uses multiple sensors is needed to detect the target and increase reliability, and a data fusion technique is necessary to process the data provided by heterogeneous sensors for target tracking. In this paper, a target tracking algorithm based on probabilistic data association (PDA) is proposed that fuses radar and ESM sensor measurements. The radar's azimuth and range measurements and the ESM sensor's bearing-only measurement are associated by the measurement fusion method. After gating the associated measurements, the target state is estimated by a PDA filter. Simulation results show that the proposed algorithm provides improved estimation under linear and circular target motions.
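The measurement-fusion step can be illustrated by combining the radar azimuth with the more accurate ESM bearing via inverse-variance weighting before converting to a Cartesian position. The noise levels below are invented, and the PDA gating and association logic is omitted.

```python
import numpy as np

# Hypothetical 1-sigma sensor accuracies; the ESM bearing is assumed more
# accurate than the radar azimuth, while only the radar measures range.
sig_r = 50.0                       # radar range noise [m]
sig_az_radar = np.deg2rad(2.0)
sig_az_esm = np.deg2rad(0.5)

def fuse_position(r_meas, az_radar, az_esm):
    """Measurement-level fusion: combine the two bearings by
    inverse-variance weighting, then pair with the radar range."""
    w_r, w_e = 1 / sig_az_radar ** 2, 1 / sig_az_esm ** 2
    az = (w_r * az_radar + w_e * az_esm) / (w_r + w_e)
    return r_meas * np.array([np.cos(az), np.sin(az)])

rng = np.random.default_rng(4)
true_r, true_az = 10_000.0, np.deg2rad(30.0)
truth = true_r * np.array([np.cos(true_az), np.sin(true_az)])
err_radar, err_fused = [], []
for _ in range(500):
    r = true_r + sig_r * rng.standard_normal()
    azr = true_az + sig_az_radar * rng.standard_normal()
    aze = true_az + sig_az_esm * rng.standard_normal()
    err_radar.append(np.linalg.norm(
        r * np.array([np.cos(azr), np.sin(azr)]) - truth))
    err_fused.append(np.linalg.norm(fuse_position(r, azr, aze) - truth))
```

At long range the cross-range error is dominated by bearing noise, so tightening the bearing with the ESM measurement reduces the position error substantially.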

Study on Seabed Mapping using Two Sonar Devices for AUV Application (복수의 수중 소나를 활용한 수중 로봇의 3차원 지형 맵핑에 관한 연구)

  • Joe, Hangil;Yu, Son-Cheol
    • The Journal of Korea Robotics Society, v.16 no.2, pp.94-102, 2021
  • This study addresses a method for 3D reconstruction using acoustic data from heterogeneous sonar devices: a Forward-Looking Multibeam Sonar (FLMS) and a Profiling Sonar (PS). The challenges in sonar image processing are perceptual ambiguity, the loss of elevation information, and a low signal-to-noise ratio, which are caused by the ranging- and intensity-based image generation mechanism of sonars. Conventional approaches utilize additional constraints such as Lambertian reflection and redundant data at various positions, but they are vulnerable to environmental conditions. Our approach is to use two sonars with complementary data types. Typically, sonars provide reliable information in the horizontal direction, but the loss of elevation information degrades the quality of the data in the vertical direction. To overcome this characteristic of sonar devices, we adopt a crossed installation in which the PS is laid on its side and mounted on top of the FLMS. With this installation, the FLMS scans horizontal information and the PS obtains a vertical profile of the area in front of the AUV. For the fusion of the two sonar data sets, we propose a probabilistic approach: a likelihood map using geometric constraints between the two sonar devices is built, and a Monte Carlo experiment using the derived model is conducted to extract 3D points. To verify the proposed method, we conducted a simulation and a field test; as a result, a consistent seabed map was obtained. This method can be utilized for 3D seabed mapping with an AUV.
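The probabilistic fusion step can be caricatured as importance sampling: candidate 3D points are weighted by a joint likelihood combining an FLMS range/bearing constraint with a PS range/elevation constraint, and a weighted mean extracts the point. Geometry and noise levels are invented; this is not the paper's likelihood-map construction.

```python
import numpy as np

rng = np.random.default_rng(5)
# Hypothetical seabed point in the vehicle frame [x fwd, y right, z down]
p_true = np.array([8.0, 1.5, 3.0])

# The FLMS constrains range and horizontal bearing (no elevation); the PS,
# mounted crosswise, constrains range and elevation. Noise levels invented.
r_meas = np.linalg.norm(p_true)
brg = np.arctan2(p_true[1], p_true[0])
elev = np.arctan2(p_true[2], np.hypot(p_true[0], p_true[1]))
sig = 0.05

def log_likelihood(p):
    r = np.linalg.norm(p, axis=1)
    b = np.arctan2(p[:, 1], p[:, 0])
    e = np.arctan2(p[:, 2], np.hypot(p[:, 0], p[:, 1]))
    return -((r - r_meas) ** 2 + (b - brg) ** 2
             + (e - elev) ** 2) / (2 * sig ** 2)

# Monte Carlo: sample candidate points, weight by the joint sonar likelihood
candidates = p_true + rng.normal(0.0, 1.0, size=(20_000, 3))
w = np.exp(log_likelihood(candidates))
w /= w.sum()
p_hat = w @ candidates          # likelihood-weighted 3D point estimate
```

The two sensors jointly pin down all three coordinates, which neither could do alone: the FLMS leaves elevation free and the PS leaves the out-of-plane direction free.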