• Title/Summary/Keyword: Fusion accuracy


Comparison of Spatio-temporal Fusion Models of Multiple Satellite Images for Vegetation Monitoring (식생 모니터링을 위한 다중 위성영상의 시공간 융합 모델 비교)

  • Kim, Yeseul;Park, No-Wook
    • Korean Journal of Remote Sensing / v.35 no.6_3 / pp.1209-1219 / 2019
  • For consistent vegetation monitoring, it is necessary to generate time-series vegetation index datasets at fine temporal and spatial scales by fusing the complementary temporal and spatial characteristics of multiple satellite data. In this study, we quantitatively and qualitatively analyzed the prediction accuracy of time-series change information extracted from spatio-temporal fusion models of multiple satellite data for vegetation monitoring. We applied two fusion models that have been widely employed for vegetation monitoring: the Spatial and Temporal Adaptive Reflectance Fusion Model (STARFM) and the Enhanced Spatial and Temporal Adaptive Reflectance Fusion Model (ESTARFM). To quantitatively evaluate the prediction accuracy, we first generated simulated datasets from MODIS data with fine temporal scales and then used them as inputs for the spatio-temporal fusion models. The comparative experiment showed that ESTARFM predicted better than STARFM, but the performance of both models degraded as the gap between the prediction date and the simultaneous acquisition date of the input data increased. This result indicates that multiple data acquired close to the prediction date should be used to improve prediction accuracy. Given the limited availability of optical images, an advanced spatio-temporal fusion model reflecting these findings needs to be developed for vegetation monitoring.
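
The quantitative evaluation described above reduces to comparing each fused prediction against the simulated reference image. A minimal sketch of that comparison, assuming RMSE as the accuracy metric and using made-up toy NDVI grids:

```python
import numpy as np

def prediction_rmse(predicted, reference):
    """Root-mean-square error between a fused prediction and the reference image."""
    predicted = np.asarray(predicted, dtype=float)
    reference = np.asarray(reference, dtype=float)
    return float(np.sqrt(np.mean((predicted - reference) ** 2)))

# Toy NDVI grids: a perfect prediction has RMSE 0.
ref = np.array([[0.2, 0.4], [0.6, 0.8]])
pred = np.array([[0.25, 0.35], [0.65, 0.75]])
print(prediction_rmse(ref, ref))   # 0.0
print(prediction_rmse(pred, ref))  # 0.05
```

A growing RMSE as the prediction date moves away from the input acquisition date would reproduce the degradation trend the study reports.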

Aerial Object Detection and Tracking based on Fusion of Vision and Lidar Sensors using Kalman Filter for UAV

  • Park, Cheonman;Lee, Seongbong;Kim, Hyeji;Lee, Dongjin
    • International journal of advanced smart convergence / v.9 no.3 / pp.232-238 / 2020
  • In this paper, we study an aerial object detection and position estimation algorithm for the safety of UAVs flying beyond visual line of sight (BVLOS). We use a vision sensor and LiDAR to detect objects: a CNN-based YOLOv2 architecture detects objects in the 2D image, and a clustering method detects objects in the point cloud data acquired from the LiDAR. When a single sensor is used, the detection rate can degrade in specific situations depending on the sensor's characteristics, and the detection result may be absent or false. To complement the accuracy of the single-sensor detection algorithms, we use a Kalman filter and fuse the results of the individual sensors to improve detection accuracy. We estimate the 3D position of the object from its pixel position and the distance measured by the LiDAR. We verified the performance of the proposed fusion algorithm in simulation using the Gazebo simulator.
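
The 3D position estimate described above can be sketched with a standard pinhole camera model: back-project the detected pixel into a unit ray and scale it by the LiDAR-measured distance. The intrinsic parameters (`fx`, `fy`, `cx`, `cy`) and all values below are hypothetical, not taken from the paper:

```python
import numpy as np

def estimate_3d_position(u, v, distance, fx, fy, cx, cy):
    """Back-project pixel (u, v) through a pinhole model, then scale the
    unit ray by the LiDAR-measured distance to get a camera-frame 3D point."""
    ray = np.array([(u - cx) / fx, (v - cy) / fy, 1.0])
    ray /= np.linalg.norm(ray)
    return distance * ray

# An object detected at the image center lies straight along the optical axis.
p = estimate_3d_position(320, 240, 10.0, fx=500, fy=500, cx=320, cy=240)
print(p)  # [ 0.  0. 10.]
```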

Comparing Accuracy of Imputation Methods for Incomplete Categorical Data

  • Shin, Hyung-Won;Sohn, So-Young
    • Proceedings of the Korean Statistical Society Conference / 2003.05a / pp.237-242 / 2003
  • Various estimation methods have been developed for the imputation of categorical missing data, including the modal category method, logistic regression, and association rules. In this study, we propose two imputation methods (neural network fusion and voting fusion) that combine the results of individual imputation methods. A Monte Carlo simulation is used to compare the performance of these methods. Five factors are used to simulate the missing data: (1) the true model for the data, (2) data size, (3) noise size, (4) percentage of missing data, and (5) missing pattern. Overall, neural network fusion performed best, while voting fusion was better than the individual imputation methods although inferior to neural network fusion. An additional analysis of real data confirms the simulation results.
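
The voting fusion idea above can be sketched in a few lines: each individual imputation method proposes a category for a missing cell, and the majority label wins. A minimal stand-in, not the paper's implementation:

```python
from collections import Counter

def voting_fusion(predictions):
    """Fuse categorical imputations from several methods by majority vote.
    `predictions` holds one imputed category per method for a single cell."""
    counts = Counter(predictions)
    # most_common(1) breaks ties by first-seen order among equal counts
    return counts.most_common(1)[0][0]

# Three hypothetical imputation methods disagree; the majority label wins.
print(voting_fusion(["A", "B", "A"]))  # A
```

The neural network fusion variant would instead feed the per-method predictions into a small classifier trained to pick the final category, which is why it can outperform plain voting.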


FS-Transformer: A new frequency Swin Transformer for multi-focus image fusion

  • Weiping Jiang;Yan Wei;Hao Zhai
    • KSII Transactions on Internet and Information Systems (TIIS) / v.18 no.7 / pp.1907-1928 / 2024
  • In recent years, multi-focus image fusion has emerged as a prominent area of research, with transformers gaining recognition in the field of image processing. Current approaches encounter challenges such as boundary artifacts, loss of detailed information, and inaccurate localization of focused regions, leading to suboptimal fusion outcomes necessitating subsequent post-processing interventions. To address these issues, this paper introduces a novel multi-focus image fusion technique leveraging the Swin Transformer architecture. This method integrates a frequency layer utilizing Wavelet Transform, enhancing performance in comparison to conventional Swin Transformer configurations. Additionally, to mitigate the deficiency of local detail information within the attention mechanism, Convolutional Neural Networks (CNN) are incorporated to enhance region recognition accuracy. Comparative evaluations of various fusion methods across three datasets were conducted in the paper. The experimental findings demonstrate that the proposed model outperformed existing techniques, yielding superior quality in the resultant fused images.
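
For contrast with the transformer approach, the classical per-pixel focus-selection baseline (the kind that produces the boundary artifacts the paper targets) can be sketched as follows. The gradient-energy focus measure is an assumption for illustration, not the paper's method:

```python
import numpy as np

def focus_measure(img):
    """Local gradient energy as a crude sharpness (focus) measure."""
    gy, gx = np.gradient(img.astype(float))
    return gx**2 + gy**2

def naive_focus_fusion(img_a, img_b):
    """Pick, per pixel, the source image with the stronger focus measure.
    Hard per-pixel selection like this is what causes boundary artifacts."""
    mask = focus_measure(img_a) >= focus_measure(img_b)
    return np.where(mask, img_a, img_b)

# Toy case: img_a contains an edge (in focus), img_b is flat (defocused),
# so the fused output takes every pixel from img_a.
img_a = np.zeros((4, 4)); img_a[:, 2:] = 1.0
img_b = np.full((4, 4), 0.5)
fused = naive_focus_fusion(img_a, img_b)
```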

Efficient Digitizing in Reverse Engineering By Sensor Fusion (역공학에서 센서융합에 의한 효율적인 데이터 획득)

  • Park, Young-Kun;Ko, Tae-Jo;Kim, Hrr-Sool
    • Journal of the Korean Society for Precision Engineering / v.18 no.9 / pp.61-70 / 2001
  • This paper introduces a new digitization method with sensor fusion for shape measurement in reverse engineering. Digitization can be classified into contact and non-contact types according to the measurement device; the key concerns are speed and accuracy, with the non-contact type excelling in speed and the contact type in accuracy. Sensor fusion in digitization incorporates the merits of both types so that the system can be automated. First, a non-contact vision sensor rapidly acquires coarse 3D point data. This step identifies and localizes an object placed at an unknown position on the table. Second, accurate 3D point data are obtained automatically with a scanning probe guided by the previously measured coarse 3D points. In this research, a large number of equidistant measuring points were generated along the line acquired by the vision system. Finally, the digitized 3D point data are approximated by a rational B-spline surface equation, and the free-form surface information can be transferred to a commercial CAD/CAM system via IGES translation in order to machine the modeled geometric shape.
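
The step of generating equidistant probe targets along a vision-derived line can be sketched as below; the spacing and endpoints are hypothetical:

```python
import numpy as np

def equidistant_points(p_start, p_end, spacing):
    """Generate equally spaced probe-target points along a line segment
    obtained from the coarse vision scan. The endpoint is included only
    when the segment length is a multiple of the spacing."""
    p_start = np.asarray(p_start, dtype=float)
    p_end = np.asarray(p_end, dtype=float)
    length = np.linalg.norm(p_end - p_start)
    n = int(length // spacing) + 1          # number of points, including the start
    t = np.arange(n) * spacing / length     # normalized positions along the segment
    return p_start + t[:, None] * (p_end - p_start)

# A 10 mm segment probed every 2 mm yields 6 targets: x = 0, 2, ..., 10.
pts = equidistant_points([0, 0, 0], [10, 0, 0], 2.0)
```

The scanning probe would then visit each point to collect the accurate measurements that the B-spline surface is fitted to.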


Fast Cooperative Sensing with Low Overhead in Cognitive Radios

  • Dai, Zeyang;Liu, Jian;Li, Yunji;Long, Keping
    • KSII Transactions on Internet and Information Systems (TIIS) / v.8 no.1 / pp.58-73 / 2014
  • As is well known, cooperative sensing can significantly improve sensing accuracy compared to local sensing in cognitive radio networks (CRNs). However, a large number of cooperative secondary users (SUs) reporting their local detection results to the fusion center (FC) causes considerable overhead, such as sensing delay and energy consumption. In this paper, we propose a fast cooperative sensing scheme, called double threshold fusion (DTF), to reduce the sensing overhead while satisfying a given sensing accuracy requirement. In DTF, the FC compares the number of successfully received local decisions and the number of failed receptions against two different thresholds to make a final decision in each reporting sub-slot of a sensing process, where cooperative SUs sequentially report their local decisions in a selective fashion to reduce the reporting overhead. By jointly applying sequential detection and selective reporting, DTF significantly reduces the overhead of cooperative sensing. In addition, we study performance optimization problems with different objectives for DTF and develop three optimum fusion rules accordingly. Simulation results show that DTF achieves evident performance gains over an existing scheme.
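
The sequential double-threshold decision described above can be sketched as follows. The report encoding and both thresholds are assumptions for illustration, not the paper's parameterization:

```python
def double_threshold_fusion(reports, k_present, k_absent):
    """Scan per-sub-slot reports in order and decide as early as possible.
    Each report is 1 (local decision received: signal present) or
    0 (failed reception / no report). Hypothetical rule: k_present
    positive reports -> 'present'; k_absent failed receptions -> 'absent'."""
    ones = zeros = 0
    for slot, r in enumerate(reports, start=1):
        if r == 1:
            ones += 1
        else:
            zeros += 1
        if ones >= k_present:
            return "present", slot   # early decision saves remaining sub-slots
        if zeros >= k_absent:
            return "absent", slot
    return "undecided", len(reports)

# Decision falls at sub-slot 4 instead of waiting for all 5 reports.
print(double_threshold_fusion([1, 0, 1, 1, 0], k_present=3, k_absent=3))
# ('present', 4)
```

Stopping at the earliest sub-slot that crosses either threshold is what cuts the sensing delay and reporting energy.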

Multi-Attribute Data Fusion for Energy Equilibrium Routing in Wireless Sensor Networks

  • Lin, Kai;Wang, Lei;Li, Keqiu;Shu, Lei
    • KSII Transactions on Internet and Information Systems (TIIS) / v.4 no.1 / pp.5-24 / 2010
  • Data fusion is an attractive technology because it allows various trade-offs among performance metrics, e.g., energy, latency, accuracy, fault-tolerance, and security in wireless sensor networks (WSNs). In a complicated environment, each sensor node must be equipped with more than one type of sensor module to monitor multiple targets, so the complexity of the fusion process increases due to the various physical attributes involved. In this paper, we first investigate the process and performance of multi-attribute fusion in WSN data gathering, and then propose a self-adaptive threshold method to balance the different change rates of each attribute's data. Furthermore, we present a method to measure the energy-conservation efficiency of multi-attribute fusion. Based on these methods, we design a novel energy equilibrium routing method for WSNs, the multi-attribute fusion tree (MAFT). Simulation results demonstrate that MAFT achieves very good performance in terms of network lifetime.
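
One hedged reading of the self-adaptive threshold idea: scale a base reporting threshold per attribute by that attribute's relative change rate, so fast-changing attributes get looser absolute thresholds and slow-changing ones tighter. This is an illustrative sketch, not the paper's formula:

```python
def adaptive_thresholds(change_rates, base_threshold):
    """Per-attribute thresholds proportional to each attribute's change
    rate relative to the mean rate (hypothetical balancing rule)."""
    mean_rate = sum(change_rates) / len(change_rates)
    return [base_threshold * r / mean_rate for r in change_rates]

# Made-up rates: temperature changes twice as fast as humidity,
# so its reporting threshold is proportionally looser.
print(adaptive_thresholds([2.0, 1.0], base_threshold=0.3))
```

A node would then report an attribute only when its value moves by more than that attribute's threshold since the last report.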

An Improved Multi-resolution image fusion framework using image enhancement technique

  • Jhee, Hojin;Jang, Chulhee;Jin, Sanghun;Hong, Yonghee
    • Journal of the Korea Society of Computer and Information / v.22 no.12 / pp.69-77 / 2017
  • This paper presents a novel framework for multi-scale image fusion. The Multi-scale Kalman Smoothing (MKS) algorithm with a quad-tree structure provides a powerful multi-resolution image fusion scheme by exploiting the Markov property. In general, this approach delivers outstanding fusion performance in terms of accuracy and efficiency; however, the quad-tree based method is often hard to apply in certain applications because its stair-like covariance structure produces unrealistic blocky artifacts in the fusion result wherever finest-scale data are void or missing. To mitigate this structural artifact, we propose a new multi-scale fusion framework. By applying a Super Resolution (SR) technique within the MKS algorithm, a finely resolved measurement is generated and blended through the tree structure, so that missing detail in the data-gap regions of the fine-scale image is properly inferred and the blocky artifacts are successfully suppressed. Simulation results show that the proposed method significantly improves on the conventional MKS algorithm in terms of both Root Mean Square Error (RMSE) and visual quality.
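
The gap-filling idea can be illustrated at its simplest: where fine-scale pixels are missing, fall back on an upsampled coarse estimate. This is a minimal stand-in for the SR-augmented tree blending, assuming plain pixel-replication upsampling rather than the paper's SR technique:

```python
import numpy as np

def blend_missing(fine, coarse_up, valid_mask):
    """Keep fine-scale pixels where available; elsewhere substitute the
    upsampled coarse estimate (a crude stand-in for tree blending)."""
    return np.where(valid_mask, fine, coarse_up)

# Hypothetical 2x upsample of a coarse image by pixel replication.
coarse = np.array([[10.0, 20.0]])
coarse_up = np.repeat(np.repeat(coarse, 2, axis=0), 2, axis=1)

# Right half of the fine image is a data-gap region.
fine = np.array([[11.0, 12.0, 0.0, 0.0],
                 [ 9.0, 10.0, 0.0, 0.0]])
valid = np.array([[True, True, False, False],
                  [True, True, False, False]])
blended = blend_missing(fine, coarse_up, valid)
```

A hard substitution like this still leaves a seam at the gap boundary, which is exactly the blocky artifact the SR-generated measurements are meant to smooth out.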

The Classification Accuracy Improvement of Satellite Imagery Using Wavelet Based Texture Fusion Image (웨이브릿 기반 텍스처 융합 영상을 이용한 위성영상 자료의 분류 정확도 향상 연구)

  • Hwang, Hwa-Jeong;Lee, Ki-Won;Kwon, Byung-Doo;Yoo, Hee-Young
    • Korean Journal of Remote Sensing / v.23 no.2 / pp.103-111 / 2007
  • Spectral-information-based image analysis, visual interpretation, and automatic classification have been widely used for remote sensing data processing. Recently, however, many researchers have tried to extract spatial information that is not expressed directly in the image itself. Using texture analysis and the wavelet scheme, we produced a wavelet-based texture fusion image that combines the advantages of both. Using these schemes, we carried out image classification for urban spatial analysis and for geological structure analysis around a caldera area. These two case studies showed that the classification accuracy of the texture image and the wavelet-based texture fusion image is better than that of the raw image alone. For the urban area, using a high-resolution image, classification accuracy was highest when both the texture and the wavelet-based texture fusion images were added to the original image, because detailed spatial information benefits urban areas where fine pixel variation is significant. For the geological structure analysis, using medium- and low-resolution images, adding only the texture image gave the highest classification accuracy; this suggests that information such as elevation variation and thermal distribution should be simplified when analyzing a relatively large geological structure like a caldera. Therefore, in image analysis using spatial information, the analysis method should be selected carefully by considering the characteristics of the satellite images and the purpose of the study.
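
Adding a texture band to the spectral bands before classification can be sketched as below. Local variance in a sliding window is used here as a simple illustrative texture measure; it is an assumption, not the paper's texture or wavelet scheme:

```python
import numpy as np

def local_variance(img, radius=1):
    """Simple texture band: per-pixel variance over a (2*radius+1)^2
    neighborhood, computed with edge padding."""
    img = img.astype(float)
    pad = np.pad(img, radius, mode="edge")
    h, w = img.shape
    windows = np.stack([
        pad[dy:dy + h, dx:dx + w]
        for dy in range(2 * radius + 1)
        for dx in range(2 * radius + 1)
    ])
    return windows.var(axis=0)

def stack_features(raw, texture):
    """Stack the raw band and the texture band into per-pixel feature vectors
    that a classifier can consume alongside the spectral information."""
    return np.dstack([raw, texture])

# A flat image has zero texture everywhere; the stack still has 2 features per pixel.
band = np.ones((4, 4))
features = stack_features(band, local_variance(band))
```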

AGV Navigation Using a Space and Time Sensor Fusion of an Active Camera

  • Jin, Tae-Seok;Lee, Bong-Ki;Lee, Jang-Myung
    • Journal of Navigation and Port Research / v.27 no.3 / pp.273-282 / 2003
  • This paper proposes a sensor-fusion technique in which the data sets from previous moments are properly transformed and fused into the current data sets to enable accurate measurement of quantities such as the distance to an obstacle and the location of the service robot itself. In conventional fusion schemes, the measurement depends only on the current data sets; as a result, more sensors are required to measure a given physical parameter or to improve measurement accuracy. In this approach, instead of adding more sensors to the system, the temporal sequence of data sets is stored and utilized to improve the measurement. The theoretical basis is illustrated by examples, and the effectiveness is proven through simulation. Finally, the new space and time sensor fusion (STSF) scheme is applied to the control of a mobile robot in an indoor environment, and its performance is demonstrated in real experiments.
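
The core transform-then-fuse step can be sketched for the planar case: express a previous measurement in the current robot frame using the known motion between the two moments, then combine it with the current measurement. The weighted average and all values are illustrative assumptions, not the paper's STSF formulation:

```python
import numpy as np

def fuse_space_time(prev_point, motion, curr_point, w_prev=0.5):
    """Transform a landmark point measured at the previous pose into the
    current robot frame using known planar motion (dx, dy, dtheta),
    then fuse it with the current measurement by a weighted average."""
    dx, dy, dth = motion
    c, s = np.cos(dth), np.sin(dth)
    # Inverse motion: rotate/translate the old measurement into the new frame.
    R = np.array([[c, s], [-s, c]])
    transformed = R @ (np.asarray(prev_point, float) - np.array([dx, dy]))
    return w_prev * transformed + (1 - w_prev) * np.asarray(curr_point, float)

# The robot advanced 1 m without turning, so a landmark previously seen at
# (3, 0) should now appear at (2, 0); fusing it with a noisy current reading
# of (2.2, 0) pulls the estimate toward the transformed old measurement.
fused = fuse_space_time([3.0, 0.0], (1.0, 0.0, 0.0), [2.2, 0.0])
```

Reusing stored past measurements this way plays the role that extra redundant sensors play in conventional fusion schemes.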