• Title/Summary/Keyword: Information Fusion

1,868 results

Rank-level Fusion Method That Improves Recognition Rate by Using Correlation Coefficient (상관계수를 이용하여 인식률을 향상시킨 rank-level fusion 방법)

  • Ahn, Jung-ho;Jeong, Jae Yeol;Jeong, Ik Rae
    • Journal of the Korea Institute of Information Security & Cryptology / v.29 no.5 / pp.1007-1017 / 2019
  • Currently, most biometric systems authenticate users with a single piece of biometric information. This approach suffers from many problems, such as noise, sensitivity to data quality, spoofing, and limited recognition rates. One way to address these problems is to use multiple biometric traits. A multi-biometric authentication system performs information fusion on the individual biometric inputs to generate new information, and then uses that information to authenticate the user. Among information fusion methods, score-level fusion is widely used, but it requires a normalization step, and even for the same data the recognition rate varies with the normalization method chosen. Rank-level fusion methods avoid normalization, but existing rank-level fusion methods achieve lower recognition rates than score-level fusion. To solve this problem, we propose a rank-level fusion method that uses correlation coefficients and achieves a higher recognition rate than score-level fusion. Our experiments compare the recognition rates of existing rank-level fusion methods with that of the proposed method using iris data (CASIA V3) and face data (FERET V1), and also compare against score-level fusion methods. As a result, the recognition rate improves by about 0.3% to 3.3%.
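The paper's correlation-weighted variant is not spelled out in the abstract; for reference, here is a minimal sketch of plain Borda-count rank-level fusion, the baseline such methods improve on (the function name and array shapes are our own illustration):

```python
import numpy as np

def borda_fusion(rank_lists):
    """Fuse per-matcher rank lists by Borda count.

    rank_lists: array of shape (n_matchers, n_candidates); entry [m, c]
    is the rank (1 = best) that matcher m assigns to candidate c.
    Returns candidate indices ordered best-first after fusion.
    """
    ranks = np.asarray(rank_lists, dtype=float)
    n_candidates = ranks.shape[1]
    # Borda score: rank 1 earns n_candidates - 1 points, the worst rank earns 0.
    scores = (n_candidates - ranks).sum(axis=0)
    return np.argsort(-scores)

# Two matchers rank three enrolled identities; identity 0 is ranked
# first by both, so it wins after fusion.
fused = borda_fusion([[1, 2, 3],
                      [1, 3, 2]])
```

Normalization-free fusion like this is exactly why rank-level methods sidestep the score-normalization sensitivity described above.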

Traffic Flow Prediction with Spatio-Temporal Information Fusion using Graph Neural Networks

  • Huijuan Ding;Giseop Noh
    • International journal of advanced smart convergence / v.12 no.4 / pp.88-97 / 2023
  • Traffic flow prediction is of great significance in urban planning and traffic management. As the complexity of urban traffic increases, existing prediction methods still face challenges, especially in fusing spatio-temporal information and capturing long-term dependencies. This study uses a fused graph neural network model to address the spatio-temporal information fusion problem in traffic flow prediction. We propose a new deep learning model, Spatio-Temporal Information Fusion using Graph Neural Networks (STFGNN). GCN, TCN, and LSTM modules are applied alternately to carry out spatio-temporal information fusion: the GCN and the multi-kernel TCN capture the spatial and temporal dependencies of traffic flow, respectively, and the LSTM connects multiple fusion modules. In experimental evaluation on real traffic flow data, STFGNN outperformed the other models compared.
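The abstract does not give STFGNN's equations; purely as an illustration of the two primitives it alternates, here is a numpy sketch of one graph-convolution (spatial) step and one causal temporal-convolution step. The mean-aggregation form and the names `gcn_step`/`tcn_step` are our simplifications, not the paper's architecture:

```python
import numpy as np

def gcn_step(adj, features):
    # One simplified graph-convolution propagation: average each node's
    # features with its neighbors' (self-loops added) -- the spatial half.
    a_hat = adj + np.eye(adj.shape[0])
    d_inv = 1.0 / a_hat.sum(axis=1, keepdims=True)
    return d_inv * (a_hat @ features)

def tcn_step(series, kernel):
    # Causal 1-D convolution along time (zero-padded on the left so the
    # output at time t sees only t and earlier) -- the temporal half.
    k = len(kernel)
    padded = np.concatenate([np.zeros(k - 1), series])
    return np.array([padded[t:t + k] @ kernel for t in range(len(series))])
```

Stacking such spatial and temporal steps, with recurrent units linking the stages, is the general pattern the model's fusion modules follow.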

A Noisy Infrared and Visible Light Image Fusion Algorithm

  • Shen, Yu;Xiang, Keyun;Chen, Xiaopeng;Liu, Cheng
    • Journal of Information Processing Systems / v.17 no.5 / pp.1004-1019 / 2021
  • To address the problems of low image contrast and blurred or missing edge details in noisy image fusion, this study proposes a noisy infrared and visible light image fusion algorithm based on the non-subsampled contourlet transform (NSCT) and an improved bilateral filter. NSCT decomposes each image into a low-frequency component and high-frequency components. Since high-frequency noise and edge information are concentrated in the high-frequency components, the improved bilateral filter is applied to the high-frequency components of both images, filtering out noise while computing the detail of the infrared image's high-frequency component. By superimposing the high-frequency components of the infrared and visible images, the method extracts as much edge detail from both as possible; at the same time, edge information is enhanced and the visual effect is clearer. For the low-frequency coefficients, a fusion rule based on the local-area standard deviation is adopted. Finally, the fused high- and low-frequency coefficients are recombined via the inverse NSCT to obtain the fused image. The fusion results show that edges, contours, textures, and other details are maintained and enhanced while the noise is filtered out, yielding fused images with clear edges.
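The low-frequency fusion rule mentioned above, choosing the coefficient whose local neighborhood has the larger standard deviation, can be sketched as follows; the window size and padding mode are our assumptions:

```python
import numpy as np

def fuse_lowfreq(low_a, low_b, win=3):
    """Choose-max fusion of two low-frequency subbands by local
    standard deviation: at each pixel, keep the coefficient from the
    subband whose win x win neighborhood varies more."""
    def local_std(img):
        pad = win // 2
        p = np.pad(img, pad, mode='reflect')
        out = np.empty_like(img, dtype=float)
        h, w = img.shape
        for i in range(h):
            for j in range(w):
                out[i, j] = p[i:i + win, j:j + win].std()
        return out
    mask = local_std(low_a) >= local_std(low_b)
    return np.where(mask, low_a, low_b)
```

The intuition is that a larger local standard deviation signals more structure, so that subband's coefficient is more informative at that location.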

Fusion Techniques Comparison of GeoEye-1 Imagery

  • Kim, Yong-Hyun;Kim, Yong-Il;Kim, Youn-Soo
    • Korean Journal of Remote Sensing / v.25 no.6 / pp.517-529 / 2009
  • Many satellite image fusion techniques have been developed to produce a high-resolution multispectral (MS) image by combining a high-resolution panchromatic (PAN) image with a low-resolution MS image. Until now, most high-resolution image fusion work has used IKONOS and QuickBird images. Recently, GeoEye-1, which offers the highest resolution of any commercial imaging system, was launched. In this study, we experiment with GeoEye-1 images to evaluate which fusion algorithms suit them. This paper compares and evaluates the efficiency of five image fusion techniques for a GeoEye-1 image: the à trous algorithm based additive wavelet transform (AWT) fusion technique, the principal component analysis (PCA) fusion technique, Gram-Schmidt (GS) spectral sharpening, Pansharp, and the smoothing filter-based intensity modulation (SFIM) fusion technique. The results show that the AWT technique preserves more of the PAN image's spatial detail and the MS image's spectral information than the other techniques; the Pansharp technique preserves the original PAN and MS information about as well as AWT does.
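Of the five, SFIM is the simplest to state: each MS band is modulated by the ratio of the PAN image to its low-pass-filtered version. A minimal sketch, where the window size and the assumption that the MS band is already upsampled to the PAN grid are ours:

```python
import numpy as np

def sfim(ms, pan, win=3):
    """Smoothing Filter-based Intensity Modulation pansharpening:
    fused = MS * PAN / mean_filter(PAN). ms and pan are co-registered
    2-D arrays of the same shape."""
    pad = win // 2
    p = np.pad(pan, pad, mode='reflect')
    smooth = np.empty_like(pan, dtype=float)
    h, w = pan.shape
    for i in range(h):
        for j in range(w):
            smooth[i, j] = p[i:i + win, j:j + win].mean()
    # Guard against division by zero in flat dark regions.
    return ms * pan / np.maximum(smooth, 1e-12)
```

Because the ratio PAN / smooth(PAN) carries only high-frequency spatial detail, the MS band's spectral values are modulated rather than replaced, which is why SFIM tends to preserve spectral fidelity.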

Lane Information Fusion Scheme using Multiple Lane Sensors (다중센서 기반 차선정보 시공간 융합기법)

  • Lee, Soomok;Park, Gikwang;Seo, Seung-woo
    • Journal of the Institute of Electronics and Information Engineers / v.52 no.12 / pp.142-149 / 2015
  • Most mono-camera lane detection systems are fragile under poor illumination. To compensate for the limitations of a single sensor, a lane information fusion system using multiple lane sensors is an alternative that stabilizes performance and guarantees high precision. However, conventional fusion schemes, which concern only object detection, are inappropriate for lane information fusion, and the few studies that do consider it either treat the additional sensor merely as a limited back-up aid or omit the cases of asynchronous multi-rate sensors and differing coverage. In this paper, we propose a lane information fusion scheme that utilizes multiple lane sensors with different coverage and cycle times. Precise fusion is achieved by a framework that accounts for the individual ranging capability and processing time of diverse types of lane sensors. In addition, a novel lane estimation model synchronizes multi-rate sensors precisely by up-sampling the sparser lane information signals. Through quantitative vehicle-level experiments with an around-view monitoring system and a frontal camera system, we demonstrate the robustness of the proposed lane fusion scheme.
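The synchronization idea, up-sampling the slower sensor's sparse lane signal onto the faster sensor's timestamps before fusing, can be sketched with linear interpolation followed by inverse-variance weighting. The paper's actual estimation model is more elaborate; the names and the weighting rule here are our illustration:

```python
import numpy as np

def upsample_lane(t_sensor, offsets, t_target):
    # Linearly interpolate a slow sensor's lateral-offset samples onto a
    # faster sensor's timestamps so both streams align frame by frame.
    return np.interp(t_target, t_sensor, offsets)

def fuse_lane_estimates(est_a, var_a, est_b, var_b):
    # Inverse-variance weighting of two synchronized lane estimates:
    # the sensor with the smaller variance contributes more.
    w_a = var_b / (var_a + var_b)
    return w_a * est_a + (1.0 - w_a) * est_b
```

This makes explicit why differing cycle times matter: without resampling onto a common time base, the per-frame weighting step has nothing aligned to combine.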

Multi-modality image fusion via generalized Riesz-wavelet transformation

  • Jin, Bo;Jing, Zhongliang;Pan, Han
    • KSII Transactions on Internet and Information Systems (TIIS) / v.8 no.11 / pp.4118-4136 / 2014
  • To preserve the spatial consistency of low-level features, the generalized Riesz-wavelet transform (GRWT) is adopted for fusing multi-modality images. The proposed method can capture arbitrary directional image structure by exploiting a suitably parameterized fusion model and additional structural information. Its fusion patterns are controlled by a heuristic fusion model based on image phase and coherence features, which explores and keeps structural information efficiently and consistently. A performance analysis on real-world images demonstrates that the method is competitive with state-of-the-art fusion methods, especially in combining structural information.

An Efficient Rank-level Fusion Method Improving Recognition Rate (인식률을 향상시키는 효과적인 Rank-level fusion 방법)

  • Ahn, Jung-Ho;Kwon, Taeyean;Noh, Geontae;Jeong, Ik Rae
    • Proceedings of the Korea Information Processing Society Conference / 2017.04a / pp.312-314 / 2017
  • User authentication based on biometric information is a next-generation authentication method that is being adopted rapidly in existing authentication systems. Most current biometric authentication systems use a single biometric trait, and such single-trait systems suffer from many problems: noise, data-quality issues, and limited recognition rates. One way to address this is user authentication based on multiple biometric traits. A multi-biometric authentication system applies information fusion to the individual biometric inputs to generate new information and then authenticates the user based on that information. Among information fusion methods, rank-level fusion is chosen as an alternative to score-level fusion, which requires a normalization step and has high computational complexity. This paper therefore proposes a rank-level fusion method with higher accuracy than existing methods. We also show experimentally that the proposed method improves accuracy even when matchers with low individual accuracy are used.

Incomplete Cholesky Decomposition based Kernel Cross Modal Factor Analysis for Audiovisual Continuous Dimensional Emotion Recognition

  • Li, Xia;Lu, Guanming;Yan, Jingjie;Li, Haibo;Zhang, Zhengyan;Sun, Ning;Xie, Shipeng
    • KSII Transactions on Internet and Information Systems (TIIS) / v.13 no.2 / pp.810-831 / 2019
  • Recently, continuous dimensional emotion recognition from audiovisual cues has attracted increasing attention in both theory and practice. The large amount of data involved in the recognition process decreases the efficiency of most bimodal information fusion algorithms. In this paper, a novel algorithm, incomplete Cholesky decomposition based kernel cross-modal factor analysis (ICDKCFA), is presented and employed for continuous dimensional audiovisual emotion recognition. After the ICDKCFA feature transformation, two basic fusion strategies, feature-level fusion and decision-level fusion, are explored to combine the transformed visual and audio features for emotion recognition. Finally, extensive experiments evaluate the ICDKCFA approach on the AVEC 2016 Multimodal Affect Recognition Sub-Challenge dataset. The results show that ICDKCFA is faster than the original kernel cross-modal factor analysis with comparable performance, and that it outperforms other common information fusion methods, such as canonical correlation analysis, kernel canonical correlation analysis, and cross-modal factor analysis based fusion.
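The two fusion strategies the paper compares can be stated compactly. This is a generic sketch of the pattern, not the paper's exact configuration:

```python
import numpy as np

def feature_level_fuse(feat_audio, feat_video):
    # Feature-level fusion: concatenate the transformed modality features
    # and hand the joint vector to a single downstream predictor.
    return np.concatenate([feat_audio, feat_video], axis=-1)

def decision_level_fuse(pred_audio, pred_video, w=0.5):
    # Decision-level fusion: each modality predicts on its own (e.g.
    # valence/arousal values) and the predictions are weighted-averaged.
    return w * pred_audio + (1.0 - w) * pred_video
```

Feature-level fusion lets the predictor exploit cross-modal correlations directly, while decision-level fusion keeps the modalities independent until the final combination, which is cheaper and more robust to a failing modality.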

The Sensory-Motor Fusion System for Object Tracking (이동 물체를 추적하기 위한 감각 운동 융합 시스템 설계)

  • Lee, Sang-Hee;Wee, Jae-Woo;Lee, Chong-Ho
    • The Transactions of the Korean Institute of Electrical Engineers D / v.52 no.3 / pp.181-187 / 2003
  • For moving platforms with environmental sensors, such as an object-tracking mobile robot with audio and video sensors, the environmental information acquired from the sensors keeps changing as objects move. In such cases, conventional control schemes show limited performance due to their lack of adaptability and their complexity, so sensory-motor systems that can respond intuitively to various types of environmental information are desirable; to improve robustness, it is also desirable to fuse two or more types of sensory information simultaneously. In this paper, based on Braitenberg's model, we propose a sensory-motor fusion system that can track moving objects adaptively under environmental changes. With its directly connected structure, the sensory-motor fusion system can control each motor simultaneously, and neural networks are used to fuse information from the various sensor types. Even if one sensor delivers noisy information, the system still works robustly, because information from the other sensors compensates for the noise through sensor fusion. To examine performance, the sensory-motor fusion model is applied to an object-tracking four-legged robot equipped with audio and video sensors. The experimental results show that the system tracks moving objects robustly with a simpler control mechanism than model-based control approaches.
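Braitenberg's model couples sensors to motors directly; the crossed excitatory wiring that makes a vehicle turn toward a stimulus can be sketched in a few lines. This is a toy illustration of the underlying principle, not the paper's neural-network fusion:

```python
def braitenberg_motors(left_intensity, right_intensity, gain=1.0):
    """Crossed excitatory coupling: each motor is driven by the opposite
    side's sensor, so the wheel away from the stimulus spins faster and
    the vehicle steers toward the source."""
    left_motor = gain * right_intensity
    right_motor = gain * left_intensity
    return left_motor, right_motor
```

Replacing the fixed gain with a learned neural mapping over several sensor modalities is the step that turns this reflex into the fusion system described above.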

Compression Filters Based on Time-Propagated Measurement Fusion (시전달 측정치 융합에 기반한 압축필터)

  • Lee, Hyeong-Geun;Lee, Jang-Gyu
    • The Transactions of the Korean Institute of Electrical Engineers D / v.51 no.9 / pp.389-401 / 2002
  • To complement the conventional fusion methodologies of state fusion and measurement fusion, a time-propagated measurement fusion methodology is proposed. Various aspects of common process noise are investigated with respect to information preservation. Based on the time-propagated measurement fusion methodology, four compression filters are derived; they are efficient for asynchronous sensor fusion and fault detection since they maintain correct statistical information. A new batch Kalman recursion is proposed to show optimality under the time-propagated measurement fusion methodology. A simple simulation result evaluates estimation efficiency and characteristics.
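The static building block underlying measurement fusion, combining two sensor readings of the same quantity into a minimum-variance estimate, can be sketched as follows (scalar case, our illustration; the time-propagated variant extends this kind of combination across filter updates):

```python
def fuse_measurements(z1, r1, z2, r2):
    """Minimum-variance fusion of two scalar measurements z1, z2 with
    noise variances r1, r2; returns the fused value and its variance."""
    info = 1.0 / r1 + 1.0 / r2          # add information (inverse variances)
    fused = (z1 / r1 + z2 / r2) / info  # information-weighted average
    return fused, 1.0 / info
```

Note that the fused variance is always smaller than either input variance, which is the basic reason fusing redundant sensors pays off.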