• Title/Summary/Keyword: Decision-level fusion


Multi-Frame Face Classification with Decision-Level Fusion based on Photon-Counting Linear Discriminant Analysis

  • Yeom, Seokwon
    • International Journal of Fuzzy Logic and Intelligent Systems
    • /
    • v.14 no.4
    • /
    • pp.332-339
    • /
    • 2014
  • Face classification has wide applications in security and surveillance. However, this technique presents various challenges caused by pose, illumination, and expression changes. Face recognition with long-distance images involves additional challenges, owing to focusing problems and motion blurring. Multiple frames under varying spatial or temporal settings can acquire additional information, which can be used to achieve improved classification performance. This study investigates the effectiveness of multi-frame decision-level fusion with photon-counting linear discriminant analysis. Multiple frames generate multiple scores for each class. The fusion process comprises three stages: score normalization, score validation, and score combination. Candidate scores are selected during the score validation process, after the scores are normalized. The score validation process removes bad scores that can degrade the final output. The selected candidate scores are combined using one of the following fusion rules: maximum, averaging, and majority voting. Degraded facial images are employed to demonstrate the robustness of multi-frame decision-level fusion in harsh environments. Out-of-focus and motion blurring point-spread functions are applied to the test images, to simulate long-distance acquisition. Experimental results with three facial data sets indicate the efficiency of the proposed decision-level fusion scheme.
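The three-stage fusion scheme described above (score normalization, score validation, score combination) can be sketched as follows. This is a minimal illustration, not the paper's implementation: the min-max normalization, the keep-the-best-frames validation heuristic, and the `valid_ratio` parameter are all assumptions made for the example.

```python
import numpy as np

def fuse_scores(scores, rule="average", valid_ratio=0.8):
    """Decision-level fusion of per-frame class scores.

    scores: (n_frames, n_classes) array, higher = better match.
    Returns the index of the winning class.
    """
    # 1. Score normalization: scale each frame's scores to [0, 1].
    lo = scores.min(axis=1, keepdims=True)
    hi = scores.max(axis=1, keepdims=True)
    norm = (scores - lo) / (hi - lo + 1e-12)

    # 2. Score validation: keep only the most confident frames,
    # discarding "bad" scores that could degrade the final output.
    n_keep = max(1, int(valid_ratio * len(norm)))
    keep = np.argsort(norm.max(axis=1))[-n_keep:]
    cand = norm[keep]

    # 3. Score combination by one of the three fusion rules.
    if rule == "maximum":
        fused = cand.max(axis=0)
    elif rule == "average":
        fused = cand.mean(axis=0)
    elif rule == "voting":  # majority voting over per-frame decisions
        fused = np.bincount(cand.argmax(axis=1),
                            minlength=cand.shape[1]).astype(float)
    else:
        raise ValueError(f"unknown rule: {rule}")
    return int(fused.argmax())
```

With clean inputs all three rules typically agree; they diverge mainly when individual frames are degraded, which is the situation the paper targets.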

Bayesian Fusion of Confidence Measures for Confidence Scoring (베이시안 신뢰도 융합을 이용한 신뢰도 측정)

  • 김태윤;고한석
    • The Journal of the Acoustical Society of Korea
    • /
    • v.23 no.5
    • /
    • pp.410-419
    • /
    • 2004
  • In this paper, we propose a method of confidence measure fusion under a Bayesian framework for speech recognition. Centralized and distributed schemes are considered for confidence measure fusion. Centralized fusion is feature-level fusion, which combines the values of the individual confidence scores and makes a final decision. In contrast, distributed fusion is decision-level fusion, which combines the individual decisions made by each confidence measuring method. Optimal Bayesian fusion rules for the centralized and distributed cases are presented. In isolated-word out-of-vocabulary (OOV) rejection experiments, centralized Bayesian fusion shows over 13% relative equal error rate (EER) reduction compared with the individual confidence measure methods, whereas distributed Bayesian fusion shows no significant performance increase.
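The centralized/distributed distinction can be illustrated with a small sketch. The Gaussian confidence model in the centralized rule and all numeric parameters are assumptions for illustration; the distributed rule follows the standard Chair-Varshney weighted-vote form for optimal decision-level fusion, which may differ in detail from the paper's derivation.

```python
import numpy as np

def centralized_fusion(confidences, mu_acc, mu_rej, var):
    """Centralized (feature-level) Bayesian fusion: combine the raw
    confidence values under a naive-Bayes Gaussian model and accept
    when the joint log-likelihood ratio is positive."""
    llr = ((confidences - mu_rej) ** 2
           - (confidences - mu_acc) ** 2) / (2 * var)
    return llr.sum() > 0

def distributed_fusion(decisions, p_detect, p_false):
    """Distributed (decision-level) fusion: each confidence measure first
    makes its own accept (1) / reject (0) decision; votes are weighted
    by each measure's detection and false-alarm rates."""
    d = np.asarray(decisions, dtype=float)
    w1 = np.log(p_detect / p_false)              # weight of an accept vote
    w0 = np.log((1 - p_detect) / (1 - p_false))  # weight of a reject vote
    return (d * w1 + (1 - d) * w0).sum() > 0
```

The centralized rule sees the raw scores and so retains more information, which is consistent with the paper's finding that it outperforms the decision-level variant.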

Combining Feature Fusion and Decision Fusion in Multimodal Biometric Authentication (다중 바이오 인증에서 특징 융합과 결정 융합의 결합)

  • Lee, Kyung-Hee
    • Journal of the Korea Institute of Information Security & Cryptology
    • /
    • v.20 no.5
    • /
    • pp.133-138
    • /
    • 2010
  • We present a new multimodal biometric authentication method that performs both feature-level fusion and decision-level fusion. After generating support vector machines for new features made by integrating face and voice features, the final authentication decision is made by integrating the decisions of the face SVM classifier, the voice SVM classifier, and the integrated-features SVM classifier. We justify our proposal by comparing our method with a traditional one in experiments on the XM2VTS multimodal database. The experiments show that our multilevel fusion algorithm gives a higher recognition rate than the existing schemes.

A Development of Wireless Sensor Networks for Collaborative Sensor Fusion Based Speaker Gender Classification (협동 센서 융합 기반 화자 성별 분류를 위한 무선 센서네트워크 개발)

  • Kwon, Ho-Min
    • Journal of the Institute of Convergence Signal Processing
    • /
    • v.12 no.2
    • /
    • pp.113-118
    • /
    • 2011
  • In this paper, we develop a speaker gender classification technique using collaborative sensor fusion for use in a wireless sensor network. The distributed sensor nodes remove unwanted input data using BER (Band Energy Ratio) based voice activity detection, process only the relevant data, and transmit their hard-labeled decisions to the fusion center, where a global decision fusion is carried out. This offers advantages in power consumption and network resource management. Bayesian sensor fusion and global weighted decision fusion methods are proposed to achieve the gender classification. As the number of sensor nodes varies, the Bayesian sensor fusion yields the best classification accuracy using the optimal operating points of the ROC (Receiver Operating Characteristic) curves. For the weights used in the global decision fusion, the BER and MCL (Mutual Confidence Level) are employed to effectively combine the decisions at the fusion center. The simulation results show that as the number of sensor nodes increases, the classification accuracy improves even further in the low SNR (Signal to Noise Ratio) condition.
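The fusion-center step, where hard labels from the nodes are combined under reliability weights, can be sketched as a weighted vote. This is a generic illustration: the label strings and the idea of passing one scalar weight per node (standing in for a BER- or MCL-derived reliability) are assumptions, not the paper's exact weighting scheme.

```python
def fuse_gender_decisions(decisions, weights):
    """Global weighted decision fusion at the fusion center.

    decisions: hard labels from the sensor nodes, e.g. "male"/"female".
    weights:   one reliability weight per node (e.g. BER- or MCL-based).
    Returns the label with the largest total weight.
    """
    tally = {}
    for label, w in zip(decisions, weights):
        tally[label] = tally.get(label, 0.0) + w
    return max(tally, key=tally.get)
```

Because only hard labels and scalar weights cross the network, each node transmits a few bytes per utterance, which is the power/bandwidth advantage the abstract points to.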

Speech emotion recognition based on genetic algorithm-decision tree fusion of deep and acoustic features

  • Sun, Linhui;Li, Qiu;Fu, Sheng;Li, Pingan
    • ETRI Journal
    • /
    • v.44 no.3
    • /
    • pp.462-475
    • /
    • 2022
  • Although researchers have proposed numerous techniques for speech emotion recognition, its performance remains unsatisfactory in many application scenarios. In this study, we propose a speech emotion recognition model based on a genetic algorithm (GA)-decision tree (DT) fusion of deep and acoustic features. To more comprehensively express speech emotional information, first, frame-level deep and acoustic features are extracted from a speech signal. Next, five kinds of statistics of these features are calculated to obtain utterance-level features. The Fisher feature selection criterion is employed to select high-performance features, removing redundant information. In the feature fusion stage, the GA is used to adaptively search for the best feature fusion weight. Finally, using the fused features, the proposed speech emotion recognition model based on a DT-support vector machine model is realized. Experimental results on the Berlin speech emotion database and the Chinese emotion speech database indicate that the proposed model outperforms an average weight fusion method.
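The GA search for a fusion weight can be sketched with a tiny genetic algorithm over a scalar w in [0, 1]. Everything here is illustrative: in the paper's setting, `fitness(w)` would be the validation accuracy of the downstream classifier on fused features w*deep + (1-w)*acoustic, and the population size, crossover, and mutation choices below are generic defaults, not the authors'.

```python
import numpy as np

rng = np.random.default_rng(0)

def ga_search_weight(fitness, pop_size=20, generations=30, mut=0.1):
    """Tiny GA over a scalar fusion weight w in [0, 1]."""
    pop = rng.random(pop_size)                           # initial population
    for _ in range(generations):
        fit = np.array([fitness(w) for w in pop])
        parents = pop[np.argsort(fit)[-pop_size // 2:]]  # select best half
        # crossover: average each parent with a random partner
        children = (parents + rng.permutation(parents)) / 2
        children += rng.normal(0.0, mut, children.size)  # Gaussian mutation
        pop = np.clip(np.concatenate([parents, children]), 0.0, 1.0)
    return pop[np.argmax([fitness(w) for w in pop])]
```

Keeping the parents each generation makes the search elitist, so the best weight found so far is never lost to mutation.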

Multi-classifier Decision-level Fusion for Face Recognition (다중 분류기의 판정단계 융합에 의한 얼굴인식)

  • Yeom, Seok-Won
    • Journal of the Institute of Electronics Engineers of Korea SP
    • /
    • v.49 no.4
    • /
    • pp.77-84
    • /
    • 2012
  • Face classification has wide applications in intelligent video surveillance, content retrieval, robot vision, and human-machine interfaces. Pose and expression changes and arbitrary illumination are typical problems for face recognition. When the face is captured at a distance, the image quality is often degraded by blurring and noise corruption. This paper investigates the efficacy of multi-classifier decision-level fusion for face classification based on photon-counting linear discriminant analysis with two different cost functions: Euclidean distance and negative normalized correlation. Decision-level fusion comprises three stages: cost normalization, cost validation, and fusion rules. First, the costs are normalized into a uniform range, and then candidate costs are selected during validation. Three fusion rules are employed: the minimum, average, and majority-voting rules. In the experiments, out-of-focus and motion blurs are rendered to simulate the effects of long-distance environments. It is shown that the decision-level fusion scheme provides better results than a single classifier.

A Survey of Fusion Techniques for Multi-spectral Images

  • Achalakul, Tiranee
    • Proceedings of the IEEK Conference
    • /
    • 2002.07b
    • /
    • pp.1244-1247
    • /
    • 2002
  • This paper discusses various algorithms for the fusion of multi-spectral images. These fusion techniques have a wide variety of applications, ranging from hospital pathology to battlefield management. Different algorithms at each fusion level, namely data, feature, and decision, are compared. The PCT-based algorithm, which has the characteristic of data compression, is described. The algorithm is evaluated on a foliated aerial scene, and the fusion result is presented.


Decision Making Algorithm for Adult Spinal Deformity Surgery

  • Kim, Yongjung J.;Hyun, Seung-Jae;Cheh, Gene;Cho, Samuel K.;Rhim, Seung-Chul
    • Journal of Korean Neurosurgical Society
    • /
    • v.59 no.4
    • /
    • pp.327-333
    • /
    • 2016
  • Adult spinal deformity (ASD) is one of the most challenging spinal disorders, associated with a broad range of clinical and radiological presentations. Correct selection of fusion levels in surgical planning for the management of adult spinal deformity is a complex task. Several classification systems and algorithms exist to assist surgeons in determining the appropriate levels to be instrumented. In this study, we describe our new simple decision-making algorithm and selection of fusion level for ASD surgery in terms of adult idiopathic scoliosis vs. degenerative scoliosis.

A Study for Improved Human Action Recognition using Multi-classifiers (비디오 행동 인식을 위하여 다중 판별 결과 융합을 통한 성능 개선에 관한 연구)

  • Kim, Semin;Ro, Yong Man
    • Journal of Broadcast Engineering
    • /
    • v.19 no.2
    • /
    • pp.166-173
    • /
    • 2014
  • Recently, human action recognition has been developed for various broadcasting and video processing applications. Since a video can consist of various scenes, keypoint approaches have attracted more attention than template-based methods for real applications. Keypoint approaches find regions with motion in a video and build 3-dimensional patches around them. Histogram-based descriptors are then computed from the patches, and a classifier based on a machine learning method is applied to detect actions in the video. However, a single classifier has difficulty handling various human actions. To address this problem, approaches using multiple classifiers have been used to detect and recognize objects. Thus, we propose a new human action recognition method using decision-level fusion of a support vector machine and sparse representation. The proposed method extracts keypoint-based descriptors from a video and acquires results from each classifier for human action recognition. Weights acquired in the training stage are then applied to fuse the results from the two classifiers. The experimental results in this paper show better performance than a previous fusion method.

Decision Level Fusion of Multifrequency Polarimetric SAR Data Using Target Decomposition based Features and a Probabilistic Ratio Model (타겟 분해 기반 특징과 확률비 모델을 이용한 다중 주파수 편광 SAR 자료의 결정 수준 융합)

  • Chi, Kwang-Hoon;Park, No-Wook
    • Korean Journal of Remote Sensing
    • /
    • v.23 no.2
    • /
    • pp.89-101
    • /
    • 2007
  • This paper investigates the effects of the fusion of multifrequency (C- and L-band) polarimetric SAR data on land-cover classification. NASA JPL AIRSAR C- and L-band data were used for supervised classification in an agricultural area, simulating the integration of the ALOS PALSAR and Radarsat-2 SAR data that would become available. Several scattering features derived from target decomposition based on eigenvalue/eigenvector analysis were used as input to a support vector machine classifier, and the posterior probabilities for each frequency's SAR data were then integrated by applying a probabilistic ratio model as a decision-level fusion methodology. In the case study, L-band data had sufficient penetration power and showed a larger classification accuracy improvement (about 22%) than C-band data, which did not penetrate enough. When the data from all frequencies were fused for classification, a significant improvement of about 10% in overall classification accuracy was achieved, thanks to increased discrimination capability for each class, compared with the case of L-band Shh data.
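Decision-level fusion of per-frequency posteriors via a probability-ratio rule can be sketched as follows. This shows only the general product-of-ratios idea under a conditional-independence assumption between bands; the function name and inputs are illustrative and not the paper's exact model.

```python
import numpy as np

def fuse_posteriors(post_c, post_l, prior):
    """Probability-ratio fusion of two per-band class posteriors.

    post_c, post_l: P(class | C-band) and P(class | L-band), one entry
    per class; prior: P(class). Assuming the bands are conditionally
    independent given the class, the fused posterior is proportional to
    the prior times each band's posterior-to-prior ratio.
    """
    fused = prior * (post_c / prior) * (post_l / prior)
    return fused / fused.sum()  # renormalize to a distribution
```

When one band discriminates a class well and the other is ambiguous, the ratio form lets the confident band dominate, which matches the reported gain from adding L-band to C-band.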