• Title/Summary/Keyword: image-level fusion

Hierarchical Clustering Approach of Multisensor Data Fusion: Application of SAR and SPOT-7 Data on Korean Peninsula

  • Lee, Sang-Hoon;Hong, Hyun-Gi
    • Proceedings of the KSRS Conference
    • /
    • 2002.10a
    • /
    • pp.65-65
    • /
    • 2002
  • In remote sensing, images are acquired over the same area by sensors of different spectral ranges (from the visible to the microwave) and/or with different numbers, positions, and widths of spectral bands. These images are generally partially redundant, as they represent the same scene, and partially complementary. For many image-classification applications, the information provided by a single sensor is often incomplete or imprecise, resulting in misclassification. Fusion with redundant data can draw more consistent inferences for the interpretation of the scene and can thereby improve classification accuracy. The common approach to classifying multisensor data as a pixel-level data-fusion scheme is to concatenate the data into one vector as if they were measurements from a single sensor. However, multiband data acquired by a single multispectral sensor or by two or more different sensors are not completely independent, and a certain degree of informative overlap may exist between the observation spaces of the different bands. This dependence may make the data less informative and should be properly modeled in the analysis so that its effect can be eliminated. To model and eliminate the effect of such dependence, this study employs a strategy using self and conditional information variation measures. The self information variation reflects the certainty of the individual bands, while the conditional information variation reflects the degree of dependence between the different bands. One data set may be far less reliable than the others and may even degrade the classification results; such an unreliable data set should be excluded from the analysis. To account for this, the self information variation is used to measure the degree of reliability. A team of positively dependent bands can jointly gather more information than a team of independent ones; when bands are negatively dependent, however, their combined analysis may yield worse information. Using the conditional information variation measure, the multiband data are split into two or more subsets according to the dependence between the bands. Each subset is classified separately, and a decision-level data-fusion scheme is applied to integrate the individual classification results. In this study, a two-level algorithm using a hierarchical clustering procedure is used for unsupervised image classification. The hierarchical clustering algorithm is based on similarity measures between all pairs of candidates considered for merging. In the first level, the image is partitioned into regions, each a set of spatially contiguous pixels, such that no union of adjacent regions is statistically uniform. The regions resulting from this low level are then clustered into a parsimonious number of groups according to their statistical characteristics. The algorithm has been applied to satellite multispectral data and airborne SAR data.
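
The self and conditional information variation measures described above can be approximated with standard entropy quantities. A minimal sketch under that assumption (the function names and the histogram-based estimator are illustrative, not the paper's exact definitions):

```python
import math
from collections import Counter

def entropy(band):
    """Shannon entropy (bits) of a quantized band: a rough proxy for
    the 'self information' used to gauge a band's reliability."""
    n = len(band)
    return -sum(c / n * math.log2(c / n) for c in Counter(band).values())

def mutual_information(a, b):
    """I(A;B) = H(A) + H(B) - H(A,B): a proxy for the dependence
    between two bands; near-zero values suggest independent bands
    that are better classified separately and fused at decision level."""
    return entropy(a) + entropy(b) - entropy(list(zip(a, b)))

b1 = [0, 0, 1, 1, 0, 1]
b2 = [0, 0, 1, 1, 0, 1]   # identical to b1: fully dependent
print(mutual_information(b1, b2))  # equals entropy(b1): maximal dependence
```

Bands whose pairwise dependence exceeds a threshold would be grouped into one subset; the thresholding policy itself is not specified in the abstract.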

Multi-classifier Decision-level Fusion for Face Recognition (다중 분류기의 판정단계 융합에 의한 얼굴인식)

  • Yeom, Seok-Won
    • Journal of the Institute of Electronics Engineers of Korea SP
    • /
    • v.49 no.4
    • /
    • pp.77-84
    • /
    • 2012
  • Face classification has wide applications in intelligent video surveillance, content retrieval, robot vision, and human-machine interfaces. Pose and expression changes and arbitrary illumination are typical problems for face recognition. When the face is captured at a distance, the image quality is often degraded by blurring and noise corruption. This paper investigates the efficacy of multi-classifier decision-level fusion for face classification based on photon-counting linear discriminant analysis with two different cost functions: Euclidean distance and negative normalized correlation. Decision-level fusion comprises three stages: cost normalization, cost validation, and fusion rules. First, the costs are normalized into a uniform range; then, candidate costs are selected during validation. Three fusion rules are employed: the minimum, average, and majority-voting rules. In the experiments, defocus and motion blur are rendered to simulate long-distance environments. It is shown that the decision-level fusion scheme provides better results than a single classifier.
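
The three fusion rules can be sketched as follows; the min-max normalization and the rule implementations are illustrative assumptions, and the cost-validation stage is omitted for brevity:

```python
def normalize(costs):
    """Min-max normalize one classifier's per-class costs to [0, 1]."""
    lo, hi = min(costs), max(costs)
    return [(c - lo) / (hi - lo) if hi > lo else 0.0 for c in costs]

def fuse(cost_lists, rule="average"):
    """Decision-level fusion: each inner list holds one classifier's
    cost per class; return the index of the winning (lowest-cost) class."""
    norm = [normalize(c) for c in cost_lists]
    n_classes = len(norm[0])
    if rule == "min":        # most confident classifier per class
        fused = [min(c[k] for c in norm) for k in range(n_classes)]
    elif rule == "average":  # mean normalized cost per class
        fused = [sum(c[k] for c in norm) / len(norm) for k in range(n_classes)]
    else:                    # majority vote over each classifier's argmin
        votes = [c.index(min(c)) for c in norm]
        return max(set(votes), key=votes.count)
    return fused.index(min(fused))

costs = [[0.1, 0.9], [0.2, 0.8], [0.9, 0.1]]
print(fuse(costs, "majority"))  # 0: two of three classifiers prefer class 0
```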

IR and SAR Sensor Fusion based Target Detection using BMVT-M (BMVT-M을 이용한 IR 및 SAR 융합기반 지상표적 탐지)

  • Lim, Yunji;Kim, Taehun;Kim, Sungho;Song, WooJin;Kim, Kyung-Tae;Kim, Sohyeon
    • Journal of Institute of Control, Robotics and Systems
    • /
    • v.21 no.11
    • /
    • pp.1017-1026
    • /
    • 2015
  • Infrared (IR) target detection is one of the key technologies in Automatic Target Detection/Recognition (ATD/R) for military applications. However, IR sensors have limitations due to weather sensitivity and atmospheric effects, and sensor-fusion research has become an active topic for overcoming them. A SAR sensor is adopted for fusion because SAR is robust to various weather conditions. In this paper, a Boolean Map Visual Theory-Morphology (BMVT-M) method is proposed to detect targets in SAR and IR images. Moreover, we propose an IR/SAR image-registration and decision-level fusion algorithm. Experimental results using OKTAL-SE synthetic images validate the feasibility of sensor fusion-based target detection.
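
As a hedged illustration of the decision-level step only (the BMVT-M detector itself is beyond this sketch, so a naive mean-threshold detector stands in for it), detections can be confirmed where both sensors agree after registration:

```python
def local_peaks(image, thresh):
    """Naive stand-in for a target detector: flag pixels that exceed
    the global image mean by `thresh` (BMVT-M would be used in practice)."""
    flat = [v for row in image for v in row]
    mean = sum(flat) / len(flat)
    return [[1 if v - mean > thresh else 0 for v in row] for row in image]

def and_fuse(det_ir, det_sar):
    """Decision-level fusion: keep a detection only where the
    registered IR and SAR detection maps agree."""
    return [[a & b for a, b in zip(ri, rs)] for ri, rs in zip(det_ir, det_sar)]

ir  = [[10, 10, 90], [10, 10, 10]]   # toy registered IR frame
sar = [[5, 5, 80],  [5, 80, 5]]      # toy registered SAR frame
print(and_fuse(local_peaks(ir, 20), local_peaks(sar, 20)))
# [[0, 0, 1], [0, 0, 0]]: only the co-detected target survives
```

The AND-style rule suppresses single-sensor false alarms (e.g. the SAR-only response above) at the cost of requiring both sensors to fire.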

Image Enhancement Using Adaptive Region-based Histogram Equalization for Multiple Color-Filter Aperture System (다중 컬러필터 조리개 시스템을 위한 적응적 히스토그램 평활화를 이용한 영상 개선)

  • Lee, Eun-Sung;Kang, Won-Seok;Kim, Sang-Jin;Paik, Joon-Ki
    • Journal of the Institute of Electronics Engineers of Korea SP
    • /
    • v.48 no.2
    • /
    • pp.65-73
    • /
    • 2011
  • In this paper, we present a novel digital multifocusing approach using adaptive region-based histogram equalization for the multiple color-filter aperture (MCA) system under an insufficient amount of incoming light. From the image acquired by the MCA system, we can estimate the depth of objects at different distances by measuring the amount of misalignment among the RGB color planes. The estimated depth information is used to obtain multifocused images through region-of-interest (ROI) classification, registration, and fusion. However, the MCA system suffers from a low-exposure problem because of the limited size of its apertures. To overcome this problem, we propose adaptive region-based histogram equalization. Experimental results show that the proposed algorithm can obtain in-focus images in low-light environments.
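
Global histogram equalization, which the paper applies adaptively per region, follows the standard CDF remap; this sketch handles a flat list of 8-bit pixels (a simplification of the per-region version):

```python
def histogram_equalize(pixels, levels=256):
    """Remap gray levels through the normalized CDF so the output
    histogram is approximately uniform (standard HE formula)."""
    n = len(pixels)
    hist = [0] * levels
    for p in pixels:
        hist[p] += 1
    cdf, acc = [], 0
    for h in hist:            # cumulative histogram
        acc += h
        cdf.append(acc)
    cdf_min = min(c for c in cdf if c > 0)
    if n == cdf_min:          # constant image: nothing to stretch
        return list(pixels)
    return [round((cdf[p] - cdf_min) / (n - cdf_min) * (levels - 1))
            for p in pixels]

print(histogram_equalize([52, 52, 60, 60, 60, 179]))
# [0, 0, 191, 191, 191, 255]: low-exposure values stretched over the full range
```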

Sensitivity Analysis of Width Representation for Gait Recognition

  • Hong, Sungjun;Kim, Euntai
    • International Journal of Fuzzy Logic and Intelligent Systems
    • /
    • v.16 no.2
    • /
    • pp.87-94
    • /
    • 2016
  • In this paper, we discuss a gait representation based on the width of the silhouette in terms of its discriminative power and its robustness against noise in the silhouette image for gait recognition. Its sensitivity to silhouette noise is rigorously analyzed using a probabilistic noisy-silhouette model. In addition, we develop a gait recognition system using the width representation and identify subjects using decision-level fusion based on majority voting. Experiments on the CASIA gait dataset A and the SOTON gait database demonstrate the recognition performance with respect to the noise level added to the silhouette image.
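
The width representation itself is straightforward to compute from a binary silhouette; a minimal sketch (row-wise span between the outermost foreground pixels, which is one common formulation rather than the paper's exact definition):

```python
def width_vector(silhouette):
    """Per-row silhouette width: span between the left- and right-most
    foreground pixels, 0 for rows with no foreground."""
    widths = []
    for row in silhouette:
        cols = [i for i, v in enumerate(row) if v]
        widths.append(cols[-1] - cols[0] + 1 if cols else 0)
    return widths

frame = [[0, 1, 1, 0],
         [1, 0, 0, 1],
         [0, 0, 0, 0]]
print(width_vector(frame))  # [2, 4, 0]
```

A sequence of such vectors over a gait cycle forms the feature that the majority-voting fusion then classifies frame by frame.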

Framework for Content-Based Image Identification with Standardized Multiview Features

  • Das, Rik;Thepade, Sudeep;Ghosh, Saurav
    • ETRI Journal
    • /
    • v.38 no.1
    • /
    • pp.174-184
    • /
    • 2016
  • Information identification with image data by means of low-level visual features has evolved into a challenging research domain. Conventional text-based mapping of image data has been gradually replaced by content-based techniques of image identification. Feature extraction from image content plays a crucial role in facilitating content-based detection processes. In this paper, the authors propose four different techniques for multiview feature extraction from images. The efficiency of the extracted feature vectors for content-based image classification and retrieval is evaluated by means of fusion-based and data standardization-based techniques, and the latter is observed to surpass the former. The proposed methods outperform state-of-the-art techniques for content-based image identification, showing an average increase in precision of 17.71% and 22.78% for classification and retrieval, respectively. Three public datasets - Wang; Oliva and Torralba (OT-Scene); and Corel - are used for verification. The research findings are statistically validated by conducting a paired t-test.
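
The data-standardization step that the authors find superior to plain fusion is, in the usual formulation, a per-feature z-score; a sketch under that assumption (the abstract does not specify the exact scaler):

```python
import math

def zscore_standardize(vectors):
    """Standardize each feature (column) to zero mean and unit variance
    so heterogeneous multiview features share one comparable scale."""
    n, d = len(vectors), len(vectors[0])
    means = [sum(v[j] for v in vectors) / n for j in range(d)]
    stds = [math.sqrt(sum((v[j] - means[j]) ** 2 for v in vectors) / n) or 1.0
            for j in range(d)]  # constant columns fall back to std = 1
    return [[(v[j] - means[j]) / stds[j] for j in range(d)] for v in vectors]

feats = [[1.0, 100.0], [3.0, 300.0]]  # two features on very different scales
print(zscore_standardize(feats))      # [[-1.0, -1.0], [1.0, 1.0]]
```

Without such scaling, a large-magnitude feature view would dominate any distance-based classifier or retrieval ranking.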

Multi-scale Diffusion-based Salient Object Detection with Background and Objectness Seeds

  • Yang, Sai;Liu, Fan;Chen, Juan;Xiao, Dibo;Zhu, Hairong
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.12 no.10
    • /
    • pp.4976-4994
    • /
    • 2018
  • Diffusion-based salient object detection methods have shown excellent detection results and efficient computation in recent years. However, current diffusion-based methods still have difficulty detecting objects that appear at image boundaries or at different scales. To address these issues, this paper proposes a multi-scale diffusion-based salient object detection algorithm with background and objectness seeds. Specifically, the image is first over-segmented at several scales. Second, the background and objectness saliency of each superpixel is calculated and fused at each scale. Third, a manifold-ranking method propagates the Bayesian fusion of background and objectness saliency to the whole image. Finally, the pixel-level saliency map is constructed by weighted summation of the saliency values at the different scales. We evaluate our algorithm against 24 state-of-the-art methods on four public benchmark datasets, i.e., ASD, SED1, SED2, and SOD. The results show that the proposed method performs favorably against these 24 approaches in terms of the PR curve and F-measure, and visual comparisons also show that our method highlights salient objects more effectively.
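
The final step, weighted summation of the per-scale saliency maps, can be sketched as follows (the weights and toy maps are illustrative assumptions):

```python
def fuse_scales(saliency_maps, weights):
    """Pixel-wise weighted sum of per-scale saliency maps,
    followed by min-max normalization back to [0, 1]."""
    h, w = len(saliency_maps[0]), len(saliency_maps[0][0])
    fused = [[sum(wt * m[y][x] for wt, m in zip(weights, saliency_maps))
              for x in range(w)] for y in range(h)]
    flat = [v for row in fused for v in row]
    lo, hi = min(flat), max(flat)
    span = (hi - lo) or 1.0          # guard against a constant map
    return [[(v - lo) / span for v in row] for row in fused]

# two scales, one row of two pixels each
maps = [[[0.2, 0.8]],
        [[0.4, 0.6]]]
print(fuse_scales(maps, [0.5, 0.5]))  # [[0.0, 1.0]]
```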

Image Retrieval Using the Fusion of Texture Features (질감특징들의 융합을 이용한 영상검색)

  • 천영덕;서상용;김남철
    • The Journal of Korean Institute of Communications and Information Sciences
    • /
    • v.27 no.3A
    • /
    • pp.258-267
    • /
    • 2002
  • We present an image retrieval method that improves retrieval performance by effectively fusing entropy features in the wavelet domain with wavelet moments. Entropy features are sensitive to local variations of gray level and extract valleys and edges well. These features are effectively applied to content-based image retrieval by fusing them with wavelet moments, which represent texture properties at multiple resolutions. To evaluate the performance of the proposed method, we use the Corel Draw Photo DB. Experimental results show that the proposed method yields 11% better performance on the Corel Draw Photo DB than the wavelet-moments method.
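
One common way to fuse two texture descriptors for retrieval is a weighted sum of their per-descriptor distances; a sketch under that assumption (the equal weighting and L2 metric are illustrative, not the paper's exact scheme):

```python
def l2(a, b):
    """Euclidean distance between two feature vectors."""
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

def fused_distance(query, item, weight=0.5):
    """Weighted sum of per-descriptor distances; each of `query` and
    `item` is a pair (entropy_features, wavelet_moments)."""
    q_ent, q_mom = query
    i_ent, i_mom = item
    return weight * l2(q_ent, i_ent) + (1 - weight) * l2(q_mom, i_mom)

q  = ([1.0, 0.0], [0.2, 0.2])
db = [([1.0, 0.0], [0.2, 0.2]),   # identical to the query
      ([0.0, 1.0], [0.9, 0.1])]
ranked = sorted(range(len(db)), key=lambda i: fused_distance(q, db[i]))
print(ranked)  # [0, 1]: the identical item ranks first
```

In practice each descriptor's distances would first be normalized so neither feature family dominates the fused score.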

A Novel Approach to Enhance Dual-Energy X-Ray Images Using Region of Interest and Discrete Wavelet Transform

  • Ullah, Burhan;Khan, Aurangzeb;Fahad, Muhammad;Alam, Mahmood;Noor, Allah;Saleem, Umar;Kamran, Muhammad
    • Journal of Information Processing Systems
    • /
    • v.18 no.3
    • /
    • pp.319-331
    • /
    • 2022
  • Examining an X-ray image remains a challenging task. In this work, we suggest a practical and novel algorithm based on image fusion to address issues such as background noise, blurriness, and poor sharpness that curb the quality of dual-energy X-ray images. The current technology for examining bags and baggage is X-ray scanning; however, it yields blurred, low-contrast images. This paper aims to improve the quality of X-ray images for a clearer view of illegitimate or volatile substances. A dataset of 40 images was taken for the experiment, but for clarity, the results of only 13 images are shown. The results were evaluated using the MSE and PSNR metrics: relative to single X-ray images, the average PSNR of the proposed system increased by 19.3% and the MSE decreased by 17.3%. The results show that the proposed framework will help discern threats and improve the entire scanning process.
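
A one-level Haar DWT fusion along one dimension illustrates the usual wavelet fusion rule (average the approximation coefficients, keep the stronger detail coefficient); this is a generic sketch of the technique, not the paper's exact pipeline:

```python
def haar_dwt(x):
    """One-level Haar transform: approximation and detail coefficients."""
    a = [(x[2 * i] + x[2 * i + 1]) / 2 for i in range(len(x) // 2)]
    d = [(x[2 * i] - x[2 * i + 1]) / 2 for i in range(len(x) // 2)]
    return a, d

def haar_idwt(a, d):
    """Inverse one-level Haar transform."""
    out = []
    for ai, di in zip(a, d):
        out += [ai + di, ai - di]
    return out

def fuse_dwt(x, y):
    """DWT fusion: average the approximations (smooth content) and
    keep the larger-magnitude detail (edges) from either input."""
    ax, dx = haar_dwt(x)
    ay, dy = haar_dwt(y)
    a = [(p + q) / 2 for p, q in zip(ax, ay)]
    d = [p if abs(p) >= abs(q) else q for p, q in zip(dx, dy)]
    return haar_idwt(a, d)

print(fuse_dwt([1, 3, 2, 2], [1, 3, 2, 2]))  # [1.0, 3.0, 2.0, 2.0]
```

Fusing a signal with itself reconstructs it exactly, a quick sanity check that the transform pair is lossless.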

Image segmentation by fusing multiple images obtained under different illumination conditions (조명조건이 다른 다수영상의 융합을 통한 영상의 분할기법)

  • Chun, Yoon-San;Hahn, Hern-Soo
    • Journal of Institute of Control, Robotics and Systems
    • /
    • v.1 no.2
    • /
    • pp.105-111
    • /
    • 1995
  • This paper proposes a segmentation algorithm using the gray-level discontinuity and surface reflectance ratio of input images obtained under different illumination conditions. Each image is divided into a number of subregions based on thresholds, which are determined from the histogram of the fused image obtained by ANDing the multiple input images. The subregions are projected onto the eigenspace whose bases are the major eigenvectors of the image matrix, and the points in the eigenspace are classified into two clusters. Images associated with the larger cluster are fused by a revised ANDing to form a combined edge image. Missing edges are detected using the surface reflectance ratio and chain codes. The proposed algorithm obtains more accurate edge information and allows the environment to be recognized more efficiently under various illumination conditions.
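
The "ANDing" of gray-level input images used to build the fusion histogram can be read as a pixel-wise minimum; a sketch under that assumption (the paper's revised ANDing may differ):

```python
def and_fuse_images(images):
    """Pixel-wise minimum across images taken under different
    illuminations: a gray-level analogue of logical AND, keeping
    only intensity that is present in every image."""
    h, w = len(images[0]), len(images[0][0])
    return [[min(img[y][x] for img in images) for x in range(w)]
            for y in range(h)]

imgs = [[[200, 40], [120, 90]],   # same scene under two illuminations
        [[180, 60], [30, 90]]]
print(and_fuse_images(imgs))  # [[180, 40], [30, 90]]
```

The histogram of this fused image would then supply the thresholds that split each input image into subregions.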
