• Title/Abstract/Keywords: Multi-focus image fusion

Search results: 14 items (processing time: 0.027 s)

Multi-Focus Image Fusion Using Transformation Techniques: A Comparative Analysis

  • Ali Alferaidi
    • International Journal of Computer Science & Network Security, Vol. 23, No. 4, pp. 39-47, 2023
  • This study compares various transformation techniques for multi-focus image fusion. Multi-focus image fusion is a procedure of merging multiple images captured at different focus distances to produce a single composite image with improved sharpness and clarity. The purpose of this research is to compare popular frequency-domain approaches for multi-focus image fusion, such as the Discrete Wavelet Transform (DWT), Stationary Wavelet Transform (SWT), DCT-based Laplacian Pyramid (DCT-LP), Discrete Cosine Harmonic Wavelet Transform (DC-HWT), and Dual-Tree Complex Wavelet Transform (DT-CWT). The objective is to increase the understanding of these transformation techniques and how they can be utilized in conjunction with one another. The analysis evaluates the 10 most crucial parameters and highlights the unique features of each method. The results help determine which transformation technique is best suited for multi-focus image fusion applications. Based on the visual and statistical analysis, DCT-LP is suggested as the most appropriate technique, and the results also provide valuable insights into choosing the right approach.
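The frequency-domain fusion scheme that all of the compared transforms share can be illustrated with a minimal sketch: decompose each registered source image into sub-bands, combine coefficients with a simple rule (here, max absolute value for detail bands, averaging for the approximation band), and invert the transform. The sketch below uses a one-level 2D Haar transform implemented directly in NumPy; it is a generic illustration of the pipeline, not any specific method from the paper.

```python
import numpy as np

def haar2d(img):
    """One-level 2D Haar decomposition into (LL, LH, HL, HH) sub-bands."""
    a = img[0::2, 0::2]; b = img[0::2, 1::2]
    c = img[1::2, 0::2]; d = img[1::2, 1::2]
    ll = (a + b + c + d) / 4
    lh = (a - b + c - d) / 4
    hl = (a + b - c - d) / 4
    hh = (a - b - c + d) / 4
    return ll, lh, hl, hh

def ihaar2d(ll, lh, hl, hh):
    """Exact inverse of haar2d."""
    h, w = ll.shape
    out = np.empty((2 * h, 2 * w))
    out[0::2, 0::2] = ll + lh + hl + hh
    out[0::2, 1::2] = ll - lh + hl - hh
    out[1::2, 0::2] = ll + lh - hl - hh
    out[1::2, 1::2] = ll - lh - hl + hh
    return out

def fuse(img1, img2):
    """Fuse two registered multi-focus images: average the LL band,
    keep the larger-magnitude coefficient in each detail band."""
    s1, s2 = haar2d(img1), haar2d(img2)
    fused = [(s1[0] + s2[0]) / 2]          # approximation band: average
    for d1, d2 in zip(s1[1:], s2[1:]):     # detail bands: max-abs selection
        fused.append(np.where(np.abs(d1) >= np.abs(d2), d1, d2))
    return ihaar2d(*fused)
```

The same skeleton applies to the other transforms in the comparison; only the decomposition/reconstruction pair and the coefficient-selection rule change.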

FUSESHARP: A MULTI-IMAGE FOCUS FUSION METHOD USING DISCRETE WAVELET TRANSFORM AND UNSHARP MASKING

  • GARGI TRIVEDI;RAJESH SANGHAVI
    • Journal of Applied Mathematics & Informatics, Vol. 41, No. 5, pp. 1115-1128, 2023
  • In this paper, a novel hybrid method for multi-focus image fusion is proposed. The method combines the advantages of wavelet-transform-based methods and focus-measure-based methods to achieve an improved fusion result. The input images are first decomposed into different frequency sub-bands using the discrete wavelet transform (DWT). The focus measure of each sub-band is then calculated using the Laplacian of Gaussian (LoG) operator, and the sub-band with the highest focus measure is selected as the focused sub-band. The focused sub-band is sharpened using an unsharp masking filter to preserve the details in the focused part of the image. Finally, the sharpened focused sub-bands from all input images are fused using the maximum-intensity fusion method to preserve the important information from all input images. The proposed method has been evaluated on standard multi-focus image fusion datasets and has shown promising results compared to existing methods.
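The two building blocks of the method, a Laplacian-of-Gaussian focus measure and an unsharp-masking sharpening step, can be sketched as below. This is a simplified spatial-domain illustration under assumed parameters (3×3 Laplacian kernel, unit sharpening gain, sigma = 1), not the authors' full DWT pipeline.

```python
import numpy as np

LAPLACIAN = np.array([[0, 1, 0],
                      [1, -4, 1],
                      [0, 1, 0]], dtype=float)

def convolve2d(img, kernel):
    """Naive same-size 2D convolution with zero padding."""
    kh, kw = kernel.shape
    pad = np.pad(img, ((kh // 2,), (kw // 2,)))
    out = np.zeros_like(img, dtype=float)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            out[i, j] = np.sum(pad[i:i + kh, j:j + kw] * kernel[::-1, ::-1])
    return out

def gaussian_blur(img, sigma=1.0, radius=2):
    """Separable Gaussian blur (assumed sigma/radius)."""
    x = np.arange(-radius, radius + 1, dtype=float)
    g = np.exp(-x ** 2 / (2 * sigma ** 2)); g /= g.sum()
    return convolve2d(convolve2d(img, g[None, :]), g[:, None])

def log_focus_measure(img, sigma=1.0):
    """Focus measure: energy of the Laplacian of the Gaussian-smoothed image."""
    return np.sum(convolve2d(gaussian_blur(img, sigma), LAPLACIAN) ** 2)

def unsharp_mask(img, amount=1.0, sigma=1.0):
    """Sharpen by adding back the high-frequency residual."""
    return img + amount * (img - gaussian_blur(img, sigma))

def fuse_by_focus(img1, img2):
    """Pick the sharper input (per whole image, for brevity) and sharpen it."""
    winner = img1 if log_focus_measure(img1) >= log_focus_measure(img2) else img2
    return unsharp_mask(winner)
```

In the actual method this selection is applied per DWT sub-band rather than per whole image.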

LFFCNN: Multi-focus Image Synthesis in Light Field Camera

  • 김형식;남가빈;김영섭
    • Journal of the Semiconductor & Display Technology, Vol. 22, No. 3, pp. 149-154, 2023
  • This paper presents a novel approach to multi-focus image fusion using light field cameras. The proposed neural network, LFFCNN (Light Field Focus Convolutional Neural Network), is composed of three main modules: feature extraction, feature fusion, and feature reconstruction. Specifically, the feature extraction module incorporates SPP (Spatial Pyramid Pooling) to effectively handle images of various scales. Experimental results demonstrate that the proposed model not only effectively fuses multi-focus images into a single all-in-focus image but also offers more efficient and robust focus fusion than existing methods.


Research on the Multi-Focus Image Fusion Method Based on the Lifting Stationary Wavelet Transform

  • Hu, Kaiqun;Feng, Xin
    • Journal of Information Processing Systems, Vol. 14, No. 5, pp. 1293-1300, 2018
  • To address the disadvantages of multi-scale geometric analysis methods in image fusion, such as loss of definition and complex rule selection, an improved multi-focus image fusion method is proposed. First, an initial fused image is quickly obtained via the lifting stationary wavelet transform, and a simple normalized cut is performed on it to obtain different segmented regions. Then, the original images are subjected to the NSCT transform, and the absolute value of the high-frequency component coefficients in each segmented region is calculated. Finally, the region with the largest absolute value is selected as the post-fusion region, and the fused multi-focus image is obtained by traversing each segmented region. Numerical experiments show that the proposed algorithm not only simplifies the selection of fusion rules but also overcomes loss of definition, and is effective.

Multi-focus Image Fusion using Fully Convolutional Two-stream Network for Visual Sensors

  • Xu, Kaiping;Qin, Zheng;Wang, Guolong;Zhang, Huidi;Huang, Kai;Ye, Shuxiong
    • KSII Transactions on Internet and Information Systems (TIIS), Vol. 12, No. 5, pp. 2253-2272, 2018
  • We propose a deep learning method for multi-focus image fusion. Unlike most existing pixel-level fusion methods, whether in the spatial domain or the transform domain, our method directly learns an end-to-end fully convolutional two-stream network. The framework maps a pair of differently focused images to a clean version through a chain of convolutional layers, a fusion layer, and deconvolutional layers. Our deep fusion model has the advantages of efficiency and robustness, yet demonstrates state-of-the-art fusion quality. We explore different parameter settings to achieve trade-offs between performance and speed. Moreover, experimental results on our training dataset show that our network achieves good performance in both subjective visual perception and objective assessment metrics.

PATN: Polarized Attention based Transformer Network for Multi-focus image fusion

  • Pan Wu;Zhen Hua;Jinjiang Li
    • KSII Transactions on Internet and Information Systems (TIIS), Vol. 17, No. 4, pp. 1234-1257, 2023
  • In this paper, we propose a framework for multi-focus image fusion called PATN. In our approach, by aggregating deep features extracted with a U-shaped Transformer mechanism and shallow features extracted with the PSA module, PATN captures both long-range image texture information and local detail information. Meanwhile, the edge-preserving quality of the fused image is enhanced using a dense residual block containing the Sobel gradient operator, and three loss functions are introduced to retain more source-image texture information. PATN is compared with 17 advanced MFIF methods on three datasets to verify its effectiveness and robustness.

A Novel Automatic Block-based Multi-focus Image Fusion via Genetic Algorithm

  • Yang, Yong;Zheng, Wenjuan;Huang, Shuying
    • KSII Transactions on Internet and Information Systems (TIIS), Vol. 7, No. 7, pp. 1671-1689, 2013
  • The key issue of block-based multi-focus image fusion is to determine the size of the sub-block because different sizes of the sub-block will lead to different fusion effects. To solve this problem, this paper presents a novel genetic algorithm (GA) based multi-focus image fusion method, in which the block size can be automatically found. In our method, the Sum-modified-Laplacian (SML) is selected as an evaluation criterion to measure the clarity of the image sub-block, and the edge information retention is employed to calculate the fitness of each individual. Then, through the selection, crossover and mutation procedures of the GA, we can obtain the optimal solution for the sub-block, which is finally used to fuse the images. Experimental results show that the proposed method outperforms the traditional methods, including the average, gradient pyramid, discrete wavelet transform (DWT), shift invariant DWT (SIDWT) and two existing GA-based methods in terms of both the visual subjective evaluation and the objective evaluation.
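The Sum-modified-Laplacian clarity criterion used to score each sub-block can be sketched as follows; the step size and threshold are assumed defaults, since the abstract does not specify them.

```python
import numpy as np

def sml(block, step=1, threshold=0.0):
    """Sum-modified-Laplacian of an image block: sum over interior pixels of
    |2I(x,y) - I(x-s,y) - I(x+s,y)| + |2I(x,y) - I(x,y-s) - I(x,y+s)|,
    counting only values above a threshold."""
    b = block.astype(float)
    center = b[step:-step, step:-step]
    ml = (np.abs(2 * center - b[:-2 * step, step:-step] - b[2 * step:, step:-step])
          + np.abs(2 * center - b[step:-step, :-2 * step] - b[step:-step, 2 * step:]))
    return ml[ml > threshold].sum()

def pick_sharper(block_a, block_b):
    """Block-selection rule: keep the block with the larger SML."""
    return block_a if sml(block_a) >= sml(block_b) else block_b
```

In the paper, the GA searches over the block size itself; SML then scores the competing sub-blocks at whatever size the GA proposes.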

A Novel Multi-focus Image Fusion Scheme using Nested Genetic Algorithms with "Gifted Genes"

  • 박대철;론넬 아톨레
    • The Journal of the Institute of Internet, Broadcasting and Communication, Vol. 9, No. 1, pp. 75-87, 2009
  • In this paper, we propose a new image fusion approach in which the fusion rule is derived by optimizing an image sharpness function. A genetic algorithm is used to statistically select the optimal blocks from the source images against the sharpness function. We designed and implemented a novel nested genetic algorithm with "gifted genes" found through a barrage of genes generated by the mutation operation. The convergence of the algorithm was compared with a standard GA analytically, experimentally, and statistically using three test functions. The resulting GA was found to be invariant to parameters and population size, and a minimum of 20 individuals proved sufficient for the tests. In the fusion application, each individual in the population is a finite, discrete-valued entity representing an input block. The performance of the proposed scheme in image fusion experiments is characterized by mutual information (MI) as an output quality measure. The proposed method was tested on C=2 input images. The experimental results show that the proposed method is a practical and attractive alternative to current multi-focus image fusion techniques.


A Novel Multi-focus Image Fusion Technique Using Directional Multiresolution Transform

  • 박대철;론넬 아톨레
    • The Journal of the Institute of Internet, Broadcasting and Communication, Vol. 9, No. 4, pp. 59-68, 2009
  • This paper addresses a hybrid multi-focus image fusion technique using the recently introduced curvelet transform construction. Hybridization is achieved by combining the MS fusion rule with a new "replication" method. The proposed technique fuses only the m most salient terms in the spectrum of each decomposition-level image using the MS rule. Faithful to the MSC of the transform set at any scale, orientation, and shift of the image, synthesis is performed by m-term fusion. To evaluate the proposed method, the edge-sensitive objective quality measure proposed by Xydeas and Petrovic was applied. Experimental results show that the proposed technique is a viable alternative to the redundant, shift-invariant Dual-Tree Complex Wavelet Transform. In particular, 50% m-term fusion was confirmed to yield results without any visible quality degradation.


Multi-focus Image Fusion Technique Based on Parzen-windows Estimates

  • ;박대철
    • The Journal of the Institute of Internet, Broadcasting and Communication, Vol. 8, No. 4, pp. 75-88, 2008
  • This paper proposes a multi-focus image fusion technique in the spatial domain based on kernel estimates of the class-conditional probability density functions of input image blocks. The image fusion problem is approached as a classification task that computes the posterior class probability P(w_i | B_ikl) from likelihood density functions estimated from test patterns. For C input images I_i, the proposed method defines class w_i for each image i and, based on the Bayes decision rule, forms the fused image Z(k,l) from a decision map represented by the set of P×Q blocks B_ikl that maximize a discriminant function. The performance of the proposed technique was evaluated using RMSE and mutual information (MI) as output quality measures. Evaluations were performed while varying the kernel width σ, the kernel type, and the block size. The proposed method was tested for C=2 and C=3 and showed good performance.
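The Parzen-window (kernel density) estimate at the heart of the classification step can be sketched as follows. The Gaussian kernel, the scalar block feature, and the uniform priors are illustrative assumptions; the paper estimates the densities from training patterns and varies the kernel type and width σ.

```python
import numpy as np

def parzen_density(x, samples, sigma=1.0):
    """Parzen-window estimate of p(x) from 1-D training samples,
    using a Gaussian kernel of width sigma."""
    k = np.exp(-(x - samples) ** 2 / (2 * sigma ** 2)) / (sigma * np.sqrt(2 * np.pi))
    return k.mean()

def classify_block(feature, class_samples, priors=None):
    """Bayes decision: assign a block feature to the class w_i that maximizes
    P(w_i) * p(feature | w_i), with each density estimated by Parzen windows."""
    if priors is None:
        priors = [1.0 / len(class_samples)] * len(class_samples)
    scores = [p * parzen_density(feature, s)
              for p, s in zip(priors, class_samples)]
    return int(np.argmax(scores))
```

The decision map is then built by classifying every P×Q block and copying each block from the source image whose class wins.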
