• Title/Summary/Keyword: Multiscale fusion

Perceptual Fusion of Infrared and Visible Image through Variational Multiscale with Guide Filtering

  • Feng, Xin; Hu, Kaiqun
    • Journal of Information Processing Systems / v.15 no.6 / pp.1296-1305 / 2019
  • To address the poor noise suppression and frequent loss of edge contours and detail in current fusion methods, an infrared and visible light image fusion method based on variational multiscale decomposition is proposed. First, each source image is decomposed by variational multiscale decomposition into a texture component and a structural component. The texture components are fused using a guided filter, while the structural components are fused with weights derived from a combined measure of phase consistency, sharpness, and brightness. Finally, the fused texture and structural components are added to obtain the final fused image. Experimental results show that the proposed method exhibits strong noise robustness and achieves better fusion quality.
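
A minimal sketch of the structure/texture fusion flow described in this abstract, assuming a Gaussian-blur stand-in for the variational decomposition and a plain gradient-energy weight in place of the phase-consistency/sharpness/brightness measure; the function names are illustrative, not the authors' implementation.

```python
import numpy as np
from scipy.ndimage import gaussian_filter, uniform_filter

def decompose(img, sigma=3.0):
    structure = gaussian_filter(img, sigma)   # smooth structural component
    return structure, img - structure         # residual texture component

def guided_filter(guide, src, r=8, eps=1e-3):
    # Classic box-filter guided filter (He et al.), used here for the texture layers.
    size = 2 * r + 1
    mean_i, mean_p = uniform_filter(guide, size), uniform_filter(src, size)
    corr_ip, corr_ii = uniform_filter(guide * src, size), uniform_filter(guide * guide, size)
    a = (corr_ip - mean_i * mean_p) / (corr_ii - mean_i ** 2 + eps)
    b = mean_p - a * mean_i
    return uniform_filter(a, size) * guide + uniform_filter(b, size)

def fuse(ir, vis):
    s_ir, t_ir = decompose(ir)
    s_vis, t_vis = decompose(vis)
    # Texture fusion: keep the texture with the stronger guided-filter response.
    g_ir, g_vis = guided_filter(ir, t_ir), guided_filter(vis, t_vis)
    texture = np.where(np.abs(g_ir) >= np.abs(g_vis), t_ir, t_vis)
    # Structure fusion: weights from local gradient energy (a crude sharpness proxy).
    gy_ir, gx_ir = np.gradient(s_ir)
    gy_vis, gx_vis = np.gradient(s_vis)
    w_ir = uniform_filter(np.abs(gy_ir) + np.abs(gx_ir), 9)
    w_vis = uniform_filter(np.abs(gy_vis) + np.abs(gx_vis), 9)
    structure = (w_ir * s_ir + w_vis * s_vis) / (w_ir + w_vis + 1e-8)
    return structure + texture
```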

Multiscale self-coordination of bidimensional empirical mode decomposition in image fusion

  • An, Feng-Ping; Zhou, Xian-Wei; Lin, Da-Chao
    • KSII Transactions on Internet and Information Systems (TIIS) / v.9 no.4 / pp.1441-1456 / 2015
  • The highly adaptive bidimensional empirical mode decomposition (BEMD) algorithm is better suited to multi-image fusion than traditional fusion methods. However, its advantages are limited by the end-effects problem, the multiscale integration problem, and the differing numbers of intrinsic mode functions produced when decomposing multiple images. This study proposes a multiscale self-coordination BEMD algorithm to address these problems. The algorithm extends the boundary feature information outward using a support vector machine with strong generalization ability, thereby overcoming the end-effects problem associated with conventional mirror-extension data processing in BEMD. Coordinating the extreme points of the source images helps solve the multiscale information fusion problem. Results show that the proposed method outperforms wavelet- and NSCT-based methods in retaining the characteristics of the source image information and the details of mutation information inherited from the source images, and in significantly improving the signal-to-noise ratio.
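
The boundary-extension step can be illustrated with a small sketch: conventional mirror extension via numpy padding, and a 1-D learning-based extension using scikit-learn's SVR as an assumed stand-in for the SVM extrapolation mentioned in the abstract.

```python
import numpy as np
from sklearn.svm import SVR

def mirror_extend(img, pad):
    # Conventional approach: reflect the image about its borders before BEMD.
    return np.pad(img, pad, mode="reflect")

def svr_extend_row(row, pad, window=16):
    # Illustrative 1-D version: fit a regressor on samples near each end and
    # extrapolate 'pad' values outward to suppress end effects.
    x = np.arange(len(row), dtype=float).reshape(-1, 1)
    left = SVR(kernel="rbf").fit(x[:window], row[:window])
    right = SVR(kernel="rbf").fit(x[-window:], row[-window:])
    xl = np.arange(-pad, 0, dtype=float).reshape(-1, 1)
    xr = np.arange(len(row), len(row) + pad, dtype=float).reshape(-1, 1)
    return np.concatenate([left.predict(xl), row, right.predict(xr)])
```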

Attention-based for Multiscale Fusion Underwater Image Enhancement

  • Huang, Zhixiong; Li, Jinjiang; Hua, Zhen
    • KSII Transactions on Internet and Information Systems (TIIS) / v.16 no.2 / pp.544-564 / 2022
  • Underwater images often suffer from color distortion, blurring, and low contrast because the propagation of light underwater is affected by two processes: absorption and scattering. To cope with the poor quality of underwater images, this paper proposes a multiscale fusion underwater image enhancement method based on a channel attention mechanism and the local binary pattern (LBP). The network consists of three modules: feature aggregation, image reconstruction, and LBP enhancement. The feature aggregation module aggregates feature information at different scales of the image, and the image reconstruction module restores the output features to a high-quality underwater image. A channel attention mechanism is also introduced so that the network attends more to channels containing important information, and detail information is preserved by superimposing it on the feature information in real time. Experimental results demonstrate that the proposed method produces results with correct colors and complete details and outperforms existing methods on quantitative metrics.
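
A generic squeeze-and-excitation style channel attention block, sketched in plain numpy to illustrate the mechanism the abstract names; the paper's actual layers and learned weights are not reproduced here.

```python
import numpy as np

def channel_attention(feat, w1, w2):
    # feat: (C, H, W) feature map; w1: (C//r, C) and w2: (C, C//r) are learned weights.
    squeeze = feat.mean(axis=(1, 2))                # global average pooling -> (C,)
    hidden = np.maximum(w1 @ squeeze, 0.0)          # bottleneck FC + ReLU
    scale = 1.0 / (1.0 + np.exp(-(w2 @ hidden)))    # FC + sigmoid -> per-channel weights
    return feat * scale[:, None, None]              # reweight informative channels

# Usage with random weights and reduction ratio r = 4.
C, H, W, r = 16, 32, 32, 4
feat = np.random.rand(C, H, W).astype(np.float32)
w1 = 0.1 * np.random.randn(C // r, C).astype(np.float32)
w2 = 0.1 * np.random.randn(C, C // r).astype(np.float32)
out = channel_attention(feat, w1, w2)
```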

A Novel Multifocus Image Fusion Algorithm Based on Nonsubsampled Contourlet Transform

  • Liu, Cuiyin; Cheng, Peng; Chen, Shu-Qing; Wang, Cuiwei; Xiang, Fenghong
    • KSII Transactions on Internet and Information Systems (TIIS) / v.7 no.3 / pp.539-557 / 2013
  • A novel multifocus image fusion algorithm based on the nonsubsampled contourlet transform (NSCT) is proposed in this paper. To retain the focusing properties and visual information of the source images in the fused image while remaining sensitive to human visual perception, a local multidirection variance (LEOV) fusion rule is proposed for the lowpass subband coefficients. To introduce more visual saliency, a modified local contrast is defined. In addition, based on the distribution of the highpass subband coefficients, a direction vector is proposed to constrain the modified local contrast and construct a new fusion rule for highpass subband coefficient selection. The NSCT is a flexible multiscale, multidirectional, and shift-invariant tool for image decomposition that can be implemented via the à trous algorithm. The proposed NSCT-based fusion algorithm not only prevents artifacts and errors from being introduced into the fused image, but also eliminates the 'block effect' and 'frequency aliasing' phenomena. Experimental results show that the proposed method achieves better contrast and clarity than wavelet-based and contourlet-based fusion methods.
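
The two fusion rules can be sketched on generic subband arrays, assuming the NSCT decomposition comes from an external library; local variance stands in for the LEOV measure and a simple normalized contrast for the modified local contrast.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def fuse_lowpass(a, b, win=7):
    # Keep the coefficient whose local variance (activity) is larger.
    var_a = uniform_filter(a * a, win) - uniform_filter(a, win) ** 2
    var_b = uniform_filter(b * b, win) - uniform_filter(b, win) ** 2
    return np.where(var_a >= var_b, a, b)

def fuse_highpass(a, b, low, win=7, eps=1e-8):
    # Keep the coefficient with the larger local contrast: highpass energy
    # normalized by the lowpass background.
    contrast_a = uniform_filter(np.abs(a), win) / (np.abs(low) + eps)
    contrast_b = uniform_filter(np.abs(b), win) / (np.abs(low) + eps)
    return np.where(contrast_a >= contrast_b, a, b)
```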

Temperature thread multiscale finite element simulation of selective laser melting for the evaluation of process

  • Lee, Kang-Hyun; Yun, Gun Jin
    • Advances in Aircraft and Spacecraft Science / v.8 no.1 / pp.31-51 / 2021
  • Selective laser melting (SLM), one of the most widely used powder bed fusion (PBF) additive manufacturing (AM) technologies, enables the fabrication of customized metallic parts with complex geometry in a layer-by-layer fashion. However, SLM inherently poses several problems, such as discontinuities in the molten track and steep temperature gradients that result in a high degree of residual stress. To avoid such defects, this study proposes a temperature-thread multiscale model of SLM for evaluating the process at different scales. In the microscale melt pool analysis, the laser beam parameters were evaluated based on the predicted melt pool morphology to check for lack-of-fusion or keyhole defects. The microscale results were then used to build an equivalent body heat flux model that yields the residual stress distribution and part distortions at the macroscale (part level). To identify the source of uneven heat dissipation, a liquid lifetime contour at the macroscale was investigated. The predicted distortion was also experimentally validated, showing good agreement with the measurements.
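
As a toy illustration of the heat conduction side of such a model, the sketch below solves 1-D transient conduction under a short surface heat pulse with an explicit finite-difference scheme; the material values are generic placeholders, not the paper's calibrated SLM parameters.

```python
import numpy as np

k, rho, cp = 24.0, 7800.0, 650.0      # W/m-K, kg/m^3, J/kg-K (placeholder values)
alpha = k / (rho * cp)                # thermal diffusivity
L, n = 1e-3, 101                      # 1 mm depth, grid points
dx = L / (n - 1)
dt = 0.4 * dx ** 2 / alpha            # stable explicit time step
q, pulse = 1e7, 2e-4                  # surface flux (W/m^2) applied for 0.2 ms

T, t, peak = np.full(n, 300.0), 0.0, 300.0
while t < 1e-3:
    Tn = T.copy()
    Tn[1:-1] = T[1:-1] + alpha * dt / dx ** 2 * (T[2:] - 2 * T[1:-1] + T[:-2])
    Tn[0] = Tn[1] + (q if t < pulse else 0.0) * dx / k   # flux boundary at the surface
    Tn[-1] = 300.0                                       # far boundary held at ambient
    T, t, peak = Tn, t + dt, max(peak, Tn[0])
print(f"peak surface temperature ~ {peak:.0f} K")
```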

A multisource image fusion method for multimodal pig-body feature detection

  • Zhong, Zhen; Wang, Minjuan; Gao, Wanlin
    • KSII Transactions on Internet and Information Systems (TIIS) / v.14 no.11 / pp.4395-4412 / 2020
  • Multisource image fusion has become an active topic in recent years owing to its higher segmentation rate. To enhance the accuracy of multimodal pig-body feature segmentation, a multisource image fusion method was employed. However, conventional multisource image fusion methods cannot extract superior contrast and abundant detail in the fused image. To better segment the shape feature and detect the temperature feature, a new multisource image fusion method, named NSST-GF-IPCNN, is presented. First, the multisource images are decomposed into a range of multiscale and multidirectional subbands by the Nonsubsampled Shearlet Transform (NSST). Then, to better describe fine-scale texture and edge information, an even-symmetric Gabor filter and an Improved Pulse Coupled Neural Network (IPCNN) are used to fuse the low- and high-frequency subbands, respectively. Next, the fused coefficients are reconstructed into a fusion image using the inverse NSST. Finally, the shape feature is extracted using an automatic thresholding algorithm and refined by morphological operations, and the highest pig-body temperature is obtained from the segmentation results. Experiments revealed that the presented fusion algorithm achieved a 2.102-4.066% higher average accuracy rate than traditional algorithms, along with improved efficiency.
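
The even-symmetric Gabor filter named above can be sketched directly; the parameter values here are illustrative defaults rather than those used in the paper.

```python
import numpy as np

def even_gabor_kernel(size=15, sigma=3.0, theta=0.0, wavelength=6.0, gamma=0.5):
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)     # rotate coordinates to orientation theta
    yr = -x * np.sin(theta) + y * np.cos(theta)
    envelope = np.exp(-(xr ** 2 + gamma ** 2 * yr ** 2) / (2 * sigma ** 2))
    carrier = np.cos(2 * np.pi * xr / wavelength)  # even (cosine) phase
    kernel = envelope * carrier
    return kernel - kernel.mean()                  # zero mean: no response on flat regions

# Filter bank at four orientations, e.g. for low-frequency activity measures.
bank = [even_gabor_kernel(theta=t) for t in np.linspace(0, np.pi, 4, endpoint=False)]
```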

Face Recognition using Contourlet Transform and PCA

  • Song, Chang-Kyu; Kwon, Seok-Young; Chun, Myung-Geun
    • Journal of the Korean Institute of Intelligent Systems / v.17 no.3 / pp.403-409 / 2007
  • The contourlet transform is an extension of the wavelet transform to two dimensions using multiscale and directional filter banks. It not only retains the multiscale and time-frequency localization properties of wavelets but also provides a high degree of directionality. In this paper, we propose a face recognition system based on fusion methods using the contourlet transform and PCA. After decomposing a face image into directional subband images with the contourlet transform, features are obtained in each subband by PCA. Finally, face recognition is performed by a fusion technique that effectively combines the similarities calculated in each local subband. To show the effectiveness of the proposed method, we performed experiments on the ORL and CBNU datasets and obtained better recognition performance than conventional methods.
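
A minimal sketch of per-subband PCA matching with score-level fusion, assuming the subband images have already been extracted and flattened; the cosine similarity and equal-weight fusion are illustrative choices.

```python
import numpy as np

def pca_fit(X, n_components):
    # X: (n_samples, n_features) flattened subband features for one subband.
    mean = X.mean(axis=0)
    _, _, Vt = np.linalg.svd(X - mean, full_matrices=False)
    return mean, Vt[:n_components]            # projection basis (n_components, n_features)

def cosine(a, b):
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12)

def fused_similarity(probe_subbands, gallery_subbands, models):
    # Project each subband with its own PCA model and average the similarities.
    scores = [cosine(basis @ (p - mean), basis @ (g - mean))
              for p, g, (mean, basis) in zip(probe_subbands, gallery_subbands, models)]
    return float(np.mean(scores))
```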

Bayesian Texture Segmentation Using Multi-layer Perceptron and Markov Random Field Model

  • Kim, Tae-Hyung; Eom, Il-Kyu; Kim, Yoo-Shin
    • Journal of the Institute of Electronics Engineers of Korea SP / v.44 no.1 / pp.40-48 / 2007
  • This paper presents a novel texture segmentation method using multilayer perceptron (MLP) networks and Markov random fields (MRFs) in a multiscale Bayesian framework. Multiscale wavelet coefficients are used as input to the neural networks, and the network output is modeled as a posterior probability. Texture classification at each scale is performed using the posterior probabilities from the MLP networks and MAP (maximum a posteriori) classification. Then, to obtain an improved segmentation result at the finest scale, the proposed method fuses the multiscale MAP classifications sequentially from coarse to fine scales. This is done by computing the MAP classification given the classification at one scale and a priori contextual information extracted from the adjacent coarser-scale classification. In this fusion process, an MRF prior distribution and a Gibbs sampler are used, where the MRF model serves as the smoothness constraint and the Gibbs sampler acts as the MAP classifier. The proposed method shows better performance than texture segmentation using the hidden Markov tree (HMT) model and HMTseg.
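
A toy sketch of one coarse-to-fine fusion sweep: per-pixel class posteriors (e.g., from an MLP) are combined with a Potts-style smoothness prior seeded by the coarser-scale labels; the single-sweep update, beta value, and 4-neighborhood are simplifying assumptions rather than the paper's Gibbs sampler.

```python
import numpy as np

def map_fuse(posteriors, coarse_labels, beta=1.5):
    # posteriors: (H, W, K) class probabilities at the fine scale;
    # coarse_labels: (H, W) labels propagated from the coarser scale.
    H, W, K = posteriors.shape
    padded = np.pad(coarse_labels, 1, mode="edge")
    energy = -np.log(posteriors + 1e-12)           # data term
    for i in range(H):
        for j in range(W):
            neighbors = (padded[i, j + 1], padded[i + 2, j + 1],
                         padded[i + 1, j], padded[i + 1, j + 2])
            for k in range(K):
                # Potts prior: penalize disagreement with the 4-neighborhood.
                energy[i, j, k] += beta * sum(n != k for n in neighbors)
    return energy.argmin(axis=2)                   # MAP label at each pixel
```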

Vehicle Image Recognition Using Deep Convolution Neural Network and Compressed Dictionary Learning

  • Zhou, Yanyan
    • Journal of Information Processing Systems / v.17 no.2 / pp.411-425 / 2021
  • In this paper, a vehicle recognition algorithm based on a deep convolutional neural network and a compressed dictionary is proposed. First, the network structure for fine-grained vehicle recognition based on a convolutional neural network is introduced. Then, a vehicle recognition system based on a multiscale pyramid convolutional neural network is constructed, in which an adaptive fusion method adjusts each network's share of the overall multiscale output according to the recognition accuracy of that single network. Next, compressed dictionary learning and data dimension reduction are carried out using an effective block-structure method combined with a very sparse random projection matrix, which reduces the computational cost caused by high-dimensional features and shortens the dictionary learning time. Finally, a sparse representation classification method is used to recognize the vehicle type. Experimental results show that the detection performance of the proposed algorithm is stable in sunny, cloudy, and rainy weather and that it adapts well to typical application scenarios such as occlusion and blurring, with an average recognition rate above 95%.
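
The very sparse random projection step can be sketched on its own; the Li-et-al-style ±sqrt(s) construction below and the feature dimensions are illustrative, not the paper's exact configuration.

```python
import numpy as np

def very_sparse_projection(d_in, d_out, s=None, seed=0):
    # Entries take values +sqrt(s), 0, -sqrt(s) with probabilities 1/(2s), 1-1/s, 1/(2s).
    rng = np.random.default_rng(seed)
    s = s or int(np.sqrt(d_in))                        # common "very sparse" choice
    probs = [1 / (2 * s), 1 - 1 / s, 1 / (2 * s)]
    R = rng.choice([np.sqrt(s), 0.0, -np.sqrt(s)], size=(d_out, d_in), p=probs)
    return R / np.sqrt(d_out)                          # scale to roughly preserve distances

features = np.random.rand(100, 4096)                   # e.g. 100 samples of 4096-D CNN features
R = very_sparse_projection(4096, 256)
reduced = features @ R.T                               # (100, 256) low-dimensional features
```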

Thermo-mechanical damage of tungsten surfaces exposed to rapid transient plasma heat loads

  • Crosby, Tamer; Ghoniem, Nasr M.
    • Interaction and Multiscale Mechanics / v.4 no.3 / pp.207-217 / 2011
  • International efforts have recently focused on the development of tungsten surfaces that can intercept energetic ionized and neutral atoms and heat fluxes in the divertor region of magnetic fusion confinement devices. The combination of transient heating and local swelling due to implanted helium and hydrogen atoms has been experimentally shown to lead to severe surface and sub-surface damage. We present here a computational model to determine the relationship between the thermo-mechanical loading conditions and the onset of damage and failure of tungsten surfaces. The model is based on thermo-elasticity coupled with a grain boundary damage model that includes contact cohesive elements for grain boundary sliding and fracture. This mechanics model is further coupled with a transient heat conduction model for the temperature distributions following rapid thermal pulses. Results of the computational model are compared to experiments on tungsten bombarded with energetic helium and deuterium particle fluxes.
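
For a sense of scale, a back-of-the-envelope thermo-elastic estimate of the stress in a constrained tungsten surface layer under a rapid temperature rise is sketched below; the property values are nominal literature figures, not those used in the paper's model.

```python
E = 400e9        # Young's modulus of tungsten, Pa
nu = 0.28        # Poisson's ratio
alpha = 4.5e-6   # coefficient of thermal expansion, 1/K
dT = 800.0       # example surface temperature rise during a transient, K

# Biaxially constrained surface layer: sigma = E * alpha * dT / (1 - nu)
sigma = E * alpha * dT / (1 - nu)
print(f"thermal stress ~ {sigma / 1e6:.0f} MPa")   # ~2000 MPa, well above typical yield strength
```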