• Title/Abstract/Keyword: Multi-Feature Fusion

87 search results

MSFM: Multi-view Semantic Feature Fusion Model for Chinese Named Entity Recognition

  • Liu, Jingxin; Cheng, Jieren; Peng, Xin; Zhao, Zeli; Tang, Xiangyan; Sheng, Victor S.
    • KSII Transactions on Internet and Information Systems (TIIS), Vol. 16, No. 6, pp. 1833-1848, 2022
  • Named entity recognition (NER) is an important basic task in the field of Natural Language Processing (NLP). Recently, deep learning approaches that extract word-segmentation or character features have proved effective for Chinese Named Entity Recognition (CNER). However, because this style of feature extraction focuses on only some of the available features, it misses textual information from other perspectives and dimensions, so the model cannot fully capture semantic features. To tackle this problem, we propose a novel Multi-view Semantic Feature Fusion Model (MSFM). The proposed model consists of two core components: a Multi-view Semantic Feature Fusion Embedding Module (MFEM) and a Multi-head Self-Attention Mechanism Module (MSAM). Specifically, the MFEM extracts character features, word-boundary features, radical features, and pinyin features of Chinese characters. The acquired character-shape, character-sound, and character-meaning features are fused to enrich the semantic information of Chinese characters at different granularities. Moreover, the MSAM captures the dependencies between characters in a multi-dimensional subspace to better model the semantic features of the context. Extensive experimental results on four benchmark datasets show that our method improves the overall performance of the CNER model.
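A minimal PyTorch sketch of the embed-fuse-attend idea this abstract describes, assuming illustrative vocabulary sizes and dimensions; the module and parameter names here are hypothetical, not the authors' implementation:

```python
import torch
import torch.nn as nn

class MultiViewEmbedding(nn.Module):
    """Fuse several per-character embedding 'views' into one vector."""
    def __init__(self, n_chars, n_radicals, n_pinyin, dim=64):
        super().__init__()
        self.char = nn.Embedding(n_chars, dim)
        self.radical = nn.Embedding(n_radicals, dim)
        self.pinyin = nn.Embedding(n_pinyin, dim)
        self.boundary = nn.Embedding(4, dim)  # e.g. B/M/E/S word-boundary tags
        self.proj = nn.Linear(4 * dim, dim)   # fuse the four views

    def forward(self, chars, radicals, pinyin, boundary):
        views = torch.cat([self.char(chars), self.radical(radicals),
                           self.pinyin(pinyin), self.boundary(boundary)], dim=-1)
        return self.proj(views)

# Multi-head self-attention over the fused character sequence.
fuse = MultiViewEmbedding(n_chars=6000, n_radicals=300, n_pinyin=500)
attn = nn.MultiheadAttention(embed_dim=64, num_heads=8, batch_first=True)
ids = torch.zeros(2, 10, dtype=torch.long)      # dummy index tensors
x = fuse(ids, ids, ids, ids)
out, _ = attn(x, x, x)                          # (2, 10, 64) context-aware features
```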

Convolutional Neural Network Based Multi-feature Fusion for Non-rigid 3D Model Retrieval

  • Zeng, Hui; Liu, Yanrong; Li, Siqi; Che, JianYong; Wang, Xiuqing
    • Journal of Information Processing Systems, Vol. 14, No. 1, pp. 176-190, 2018
  • This paper presents a novel convolutional neural network based multi-feature fusion learning method for non-rigid 3D model retrieval, which exploits the discriminative information of the heat kernel signature (HKS) descriptor and the wave kernel signature (WKS) descriptor. First, we compute the 2D shape distributions of the two kinds of descriptors to represent the 3D model and use them as the input to the networks. Then we construct two convolutional neural networks, one for the HKS distribution and one for the WKS distribution, and connect them with a multi-feature fusion layer. The fusion layer not only exploits more discriminative characteristics of the two descriptors but also captures the complementary, correlated information between them. Furthermore, to further improve the descriptive ability, a cross-connected layer is built to combine low-level features with high-level features. Extensive experiments have validated the effectiveness of the designed multi-feature fusion learning method.
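A rough PyTorch sketch of such a two-branch fusion network, with a fusion layer joining the HKS and WKS branches and a cross-connection joining low- and high-level features; all layer shapes are illustrative assumptions, not the paper's architecture:

```python
import torch
import torch.nn as nn

class Branch(nn.Module):
    """One CNN per descriptor distribution, with a cross-connection."""
    def __init__(self):
        super().__init__()
        self.low = nn.Sequential(nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
                                 nn.MaxPool2d(2))
        self.high = nn.Sequential(nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
                                  nn.AdaptiveAvgPool2d(1))

    def forward(self, x):
        low = self.low(x)                      # low-level feature map
        high = self.high(low).flatten(1)       # high-level descriptor
        low_vec = low.mean(dim=(2, 3))         # pooled low-level features
        return torch.cat([low_vec, high], 1)   # cross-connected feature vector

hks_net, wks_net = Branch(), Branch()
fusion = nn.Linear(2 * (16 + 32), 128)         # multi-feature fusion layer
hks, wks = torch.randn(4, 1, 32, 32), torch.randn(4, 1, 32, 32)
fused = fusion(torch.cat([hks_net(hks), wks_net(wks)], dim=1))
```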

An Efficient Monocular Depth Prediction Network Using Coordinate Attention and Feature Fusion

  • Xu, Huihui; Li, Fei
    • Journal of Information Processing Systems, Vol. 18, No. 6, pp. 794-802, 2022
  • The recovery of reasonable depth information from different scenes is a popular topic in the field of computer vision. To generate depth maps with better detail, we present an efficacious monocular depth prediction framework with coordinate attention and feature fusion. Specifically, the proposed framework contains attention, multi-scale, and feature fusion modules. The attention module refines features with coordinate attention to enhance the prediction, whereas the multi-scale module integrates useful low- and high-level contextual features at higher resolution. Moreover, we developed a feature fusion module to combine the heterogeneous features and generate high-quality depth outputs. We also designed a hybrid loss function that measures prediction errors in terms of depth and scale-invariant gradients, which helps preserve rich details. We conducted experiments on public RGB-D datasets, and the evaluation results show that the proposed scheme considerably enhances the accuracy of depth prediction, achieving 0.051 for log10 and 0.992 for δ < 1.25³ on the NYUv2 dataset.
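For context, the reported log10 and δ numbers follow the standard monocular-depth evaluation protocol. A minimal sketch of how these metrics are conventionally computed (an assumed convention, not the authors' evaluation code):

```python
import numpy as np

def depth_metrics(pred, gt):
    """log10 error and threshold accuracies delta < 1.25^k for k = 1, 2, 3."""
    ratio = np.maximum(pred / gt, gt / pred)
    log10 = np.mean(np.abs(np.log10(pred) - np.log10(gt)))
    deltas = {f"d<1.25^{k}": np.mean(ratio < 1.25 ** k) for k in (1, 2, 3)}
    return log10, deltas

# Synthetic near-correct predictions just to exercise the function.
pred = np.random.uniform(0.5, 10.0, 1000)
gt = pred * np.random.uniform(0.9, 1.1, 1000)
print(depth_metrics(pred, gt))
```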

LFFCNN: Multi-focus Image Synthesis in Light Field Camera

  • 김형식; 남가빈; 김영섭
    • Journal of the Semiconductor & Display Technology, Vol. 22, No. 3, pp. 149-154, 2023
  • This paper presents a novel approach to multi-focus image fusion using light field cameras. The proposed neural network, LFFCNN (Light Field Focus Convolutional Neural Network), is composed of three main modules: feature extraction, feature fusion, and feature reconstruction. Specifically, the feature extraction module incorporates spatial pyramid pooling (SPP) to effectively handle images of various scales. Experimental results demonstrate that the proposed model not only effectively fuses multi-focus images into a single all-in-focus image but also offers more efficient and robust focus fusion than existing methods.
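A minimal sketch of the spatial pyramid pooling idea the feature extractor incorporates: pool at several grid sizes and concatenate, yielding a fixed-length vector for inputs of varying scale. The grid sizes here are an assumption:

```python
import torch
import torch.nn.functional as F

def spp(x, levels=(1, 2, 4)):
    """x: (batch, C, H, W) -> (batch, C * sum(l*l for l in levels))."""
    pooled = [F.adaptive_max_pool2d(x, l).flatten(1) for l in levels]
    return torch.cat(pooled, dim=1)

feat = torch.randn(2, 32, 48, 64)
print(spp(feat).shape)   # torch.Size([2, 672]) = 32 * (1 + 4 + 16)
```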

Restoring Turbulent Images Based on an Adaptive Feature-fusion Multi-input-Multi-output Dense U-shaped Network

  • Haiqiang Qian; Leihong Zhang; Dawei Zhang; Kaimin Wang
    • Current Optics and Photonics, Vol. 8, No. 3, pp. 215-224, 2024
  • In medium- and long-range optical imaging systems, atmospheric turbulence blurs and distorts images, causing loss of image information. An image-restoration method based on an adaptive feature-fusion multi-input-multi-output (MIMO) dense U-shaped network (Unet) is proposed to restore a single image degraded by atmospheric turbulence. The network is based on the MIMO-Unet framework and incorporates patch-embedding shallow-convolution modules, which extract shallow image features and prepare them for the multi-input dense encoding modules that follow. The combination of these modules improves the model's ability to analyze and extract features effectively. An asymmetric feature-fusion module combines encoded features at varying scales, supporting feature reconstruction in the subsequent multi-output decoding modules that restore the turbulence-degraded image. Experimental results show that the adaptive feature-fusion MIMO dense U-shaped network outperforms traditional restoration methods, the CMFNet model, and the standard MIMO-Unet model in restored image quality, effectively reducing geometric deformation and blurring.
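A loose PyTorch sketch of asymmetric feature fusion across encoder scales, in the spirit of MIMO-Unet-style models: resize features from every scale to a target resolution, concatenate, and mix with a 1×1 convolution. Channel counts and structure are assumptions, not the paper's module:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class AsymmetricFeatureFusion(nn.Module):
    def __init__(self, channels=(32, 64, 128), out_ch=64):
        super().__init__()
        self.mix = nn.Conv2d(sum(channels), out_ch, kernel_size=1)

    def forward(self, feats, size):
        # Resample every encoder scale to `size`, then fuse by 1x1 conv.
        resized = [F.interpolate(f, size=size, mode="bilinear",
                                 align_corners=False) for f in feats]
        return self.mix(torch.cat(resized, dim=1))

aff = AsymmetricFeatureFusion()
f1 = torch.randn(1, 32, 64, 64)
f2 = torch.randn(1, 64, 32, 32)
f3 = torch.randn(1, 128, 16, 16)
fused = aff([f1, f2, f3], size=(32, 32))   # features for one decoder scale
```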

Multimodal Biometric Using a Hierarchical Fusion of a Person's Face, Voice, and Online Signature

  • Elmir, Youssef; Elberrichi, Zakaria; Adjoudj, Reda
    • Journal of Information Processing Systems, Vol. 10, No. 4, pp. 555-567, 2014
  • Improving biometric performance is a challenging task. In this paper, a hierarchical fusion strategy for a multimodal biometric system is presented. The strategy combines several biometric traits through a multi-level fusion hierarchy: a pre-classification fusion with optimal feature selection and a post-classification fusion based on taking the maximum of the matching scores. The proposed solution enhances biometric recognition performance through suitable feature selection and reduction, such as principal component analysis (PCA) and linear discriminant analysis (LDA), since not all components of the feature vectors contribute to the performance improvement.
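A compact sketch of the two fusion levels described above, with PCA-based feature reduction before classification and a post-classification fusion that keeps the maximum matching score per enrolled user. The data and sizes are synthetic assumptions:

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
face, voice, sig = (rng.normal(size=(20, 100)) for _ in range(3))

# Pre-classification fusion: concatenate modality features, then reduce.
fused_features = PCA(n_components=10).fit_transform(
    np.hstack([face, voice, sig]))

# Post-classification fusion: per-modality matching scores in [0, 1];
# keep the maximum score for each enrolled user.
scores = rng.uniform(size=(3, 20))          # (modalities, users)
fused_scores = scores.max(axis=0)
print(fused_features.shape, fused_scores.argmax())
```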

AANet: Adjacency auxiliary network for salient object detection

  • Li, Xialu; Cui, Ziguan; Gan, Zongliang; Tang, Guijin; Liu, Feng
    • KSII Transactions on Internet and Information Systems (TIIS), Vol. 15, No. 10, pp. 3729-3749, 2021
  • At present, deep convolutional network-based salient object detection (SOD) has achieved impressive performance. However, it remains challenging to make full use of the multi-scale information in the extracted features and to choose an appropriate fusion method for the feature maps. In this paper, we propose a new adjacency auxiliary network (AANet) based on multi-scale feature fusion for SOD. First, we design a parallel-connection feature enhancement module (PFEM) for each layer of feature extraction, which improves feature density by connecting dilated convolution branches in parallel, and add a channel attention flow to fully extract the contextual information of the features. Then adjacent-layer features, which have similar degrees of abstraction but different characteristics, are fused through the adjacency auxiliary module (AAM) to suppress feature ambiguity and noise. Besides, to refine the features effectively and obtain more accurate object boundaries, we design an adjacency decoder (AAM_D) based on the AAM, which concatenates the features of adjacent layers, extracts their spatial attention, and combines the result with the output of the AAM. The AAM_D outputs, which carry both semantic information and spatial detail, are used as saliency prediction maps for multi-level joint supervision. Experimental results on six benchmark SOD datasets demonstrate that the proposed method outperforms similar previous methods.
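A simplified PyTorch sketch of fusing two adjacent encoder layers along the lines the AAM suggests: align resolutions, concatenate, and weight the result by a spatial attention map. Channel counts and the exact layer structure are assumptions, not the paper's modules:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class AdjacentFusion(nn.Module):
    def __init__(self, ch):
        super().__init__()
        self.merge = nn.Conv2d(2 * ch, ch, 3, padding=1)
        self.spatial_attn = nn.Conv2d(ch, 1, kernel_size=7, padding=3)

    def forward(self, shallow, deep):
        # Upsample the deeper (coarser) feature to the shallow resolution.
        deep = F.interpolate(deep, size=shallow.shape[2:], mode="bilinear",
                             align_corners=False)
        fused = self.merge(torch.cat([shallow, deep], dim=1))
        return fused * torch.sigmoid(self.spatial_attn(fused))

f = AdjacentFusion(64)
out = f(torch.randn(1, 64, 56, 56), torch.randn(1, 64, 28, 28))
```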

Multi-Path Feature Fusion Module for Semantic Segmentation

  • 박상용; 허용석
    • Journal of Korea Multimedia Society, Vol. 24, No. 1, pp. 1-12, 2021
  • In this paper, we present a new architecture for semantic segmentation. Semantic segmentation aims at pixel-wise classification, which is important for fully understanding images. Previous semantic segmentation networks use multi-layer encoder features to predict the final results. However, those multi-layer features do not contain a variety of receptive fields, which easily leads to inaccurate results at boundaries between different classes and on small objects. To solve this problem, we propose a multi-path feature fusion module that lets each layer's features cover various receptive fields by using a set of dilated convolutions with different dilation rates. Various experiments demonstrate that our method outperforms previous methods in terms of mean intersection over union (mIoU).
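A minimal PyTorch sketch of the multi-path idea: parallel 3×3 convolutions with different dilation rates give each layer's features several receptive fields before fusion. The rates and channel counts are illustrative assumptions:

```python
import torch
import torch.nn as nn

class MultiPathFusion(nn.Module):
    def __init__(self, ch, rates=(1, 2, 4, 8)):
        super().__init__()
        # One parallel path per dilation rate; padding keeps spatial size.
        self.paths = nn.ModuleList(
            nn.Conv2d(ch, ch, 3, padding=r, dilation=r) for r in rates)
        self.fuse = nn.Conv2d(len(rates) * ch, ch, kernel_size=1)

    def forward(self, x):
        return self.fuse(torch.cat([p(x) for p in self.paths], dim=1))

m = MultiPathFusion(64)
y = m(torch.randn(1, 64, 32, 32))   # same shape, mixed receptive fields
```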

A Survey of Fusion Techniques for Multi-spectral Images

  • Achalakul, Tiranee
    • Proceedings of the IEEK Conference: ITC-CSCC 2002, Vol. 2, pp. 1244-1247, 2002
  • This paper discusses various algorithms for the fusion of multi-spectral images. These fusion techniques have a wide variety of applications, ranging from hospital pathology to battlefield management. Different algorithms at each fusion level, namely data, feature, and decision, are compared. The PCT-based algorithm, which has the characteristic of data compression, is described. The algorithm is evaluated on a foliated aerial scene and the fusion result is presented.
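A small NumPy sketch of PCT (principal component transform) fusion as commonly formulated: project the spectral bands onto their principal components and keep the first component as the fused image. The data is synthetic, and the paper's exact pipeline may differ:

```python
import numpy as np

bands = np.random.rand(4, 128, 128)                 # 4 spectral bands
pixels = bands.reshape(4, -1)                       # (bands, pixels)
pixels = pixels - pixels.mean(axis=1, keepdims=True)

cov = pixels @ pixels.T / pixels.shape[1]           # band covariance matrix
eigvals, eigvecs = np.linalg.eigh(cov)              # ascending eigenvalues
pc1 = eigvecs[:, -1] @ pixels                       # first principal component
fused = pc1.reshape(128, 128)                       # fused single-band image
```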

Gait Recognition Algorithm Based on Feature Fusion of GEI Dynamic Region and Gabor Wavelets

  • Huang, Jun; Wang, Xiuhui; Wang, Jun
    • Journal of Information Processing Systems, Vol. 14, No. 4, pp. 892-903, 2018
  • The paper proposes a novel gait recognition algorithm based on the feature fusion of the gait energy image (GEI) dynamic region and Gabor wavelets, which consists of four steps. First, gait contour images are extracted through object detection, binarization, and morphological processing. Second, GEI features at different angles and Gabor features with multiple orientations are extracted from the dynamic part of the GEI. Then an averaging method fuses the GEI dynamic-region features with the Gabor wavelet features at the feature layer, and the feature-space dimension is reduced by an improved Kernel Principal Component Analysis (KPCA). Finally, the fused feature vectors are fed into a multi-class support vector machine (SVM) to classify and recognize gaits. The primary contributions of the paper are: a novel gait recognition algorithm based on the feature fusion of GEI and Gabor features; an improved KPCA method to reduce the feature-matrix dimension; and an SVM to identify the gait sequences. The experimental results show that the proposed algorithm achieves over 90% correct classification, indicating that it distinguishes different human gaits better and recognizes them more accurately than existing algorithms.
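A schematic scikit-learn sketch of this pipeline using standard components (not the authors' improved KPCA): average-fuse the GEI-region and Gabor feature vectors, reduce with kernel PCA, and classify gaits with a multi-class SVM. The random features stand in for real gait data:

```python
import numpy as np
from sklearn.decomposition import KernelPCA
from sklearn.svm import SVC

rng = np.random.default_rng(0)
gei_feats = rng.normal(size=(60, 200))     # GEI dynamic-region features
gabor_feats = rng.normal(size=(60, 200))   # Gabor wavelet features
labels = np.repeat(np.arange(6), 10)       # 6 subjects, 10 sequences each

fused = (gei_feats + gabor_feats) / 2.0    # feature-layer fusion by averaging
reduced = KernelPCA(n_components=20, kernel="rbf").fit_transform(fused)
clf = SVC(kernel="rbf").fit(reduced, labels)   # one-vs-one multi-class SVM
print(clf.score(reduced, labels))
```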