• Title/Abstract/Keyword: Semantic Feature Fusion


MSFM: Multi-view Semantic Feature Fusion Model for Chinese Named Entity Recognition

  • Liu, Jingxin; Cheng, Jieren; Peng, Xin; Zhao, Zeli; Tang, Xiangyan; Sheng, Victor S.
    • KSII Transactions on Internet and Information Systems (TIIS), Vol. 16, No. 6, pp. 1833-1848, 2022
  • Named entity recognition (NER) is an important basic task in the field of Natural Language Processing (NLP). Recently, deep learning approaches that extract word-segmentation or character features have proved effective for Chinese Named Entity Recognition (CNER). However, because these approaches extract only a subset of the available features, they do not mine textual information from multiple perspectives and dimensions, so the model cannot fully capture semantic features. To tackle this problem, we propose a novel Multi-view Semantic Feature Fusion Model (MSFM). The proposed model consists of two core components: a Multi-view Semantic Feature Fusion Embedding Module (MFEM) and a Multi-head Self-Attention Mechanism Module (MSAM). Specifically, the MFEM extracts character features, word-boundary features, radical features, and pinyin features of Chinese characters. The acquired shape, sound, and meaning features are fused to enrich the semantic information of Chinese characters at different granularities. Moreover, the MSAM captures the dependencies between characters in multiple subspaces to better model the semantic features of the context. Extensive experimental results on four benchmark datasets show that our method improves the overall performance of the CNER model.
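
As an illustration of the two components this abstract names, the following is a minimal PyTorch sketch (not the authors' code) that fuses several per-character embedding views by concatenation and then applies multi-head self-attention; all vocabulary sizes and dimensions are assumptions for the example.

```python
# Minimal sketch: multi-view character-embedding fusion + multi-head
# self-attention. Illustrative only; sizes and names are assumed.
import torch
import torch.nn as nn

class MultiViewEmbedding(nn.Module):
    """Concatenates character, word-boundary, radical, and pinyin embeddings."""
    def __init__(self, vocab_sizes, dims):
        super().__init__()
        self.views = nn.ModuleList(
            nn.Embedding(v, d) for v, d in zip(vocab_sizes, dims)
        )

    def forward(self, ids_per_view):  # list of (batch, seq) id tensors, one per view
        return torch.cat(
            [emb(ids) for emb, ids in zip(self.views, ids_per_view)], dim=-1
        )

class MSFMEncoderSketch(nn.Module):
    def __init__(self, vocab_sizes, dims, num_heads=8):
        super().__init__()
        self.embed = MultiViewEmbedding(vocab_sizes, dims)
        self.attn = nn.MultiheadAttention(sum(dims), num_heads, batch_first=True)

    def forward(self, ids_per_view):
        x = self.embed(ids_per_view)          # (batch, seq, sum(dims))
        out, _ = self.attn(x, x, x)           # character-to-character dependencies
        return out

# usage: four views (character, word boundary, radical, pinyin)
sizes, dims = [5000, 4, 300, 60], [128, 16, 64, 48]
enc = MSFMEncoderSketch(sizes, dims)
ids = [torch.randint(0, v, (2, 10)) for v in sizes]
print(enc(ids).shape)                          # torch.Size([2, 10, 256])
```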

Multi-Path Feature Fusion Module for Semantic Segmentation

  • 박상용; 허용석
    • 한국멀티미디어학회논문지, Vol. 24, No. 1, pp. 1-12, 2021
  • In this paper, we present a new architecture for semantic segmentation. Semantic segmentation aims at pixel-wise classification, which is essential for fully understanding images. Previous semantic segmentation networks use multi-layer features from the encoder to predict the final results. However, these multi-layer features do not cover a variety of receptive fields, which easily leads to inaccurate results at boundaries between different classes and on small objects. To solve this problem, we propose a multi-path feature fusion module that allows the features of each layer to cover various receptive fields by using a set of dilated convolutions with different dilation rates. Various experiments demonstrate that our method outperforms previous methods in terms of mean intersection over union (mIoU).
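
The core idea, parallel dilated convolutions with different dilation rates fused into a single feature map, can be sketched as follows; this is a hedged illustration with assumed channel counts and rates, not the paper's exact module.

```python
# Minimal sketch: parallel dilated-convolution paths fused by concatenation
# and a 1x1 merge. Rates and channel counts are assumed, not from the paper.
import torch
import torch.nn as nn

class MultiPathFeatureFusionSketch(nn.Module):
    def __init__(self, in_ch, out_ch, rates=(1, 2, 4, 8)):
        super().__init__()
        self.paths = nn.ModuleList(
            nn.Sequential(
                # padding == dilation keeps the spatial size for a 3x3 kernel
                nn.Conv2d(in_ch, out_ch, 3, padding=r, dilation=r, bias=False),
                nn.BatchNorm2d(out_ch),
                nn.ReLU(inplace=True),
            )
            for r in rates
        )
        # 1x1 conv merges the concatenated multi-receptive-field paths
        self.merge = nn.Conv2d(out_ch * len(rates), out_ch, kernel_size=1)

    def forward(self, x):
        return self.merge(torch.cat([p(x) for p in self.paths], dim=1))

x = torch.randn(1, 256, 32, 32)
print(MultiPathFeatureFusionSketch(256, 64)(x).shape)  # torch.Size([1, 64, 32, 32])
```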

Semantic Segmentation of Agricultural Crop Multispectral Images Using Feature Fusion

  • 문준렬; 박성준; 백중환
    • 한국항행학회논문지, Vol. 28, No. 2, pp. 238-245, 2024
  • In this paper, we propose a framework for improving semantic segmentation performance on multispectral crop images using a feature fusion technique. Most semantic segmentation models studied in the smart-farm field are trained on RGB (red-green-blue) images and focus on increasing model depth and complexity to raise performance. Unlike existing approaches, this study optimizes the model through multispectral data and an attention mechanism. The proposed method fuses the features of multiple channels collected by a UAV (unmanned aerial vehicle) together with a single RGB image, improving feature extraction and increasing the training effect by recognizing complementary features. The model structure is improved to concentrate on feature fusion, and channels and channel combinations favorable for crop images are tested, comparing performance against other models. Experimental results show that the model fusing RGB and NDVI (normalized difference vegetation index) outperforms combinations with other channels.
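
A minimal sketch of the winning channel combination described above: compute NDVI from the red and near-infrared bands and stack it with RGB as a four-channel network input. The band layout and tensor shapes are assumptions for illustration.

```python
# Minimal sketch: NDVI computation and RGB+NDVI channel fusion.
# Assumes rgb channel 0 is red and nir is a separate single-band tensor.
import torch

def ndvi(red: torch.Tensor, nir: torch.Tensor, eps: float = 1e-6) -> torch.Tensor:
    """NDVI = (NIR - Red) / (NIR + Red), valued in [-1, 1]."""
    return (nir - red) / (nir + red + eps)

def fuse_rgb_ndvi(rgb: torch.Tensor, nir: torch.Tensor) -> torch.Tensor:
    """rgb: (B, 3, H, W), nir: (B, 1, H, W) -> fused (B, 4, H, W) input."""
    red = rgb[:, 0:1]                       # assumed band order
    return torch.cat([rgb, ndvi(red, nir)], dim=1)

rgb = torch.rand(2, 3, 128, 128)
nir = torch.rand(2, 1, 128, 128)
print(fuse_rgb_ndvi(rgb, nir).shape)        # torch.Size([2, 4, 128, 128])
```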

Skin Lesion Segmentation with Codec Structure Based Upper and Lower Layer Feature Fusion Mechanism

  • Yang, Cheng; Lu, GuanMing
    • KSII Transactions on Internet and Information Systems (TIIS), Vol. 16, No. 1, pp. 60-79, 2022
  • U-Net architecture-based segmentation models have attained remarkable performance in numerous medical image segmentation tasks such as skin lesion segmentation. Nevertheless, as the network deepens, the resolution gradually decreases and the loss of spatial information grows. The fusion of adjacent layers is not enough to make up for the lost spatial information, resulting in segmentation-boundary errors that lower segmentation accuracy. To tackle this issue, we propose a new deep learning-based segmentation model. In the decoding stage, the feature channels of each decoding unit are concatenated with all the feature channels of the corresponding upper coding unit, which integrates spatial and semantic information to ensure segmentation quality; combining the atrous spatial pyramid pooling (ASPP) module and a channel attention module (CAM) further promotes the robustness and generalization of our model. Extensive experiments on the ISIC2016 and ISIC2017 benchmark datasets show that our model performs well and outperforms the compared segmentation models for skin lesion segmentation.
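
Of the pieces named above, the channel attention module is the easiest to sketch; below is a squeeze-and-excitation style CAM as a hedged illustration (the paper's exact design may differ).

```python
# Minimal sketch: squeeze-and-excitation style channel attention.
# Illustrative stand-in for the CAM this abstract mentions.
import torch
import torch.nn as nn

class ChannelAttentionSketch(nn.Module):
    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)          # global spatial squeeze
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),                            # per-channel weights in (0, 1)
        )

    def forward(self, x):
        b, c, _, _ = x.shape
        w = self.fc(self.pool(x).view(b, c)).view(b, c, 1, 1)
        return x * w                                 # reweight feature channels

x = torch.randn(2, 64, 56, 56)
print(ChannelAttentionSketch(64)(x).shape)           # torch.Size([2, 64, 56, 56])
```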

Sensor Fusion-Based Semantic Map Building

  • 박중태; 송재복
    • 제어로봇시스템학회논문지, Vol. 17, No. 3, pp. 277-282, 2011
  • This paper describes sensor fusion-based semantic map building, which can improve the capabilities of a mobile robot in various domains including localization, path planning, and mapping. To build a semantic map, various environmental information, such as doors and cliff areas, should be extracted autonomously. Therefore, we propose a method to detect doors, cliff areas, and robust visual features using a laser scanner and a vision sensor. Doors are detected using GHT (Generalized Hough Transform)-based recognition of door handles together with the geometrical features of a door. To detect cliff areas and robust visual features, a tilting laser scanner and SIFT features are used, respectively. The proposed method was verified by various experiments and showed that the robot could build a semantic map autonomously in various indoor environments.
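
As a small illustration of the visual-feature side of this pipeline, the sketch below extracts SIFT keypoints and descriptors with OpenCV; a synthetic image stands in for the robot's camera frame, and the paper's door and cliff detection steps are not reproduced.

```python
# Minimal sketch: SIFT keypoint/descriptor extraction with OpenCV
# (opencv-python >= 4.4, where SIFT is in the main package).
import cv2
import numpy as np

img = (np.random.rand(480, 640) * 255).astype(np.uint8)  # stand-in camera frame
sift = cv2.SIFT_create()
keypoints, descriptors = sift.detectAndCompute(img, None)
# descriptors is None when no keypoints are found; each descriptor is 128-D
print(len(keypoints), None if descriptors is None else descriptors.shape)
```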

DA-Res2Net: a novel Densely connected residual Attention network for image semantic segmentation

  • Zhao, Xiaopin; Liu, Weibin; Xing, Weiwei; Wei, Xiang
    • KSII Transactions on Internet and Information Systems (TIIS), Vol. 14, No. 11, pp. 4426-4442, 2020
  • Since scene segmentation is becoming a hot topic in autonomous driving and medical image analysis, researchers are actively trying new methods to improve segmentation accuracy. At present, the main issues in image semantic segmentation are intra-class inconsistency and inter-class indistinction. From our analysis, the lack of global information and of macroscopic discrimination of objects are the two main causes. In this paper, we propose a Densely connected residual Attention network (DA-Res2Net), consisting of a dense residual network and a channel attention guidance module, to deal with these problems and improve the accuracy of image segmentation. Specifically, to equip the extracted features with stronger multi-scale characteristics, a densely connected residual network is proposed as the feature extractor. Furthermore, to improve the representativeness of each channel feature, we design a Channel-Attention-Guide module that makes the model focus on high-level semantic features and low-level location features simultaneously. Experimental results show that the method performs strongly on various datasets. Compared to other state-of-the-art methods, the proposed method reaches a mean IoU of 83.2% on PASCAL VOC 2012 and 79.7% on Cityscapes.
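
The Channel-Attention-Guide idea, letting global statistics of a high-level semantic feature reweight the channels of a low-level location feature, might look like the sketch below; the design details (pooling, a single linear layer, concatenation) are assumptions, not the paper's specification.

```python
# Minimal sketch: high-level semantics guide low-level channels, then the two
# streams are fused. Assumed design, not the paper's exact module.
import torch
import torch.nn as nn
import torch.nn.functional as F

class ChannelAttentionGuideSketch(nn.Module):
    def __init__(self, high_ch: int, low_ch: int):
        super().__init__()
        self.fc = nn.Sequential(nn.Linear(high_ch, low_ch), nn.Sigmoid())

    def forward(self, high, low):
        # high: (B, Ch, h, w) semantic feature; low: (B, Cl, H, W) location feature
        b, ch, _, _ = high.shape
        w = self.fc(F.adaptive_avg_pool2d(high, 1).view(b, ch))
        guided = low * w.view(b, -1, 1, 1)          # semantics reweight channels
        high_up = F.interpolate(high, size=low.shape[-2:], mode="bilinear",
                                align_corners=False)
        return torch.cat([guided, high_up], dim=1)  # fuse both streams

high = torch.randn(2, 512, 16, 16)
low = torch.randn(2, 64, 64, 64)
print(ChannelAttentionGuideSketch(512, 64)(high, low).shape)
# torch.Size([2, 576, 64, 64])
```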

Modified YOLOv4S Based on Deep Learning with Feature Fusion and Spatial Attention

  • 황범연; 이상훈; 이승현
    • 한국융합학회논문지, Vol. 12, No. 12, pp. 31-37, 2021
  • This paper proposes an improved YOLOv4S for detecting small and occluded objects by applying feature fusion and spatial attention. The original YOLOv4S is a lightweight network whose feature extraction capability falls short of deeper networks. The proposed method first combines feature maps of different sizes through feature fusion to enrich semantic and low-level information, and expands the receptive field with dilated convolution to improve detection accuracy for small and occluded objects. Second, spatial attention refines the spatial information so that occluded objects are better separated from one another, improving their detection accuracy. The PASCAL VOC and COCO datasets were used for quantitative evaluation. Experiments show that, compared with the original YOLOv4S, the proposed method improves mAP by 2.7% on PASCAL VOC and by 1.8% on COCO.
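
The spatial attention component can be illustrated with a CBAM-style block, sketched below under the assumption that attention weights are derived from channel-wise average and max pooling; the paper's exact design may differ.

```python
# Minimal sketch: CBAM-style spatial attention over a detection feature map.
import torch
import torch.nn as nn

class SpatialAttentionSketch(nn.Module):
    def __init__(self, kernel_size: int = 7):
        super().__init__()
        self.conv = nn.Conv2d(2, 1, kernel_size, padding=kernel_size // 2)
        self.sigmoid = nn.Sigmoid()

    def forward(self, x):
        # pool across channels to get two (B, 1, H, W) spatial summaries
        avg = x.mean(dim=1, keepdim=True)
        mx, _ = x.max(dim=1, keepdim=True)
        attn = self.sigmoid(self.conv(torch.cat([avg, mx], dim=1)))
        return x * attn                      # emphasize informative locations

x = torch.randn(2, 128, 52, 52)
print(SpatialAttentionSketch()(x).shape)     # torch.Size([2, 128, 52, 52])
```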

멀티-뷰 영상들을 활용하는 3차원 의미적 분할을 위한 효과적인 멀티-모달 특징 융합 (Effective Multi-Modal Feature Fusion for 3D Semantic Segmentation with Multi-View Images)

  • 배혜림; 김인철
    • 정보처리학회논문지:소프트웨어 및 데이터공학, Vol. 12, No. 12, pp. 505-518, 2023
  • 3D point cloud semantic segmentation is a computer vision task that divides a point cloud into different objects or regions by predicting, for each point, the class label of the object or region it belongs to. Existing 3D semantic segmentation models do not perform feature fusion that fully accounts for the distinct characteristics of the 2D visual features extracted from RGB images and the 3D geometric features extracted from the point cloud. This paper therefore proposes MMCA-Net, a new 3D semantic segmentation model that uses 2D-3D multi-modal features. The proposed model effectively fuses heterogeneous 2D visual features and 3D geometric features by applying a mid-level fusion strategy and a fusion operation based on multi-modal cross-attention. In addition, by adopting PTv2 as the 3D geometric encoder, it extracts context-rich 3D geometric features from input point clouds whose points are irregularly distributed. To analyze the performance of the proposed model, various quantitative and qualitative experiments were conducted on the ScanNetv2 benchmark dataset. In terms of mIoU, the proposed model improved performance by 9.2% over PTv2, which uses only 3D geometric features, and by 12.12% over MVPNet, which uses 2D-3D multi-modal features, demonstrating the effectiveness and usefulness of the proposed model.
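
A hedged sketch of the multi-modal cross-attention fusion described above: 3D point features act as queries over 2D visual features, with a residual connection preserving the geometric stream. Shapes, dimensions, and the residual/norm details are assumptions for the example, not MMCA-Net's actual layers.

```python
# Minimal sketch: cross-attention fusion of 3D point features (queries) with
# 2D visual features (keys/values). Illustrative shapes and design.
import torch
import torch.nn as nn

class CrossModalFusionSketch(nn.Module):
    def __init__(self, d_model: int = 256, num_heads: int = 8):
        super().__init__()
        self.attn = nn.MultiheadAttention(d_model, num_heads, batch_first=True)
        self.norm = nn.LayerNorm(d_model)

    def forward(self, feat3d, feat2d):
        # feat3d: (B, N_points, D) geometric; feat2d: (B, N_pixels, D) visual
        fused, _ = self.attn(query=feat3d, key=feat2d, value=feat2d)
        return self.norm(feat3d + fused)     # residual keeps the geometric stream

pts = torch.randn(2, 1024, 256)
pix = torch.randn(2, 4096, 256)
print(CrossModalFusionSketch()(pts, pix).shape)  # torch.Size([2, 1024, 256])
```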

Spatio-temporal Semantic Features for Human Action Recognition

  • Liu, Jia; Wang, Xiaonian; Li, Tianyu; Yang, Jie
    • KSII Transactions on Internet and Information Systems (TIIS), Vol. 6, No. 10, pp. 2632-2649, 2012
  • Most approaches to human action recognition are limited by the use of simple action datasets captured under controlled environments, or focus on excessively localized features without sufficiently exploiting spatio-temporal information. This paper proposes a framework for recognizing realistic human actions. Specifically, a new action representation is proposed based on computing a rich set of descriptors from keypoint trajectories. To obtain efficient and compact representations, we develop a feature fusion method that combines spatial-temporal local motion descriptors according to the camera movement, which is detected from the distribution of spatio-temporal interest points in the clips. A new topic model called the Markov Semantic Model is proposed for semantic feature selection; it relies on the different kinds of dependencies between words produced by "syntactic" and "semantic" constraints. The informative features are selected collaboratively based on the different types of dependencies between words produced by short-range and long-range constraints. Building on nonlinear SVMs, we validate the proposed hierarchical framework on several realistic action datasets.
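
As one concrete instance of a trajectory descriptor of the kind this abstract builds on, the sketch below computes a scale-normalized displacement vector for a single keypoint track; this is a generic construction, not the paper's full descriptor set.

```python
# Minimal sketch: scale-normalized displacement descriptor for one trajectory.
import numpy as np

def trajectory_descriptor(track: np.ndarray, eps: float = 1e-8) -> np.ndarray:
    """track: (T, 2) array of (x, y) keypoint positions over T frames."""
    disp = np.diff(track, axis=0)                  # frame-to-frame displacements
    total = np.sum(np.linalg.norm(disp, axis=1)) + eps
    return (disp / total).ravel()                  # trajectory-shape vector

track = np.cumsum(np.random.randn(15, 2), axis=0)  # synthetic 15-frame track
print(trajectory_descriptor(track).shape)          # (28,)
```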

AANet: Adjacency auxiliary network for salient object detection

  • Li, Xialu; Cui, Ziguan; Gan, Zongliang; Tang, Guijin; Liu, Feng
    • KSII Transactions on Internet and Information Systems (TIIS), Vol. 15, No. 10, pp. 3729-3749, 2021
  • At present, deep convolutional network-based salient object detection (SOD) has achieved impressive performance. However, it remains challenging to make full use of the multi-scale information in the extracted features and to choose an appropriate fusion method for processing the feature maps. In this paper, we propose a new adjacency auxiliary network (AANet) based on multi-scale feature fusion for SOD. First, we design a parallel connection feature enhancement module (PFEM) for each feature-extraction layer, which increases feature density by connecting different dilated convolution branches in parallel and adds a channel attention flow to fully extract the contextual information of the features. Then, adjacent-layer features with similar degrees of abstraction but different characteristic properties are fused through the adjacency auxiliary module (AAM) to eliminate ambiguity and noise in the features. Besides, to refine the features effectively and obtain more accurate object boundaries, we design an adjacency decoder (AAM_D) based on the AAM, which concatenates the features of adjacent layers, extracts their spatial attention, and then combines them with the output of the AAM. The outputs of AAM_D, which carry both semantic information and spatial detail, are used as saliency prediction maps for multi-level joint supervision. Experimental results on six benchmark SOD datasets demonstrate that the proposed method outperforms similar previous methods.
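
The adjacent-layer fusion at the heart of the AAM can be sketched as upsampling the deeper feature, concatenating it with the shallower one, and merging with a convolution; the block below is an assumed simplification, not the paper's exact module.

```python
# Minimal sketch: fusing two adjacent encoder layers of an SOD backbone.
import torch
import torch.nn as nn
import torch.nn.functional as F

class AdjacentFusionSketch(nn.Module):
    def __init__(self, shallow_ch: int, deep_ch: int, out_ch: int):
        super().__init__()
        self.merge = nn.Sequential(
            nn.Conv2d(shallow_ch + deep_ch, out_ch, 3, padding=1, bias=False),
            nn.BatchNorm2d(out_ch),
            nn.ReLU(inplace=True),
        )

    def forward(self, shallow, deep):
        # bring the deeper (coarser) feature to the shallow feature's resolution
        deep_up = F.interpolate(deep, size=shallow.shape[-2:], mode="bilinear",
                                align_corners=False)
        return self.merge(torch.cat([shallow, deep_up], dim=1))

shallow = torch.randn(1, 128, 64, 64)
deep = torch.randn(1, 256, 32, 32)
print(AdjacentFusionSketch(128, 256, 128)(shallow, deep).shape)
# torch.Size([1, 128, 64, 64])
```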