• Title/Summary/Keyword: Intelligence Fusion

Search results: 128 items (processing time: 0.03 s)

대학원 지능융합 클러스터 운영방안 (Management Plan for Intelligence Fusion Cluster of Graduate School)

  • 권오영;김한종;박광범;김태균;박승철;최강선
    • 실천공학교육논문지 / Vol. 6, No. 1 / pp. 31-36 / 2014
  • The Intelligence Fusion Cluster is an interdisciplinary convergence program composed of the Schools of Electrical, Electronics and Communication Engineering, Computer Engineering, and Architectural Engineering and the Department of Design Engineering. To derive an effective way of running the cluster, a survey of the cluster's students and faculty was conducted, and an operating plan was drawn up from the responses. The plan is presented in two respects: improving the operation of the curriculum and promoting research.

레이더와 전자정보 장비의 정보융합 특성 분석 (An Analysis of Information Fusion Characteristics between Radar and Electronic Intelligence System)

  • 임중수
    • 한국산학기술학회논문지 / Vol. 7, No. 5 / pp. 847-851 / 2006
  • This paper presents a technique for fusing the target signals acquired by a radar with the electromagnetic signal information acquired by an electronic intelligence (ELINT) system. Because fusing radar and ELINT data allows targets to be confirmed accurately, the radar's detection error rate is reduced and detailed target information becomes available; when the output of the information fusion module is shown on an integrated display, the performance of the equipment improves, and the fused information can be used for target identification and selection.
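The fusion step described above can be pictured as gating an ELINT bearing measurement against a radar track and then matching the measured emitter parameters against a known-emitter library. The following Python sketch illustrates only this general idea, not the paper's algorithm; the track, measurement, and library values are hypothetical placeholders.

```python
# Illustrative sketch (not the paper's algorithm): associating a radar track
# with an ELINT emitter measurement to identify the target platform.
# All numbers below are hypothetical placeholders.

RADAR_TRACK = {"track_id": 7, "range_km": 42.0, "bearing_deg": 113.5}
ELINT_MEASUREMENT = {"bearing_deg": 114.1, "freq_ghz": 9.37, "pri_us": 1250.0}
EMITTER_LIBRARY = [
    {"name": "Emitter-A", "freq_ghz": 9.4, "pri_us": 1250.0},
    {"name": "Emitter-B", "freq_ghz": 5.6, "pri_us": 3000.0},
]

def fuse(track, elint, library, bearing_gate_deg=2.0):
    """Gate the ELINT bearing against the radar track, then match the
    measured emitter parameters against a known-emitter library."""
    if abs(track["bearing_deg"] - elint["bearing_deg"]) > bearing_gate_deg:
        return None  # bearings disagree: do not associate the two sources
    # Nearest-neighbour match on (frequency, PRI), PRI scaled to comparable units
    best = min(library, key=lambda e: abs(e["freq_ghz"] - elint["freq_ghz"])
                                      + abs(e["pri_us"] - elint["pri_us"]) / 1000.0)
    return {**track, "identity": best["name"]}

print(fuse(RADAR_TRACK, ELINT_MEASUREMENT, EMITTER_LIBRARY))
```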

뇌종양 분할을 위한 3D 이중 융합 주의 네트워크 (3D Dual-Fusion Attention Network for Brain Tumor Segmentation)

  • ;;;김수형
    • 한국정보처리학회:학술대회논문집 / 한국정보처리학회 2023년도 춘계학술발표대회 / pp. 496-498 / 2023
  • Brain tumor segmentation is challenging because tumors vary in location, morphology, and class balance. Attention mechanisms have recently been widely used to tackle medical segmentation problems efficiently by focusing on essential regions, while fusion approaches enhance performance by merging the mutual benefits of multiple models. In this study, we propose a 3D dual-fusion attention network that combines the advantages of fusion approaches and attention mechanisms through residual self-attention and local blocks. Compared with existing fusion approaches and related work, the proposed method shows promising results on the BraTS 2018 dataset.
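As a rough illustration of the stated design (residual self-attention combined with a local convolutional branch on 3D features), the following PyTorch sketch fuses a global attention path and a local path; the layer sizes and fusion details are assumptions, not the architecture from the paper.

```python
# Minimal sketch of the idea only: residual self-attention fused with a
# local 3x3x3 convolutional branch on 3D feature maps. Sizes are illustrative.
import torch
import torch.nn as nn

class DualFusionBlock3D(nn.Module):
    def __init__(self, channels: int, heads: int = 4):
        super().__init__()
        self.attn = nn.MultiheadAttention(channels, heads, batch_first=True)
        self.local = nn.Sequential(          # local branch: small-receptive-field convs
            nn.Conv3d(channels, channels, 3, padding=1),
            nn.InstanceNorm3d(channels),
            nn.ReLU(inplace=True),
        )
        self.fuse = nn.Conv3d(2 * channels, channels, 1)  # 1x1x1 fusion conv

    def forward(self, x):                    # x: (B, C, D, H, W)
        b, c, d, h, w = x.shape
        tokens = x.flatten(2).transpose(1, 2)             # (B, D*H*W, C)
        attn_out, _ = self.attn(tokens, tokens, tokens)   # global self-attention
        glob = x + attn_out.transpose(1, 2).reshape(b, c, d, h, w)  # residual
        return self.fuse(torch.cat([glob, self.local(x)], dim=1))

block = DualFusionBlock3D(channels=16)
print(block(torch.randn(1, 16, 8, 8, 8)).shape)  # torch.Size([1, 16, 8, 8, 8])
```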

얼굴 감정 인식을 위한 로컬 및 글로벌 어텐션 퓨전 네트워크 (Local and Global Attention Fusion Network For Facial Emotion Recognition)

  • ;;;김수형
    • 한국정보처리학회:학술대회논문집 / 한국정보처리학회 2023년도 춘계학술발표대회 / pp. 493-495 / 2023
  • Facial emotion recognition has recently attracted much attention, and deep learning methods and attention mechanisms have been incorporated to improve it; fusion approaches have further improved accuracy by combining various types of information. This research proposes a fusion network built on self-attention and local attention mechanisms with a multilayer perceptron, which extracts distinguishing characteristics from facial images using models pre-trained on the RAF-DB dataset. The proposed method outperforms other fusion methods on the RAF-DB dataset with impressive results.
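The following PyTorch sketch shows the general pattern named in the abstract: self-attention over global features, local attention over region features, and an MLP head that fuses the pooled representations. All module shapes are illustrative assumptions rather than the authors' exact network.

```python
# Hedged sketch of a local/global attention fusion classifier for FER.
# Feature dimensions and the 7 emotion classes are illustrative assumptions.
import torch
import torch.nn as nn

class LocalGlobalFusion(nn.Module):
    def __init__(self, dim=256, classes=7):
        super().__init__()
        self.global_attn = nn.MultiheadAttention(dim, 4, batch_first=True)
        self.local_attn = nn.MultiheadAttention(dim, 4, batch_first=True)
        self.mlp = nn.Sequential(                 # fusion head (MLP)
            nn.Linear(2 * dim, dim), nn.ReLU(), nn.Linear(dim, classes)
        )

    def forward(self, global_feats, local_feats):  # (B, N, dim), (B, M, dim)
        g, _ = self.global_attn(global_feats, global_feats, global_feats)
        l, _ = self.local_attn(local_feats, local_feats, local_feats)
        fused = torch.cat([g.mean(1), l.mean(1)], dim=-1)  # pool, then concat
        return self.mlp(fused)

model = LocalGlobalFusion()
logits = model(torch.randn(2, 49, 256), torch.randn(2, 16, 256))
print(logits.shape)  # torch.Size([2, 7])
```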

Dialog-based multi-item recommendation using automatic evaluation

  • Euisok Chung;Hyun Woo Kim;Byunghyun Yoo;Ran Han;Jeongmin Yang;Hwa Jeon Song
    • ETRI Journal / Vol. 46, No. 2 / pp. 277-289 / 2024
  • In this paper, we describe a neural network-based application that recommends multiple items from dialog context input while simultaneously outputting a response sentence. We instantiate multi-item recommendation as a set of clothing recommendations, which requires a multimodal fusion approach that can process both clothing-related text and images. We also examine how a pretrained language model can meet the requirements of the downstream models, and we propose gate-based multimodal fusion and multiprompt learning built on a pretrained language model. In addition, we propose an automatic evaluation technique to address the one-to-many mapping problem of multi-item recommendation. A Korean fashion-domain multimodal dataset is constructed, and various experimental settings are verified with the automatic evaluation method. The results show that the proposed method yields confidence scores for multi-item recommendation results, unlike traditional accuracy-based evaluation.
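Gate-based multimodal fusion is commonly realized with a learned sigmoid gate that mixes text and image embeddings dimension by dimension. The sketch below shows this generic pattern only; the dimensions and module names are illustrative assumptions, not the authors' implementation.

```python
# Generic gate-based fusion of two modality embeddings (sketch, assumed sizes).
import torch
import torch.nn as nn

class GatedFusion(nn.Module):
    def __init__(self, dim=512):
        super().__init__()
        # The gate looks at both embeddings and outputs per-dimension weights in (0, 1).
        self.gate = nn.Sequential(nn.Linear(2 * dim, dim), nn.Sigmoid())

    def forward(self, text_emb, image_emb):        # both (B, dim)
        g = self.gate(torch.cat([text_emb, image_emb], dim=-1))
        return g * text_emb + (1 - g) * image_emb  # convex per-dimension mixing

fusion = GatedFusion()
out = fusion(torch.randn(4, 512), torch.randn(4, 512))
print(out.shape)  # torch.Size([4, 512])
```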

다중 센서 융합 알고리즘을 이용한 사용자의 감정 인식 및 표현 시스템 (Emotion Recognition and Expression System of User using Multi-Modal Sensor Fusion Algorithm)

  • 염홍기;주종태;심귀보
    • 한국지능시스템학회논문지 / Vol. 18, No. 1 / pp. 20-26 / 2008
  • As intelligent robots and computers occupy an ever larger place in everyday life, their interaction with humans is becoming increasingly important, and emotion recognition and expression are essential to this robot (computer)-human interaction. In this paper, emotional features are extracted from speech signals and facial images, and Bayesian learning and principal component analysis are applied to classify them into five emotion patterns (neutral, happiness, sadness, anger, and surprise). To compensate for the weaknesses of each individual modality and raise the recognition rate, emotion recognition experiments were conducted with both decision fusion and feature fusion. In decision fusion, the recognition scores produced by the individual recognition systems are combined through fuzzy membership functions; in feature fusion, salient features are selected with the SFS (sequential forward selection) method and fed into an MLP (multilayer perceptron)-based neural network. The recognized result is then applied to a 2D face model to express the emotion.
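The decision-fusion step named in the abstract (per-modality recognition scores combined through fuzzy membership functions) can be sketched as follows; the triangular membership parameters and score values are made up for the example.

```python
# Illustrative sketch of the decision-fusion step only: speech and face
# classifier scores pass through a triangular fuzzy membership function
# and are averaged per emotion. Parameters are invented for the example.
import numpy as np

EMOTIONS = ["neutral", "happy", "sad", "angry", "surprised"]

def triangular(x, a, b, c):
    """Triangular fuzzy membership function peaking at b."""
    x = np.asarray(x, dtype=float)
    return np.maximum(np.minimum((x - a) / (b - a), (c - x) / (c - b)), 0.0)

def decision_fusion(speech_scores, face_scores):
    # Map raw classifier scores to fuzzy memberships, then take the
    # per-emotion average as the fused confidence.
    mu_speech = triangular(speech_scores, 0.0, 1.0, 2.0)
    mu_face = triangular(face_scores, 0.0, 1.0, 2.0)
    fused = (mu_speech + mu_face) / 2.0
    return EMOTIONS[int(np.argmax(fused))], fused

label, conf = decision_fusion([0.2, 0.9, 0.1, 0.3, 0.4],
                              [0.3, 0.7, 0.2, 0.2, 0.5])
print(label, conf)  # "happy" with the highest fused membership
```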

Transfer Learning-Based Feature Fusion Model for Classification of Maneuver Weapon Systems

  • Jinyong Hwang;You-Rak Choi;Tae-Jin Park;Ji-Hoon Bae
    • Journal of Information Processing Systems / Vol. 19, No. 5 / pp. 673-687 / 2023
  • Convolutional neural network-based deep learning is the approach most commonly used for image identification, but it requires large-scale training data. Its application in fields where data acquisition is limited, such as the military, can therefore be challenging. In particular, identifying ground weapon systems is a very important mission that demands high identification accuracy, and various studies have sought high performance from small-scale data. Among them, the ensemble method, which achieves excellent performance by averaging the predictions of pre-trained models, is the most representative; however, finding the optimal combination of ensemble models takes considerable time and effort, the prediction results obtained with an ensemble are subject to a performance limit, and it is difficult to obtain an ensemble effect from models with imbalanced classification accuracies. In this paper, we propose a transfer learning-based feature fusion technique for heterogeneous models that extracts and fuses the features of pre-trained heterogeneous models and then fine-tunes the fully connected layer's hyperparameters to improve classification accuracy. The experimental results indicate that the limitations of existing ensemble methods can be overcome by improving classification accuracy through transfer learning-based feature fusion between heterogeneous models.
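The recipe described here, extracting features from frozen heterogeneous pre-trained backbones, concatenating them, and fine-tuning only a fully connected head, can be sketched as below. The backbone choices, feature sizes, and 10-class head are stand-ins, not the paper's exact configuration.

```python
# Sketch of transfer learning-based feature fusion across two heterogeneous
# pre-trained backbones (assumed choices: ResNet-18 and MobileNetV3-Small).
import torch
import torch.nn as nn
from torchvision import models

resnet = models.resnet18(weights="DEFAULT")
resnet.fc = nn.Identity()                      # expose 512-dim features
mobilenet = models.mobilenet_v3_small(weights="DEFAULT")
mobilenet.classifier = nn.Identity()           # expose 576-dim features

for p in list(resnet.parameters()) + list(mobilenet.parameters()):
    p.requires_grad = False                    # freeze both backbones

head = nn.Sequential(                          # only this head gets fine-tuned
    nn.Linear(512 + 576, 256), nn.ReLU(), nn.Dropout(0.5), nn.Linear(256, 10)
)

x = torch.randn(2, 3, 224, 224)                # a dummy image batch
with torch.no_grad():
    fused = torch.cat([resnet(x), mobilenet(x)], dim=1)  # feature fusion
print(head(fused).shape)  # torch.Size([2, 10])
```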

EMOS: Enhanced moving object detection and classification via sensor fusion and noise filtering

  • Dongjin Lee;Seung-Jun Han;Kyoung-Wook Min;Jungdan Choi;Cheong Hee Park
    • ETRI Journal / Vol. 45, No. 5 / pp. 847-861 / 2023
  • Dynamic object detection is essential for safe and reliable autonomous driving. Recently, light detection and ranging (LiDAR)-based object detection has been introduced and has shown excellent performance on various benchmarks. Although LiDAR sensors estimate distance very accurately, they lack texture or color information and have a lower resolution than conventional cameras. In addition, owing to the domain gap, performance degrades when a LiDAR-based object detection model is applied to a different driving environment or to sensors from a different LiDAR manufacturer. To address these issues, a sensor-fusion-based object detection and classification method is proposed. The proposed method operates in real time, making it suitable for integration into autonomous vehicles, and performs well both on our custom dataset and on publicly available datasets, demonstrating its effectiveness in real-world road environments. In addition, we will make available a novel three-dimensional moving object detection dataset called ETRI 3D MOD.
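The abstract does not detail the fusion pipeline, so the following sketch shows only a generic camera-LiDAR fusion step for context: projecting LiDAR points into the image plane so that each point can pick up pixel-level texture or class information. The intrinsic matrix is a made-up example.

```python
# Generic camera-LiDAR fusion step (not the EMOS pipeline): project LiDAR
# points, already expressed in the camera frame, onto the image plane.
import numpy as np

K = np.array([[700.0,   0.0, 320.0],   # hypothetical camera intrinsics
              [  0.0, 700.0, 240.0],
              [  0.0,   0.0,   1.0]])

def project_points(points_cam):
    """Project Nx3 points in the camera frame to (u, v) pixel coordinates."""
    pts = points_cam[points_cam[:, 2] > 0]   # keep points in front of the camera
    uvw = (K @ pts.T).T                      # apply the pinhole model
    return uvw[:, :2] / uvw[:, 2:3]          # perspective division

points = np.array([[1.0, 0.5, 10.0], [-2.0, 0.0, 5.0], [0.0, 0.0, -1.0]])
print(project_points(points))                # third point is culled (behind camera)
```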

Multi-resolution Fusion Network for Human Pose Estimation in Low-resolution Images

  • Kim, Boeun;Choo, YeonSeung;Jeong, Hea In;Kim, Chung-Il;Shin, Saim;Kim, Jungho
    • KSII Transactions on Internet and Information Systems (TIIS) / Vol. 16, No. 7 / pp. 2328-2344 / 2022
  • 2D human pose estimation still struggles with low-resolution images. Most existing top-down approaches scale the target human bounding-box image up to a large size and feed the scaled image into the network; this up-sampling introduces artifacts into low-resolution target images, and the degraded images adversely affect accurate estimation of the joint positions. To address this issue, we propose a multi-resolution input feature fusion network for human pose estimation. Specifically, the bounding-box image of the target human is rescaled to multiple input images of various sizes, and the features extracted from these images are fused within the network. Moreover, we introduce a guiding channel that controls how the multi-resolution input features influence the network according to the resolution of the target image. We conduct experiments on the MS COCO dataset, a representative benchmark for 2D human pose estimation, where our method achieves superior performance compared with the strong HRNet baseline and previous state-of-the-art methods.
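A condensed sketch of the stated idea follows: the person crop is rescaled to several input sizes, per-scale features are fused, and a constant guiding channel encoding the original resolution is appended to each input. The layer choices and the guiding-channel encoding are assumptions, not the paper's network.

```python
# Sketch of multi-resolution input feature fusion with a guiding channel.
import torch
import torch.nn as nn
import torch.nn.functional as F

class MultiResFusion(nn.Module):
    def __init__(self, sizes=(64, 128, 256), feat=32):
        super().__init__()
        self.sizes = sizes
        # 3 RGB channels + 1 guiding channel per rescaled input
        self.encoders = nn.ModuleList(nn.Conv2d(4, feat, 3, padding=1) for _ in sizes)
        self.fuse = nn.Conv2d(feat * len(sizes), feat, 1)   # 1x1 fusion conv

    def forward(self, crop):                        # crop: (B, 3, h, w)
        guide_val = crop.shape[-1] / max(self.sizes)  # encodes source resolution
        feats = []
        for size, enc in zip(self.sizes, self.encoders):
            x = F.interpolate(crop, size=(size, size), mode="bilinear",
                              align_corners=False)
            guide = torch.full_like(x[:, :1], guide_val)  # constant guiding channel
            f = enc(torch.cat([x, guide], dim=1))
            feats.append(F.interpolate(f, size=(64, 64), mode="bilinear",
                                       align_corners=False))
        return self.fuse(torch.cat(feats, dim=1))

net = MultiResFusion()
print(net(torch.randn(1, 3, 48, 48)).shape)  # torch.Size([1, 32, 64, 64])
```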

Multimodal Attention-Based Fusion Model for Context-Aware Emotion Recognition

  • Vo, Minh-Cong;Lee, Guee-Sang
    • International Journal of Contents / Vol. 18, No. 3 / pp. 11-20 / 2022
  • Human emotion recognition is an exciting topic that has attracted researchers for a long time, and in recent years there has been increasing interest in exploiting contextual information for emotion recognition. Explorations in psychology show that emotional perception is influenced not only by facial expressions but also by contextual information from the scene, such as human activities, interactions, and body poses. These findings initiated a trend in computer vision of treating contexts as additional modalities from which to infer emotion alongside facial expressions. However, contextual information has not been fully exploited: the scene emotion created by the surrounding environment can shape how people perceive emotion. Moreover, additive fusion in multimodal training is impractical because the modalities do not contribute equally to the final prediction. This paper contributes to this growing area of research by exploring the effectiveness of the emotional scene gist in the input image for inferring the emotional state of the primary target, where the emotional scene gist includes the emotion, emotional feelings, and actions or events that directly trigger emotional reactions in the input image. We also present an attention-based fusion network that combines multimodal features according to their impact on the target's emotional state. We demonstrate the effectiveness of the method through a significant improvement on the EMOTIC dataset.
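Attention-based fusion of modality features, where learned scores replace simple additive fusion, can be sketched as follows; the feature dimension is illustrative, and the 26 output classes follow the EMOTIC category count.

```python
# Sketch of attention-based fusion across modality features (e.g., face,
# body, scene context): learned relevance scores weight each modality
# instead of adding them equally.
import torch
import torch.nn as nn

class ModalityAttentionFusion(nn.Module):
    def __init__(self, dim=128, n_emotions=26):
        super().__init__()
        self.score = nn.Linear(dim, 1)           # one relevance score per modality
        self.head = nn.Linear(dim, n_emotions)

    def forward(self, modalities):               # (B, M, dim): M modality features
        weights = torch.softmax(self.score(modalities), dim=1)  # (B, M, 1)
        fused = (weights * modalities).sum(dim=1)               # weighted sum
        return self.head(fused)

fusion = ModalityAttentionFusion()
print(fusion(torch.randn(2, 3, 128)).shape)  # torch.Size([2, 26])
```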