• Title/Abstract/Keywords: Fusion Learning

Search results: 319 (processing time: 0.025 s)

다수 분류기를 이용한 메타레벨 데이터마이닝 (Metalevel Data Mining through Multiple Classifier Fusion)

  • 김형관;신성우
    • 한국정보과학회:학술대회논문집 / 한국정보과학회 1999년도 가을 학술발표논문집 Vol.26 No.2 (2) / pp.551-553 / 1999
  • This paper explores the utility of a new classifier fusion approach to discrimination. Multiple classifier fusion, a popular approach in the field of pattern recognition, uses estimates of each individual classifier's local accuracy on training data sets. In this paper we investigate the effectiveness of fusion methods compared to individual algorithms, including the artificial neural network and k-nearest neighbor techniques. Moreover, we propose an efficient meta-classifier architecture based on an approximation of the posterior Bayes probabilities for learning the oracle.
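The accuracy-weighted fusion idea can be sketched as follows — a minimal numpy illustration, not the paper's exact meta-classifier; weighting each base classifier by its local accuracy on held-out data stands in for the posterior-Bayes approximation the abstract describes:

```python
import numpy as np

def fuse_predictions(probs, local_acc):
    """Weighted soft vote over base classifiers.

    probs     : array (n_classifiers, n_samples, n_classes) of class
                probabilities from each base classifier (e.g. a neural
                network and a k-NN model)
    local_acc : each classifier's accuracy on held-out training data,
                used as its fusion weight
    """
    w = np.asarray(local_acc, dtype=float)
    w = w / w.sum()                          # normalize the weights
    fused = np.tensordot(w, np.asarray(probs, dtype=float), axes=1)
    return fused.argmax(axis=1)              # fused class decision

# Two classifiers disagree on both samples; the more accurate one wins.
probs = [[[0.6, 0.4], [0.3, 0.7]],
         [[0.2, 0.8], [0.9, 0.1]]]
print(fuse_predictions(probs, local_acc=[0.9, 0.5]))  # → [1 0]
```

A full meta-classifier would learn these weights per region of the input space rather than globally.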


적외선 영상, 라이다 데이터 및 특성정보 융합 기반의 합성곱 인공신경망을 이용한 건물탐지 (Building Detection by Convolutional Neural Network with Infrared Image, LiDAR Data and Characteristic Information Fusion)

  • 조은지;이동천
    • 한국측량학회지 / Vol. 38, No. 6 / pp.635-644 / 2020
  • Research on object recognition, detection, and segmentation using deep learning (DL) is being applied in many fields, and images are mostly used as the training data for DL models. The purpose of this paper, however, is to segment objects and detect buildings by training Detectron2, an improved region-based convolutional neural network (R-CNN) model, not only on imagery but on multimodal training data that includes spatial-information characteristics. To this end, several features were extracted, such as the object outlines inherent in infrared aerial images and LiDAR data, and statistical texture information in the form of Haralick features. The training performance of a DL model depends not only on the quantity and characteristics of the data but also on the fusion method. Applying hybrid fusion, a combination of early fusion and late fusion, detected 33% more buildings. These experimental results demonstrate the complementary effect of jointly training on and fusing heterogeneous feature data.
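The three fusion strategies the abstract contrasts can be sketched schematically — a toy numpy illustration with a stand-in "network", not the Detectron2 pipeline, and the branch groupings are hypothetical:

```python
import numpy as np

def early_fusion(*modalities):
    """Stack co-registered rasters (e.g. infrared, LiDAR-derived, Haralick
    texture) as input channels before a single network."""
    return np.stack(modalities, axis=0)

def late_fusion(*scores):
    """Average per-modality detection scores produced by separate networks."""
    return np.mean(scores, axis=0)

def hybrid_fusion(branch_a, branch_b, net):
    """Run a network on each early-fused branch, then late-fuse the scores."""
    return late_fusion(net(branch_a), net(branch_b))

# Toy stand-in "network": mean activation per pixel.
net = lambda x: x.mean(axis=0)
ir, lidar, texture = (np.random.rand(4, 4) for _ in range(3))
a = early_fusion(ir, lidar)            # branch 1: infrared + LiDAR
b = early_fusion(ir, texture)          # branch 2: infrared + texture
print(hybrid_fusion(a, b, net).shape)  # (4, 4)
```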

Multi-focus Image Fusion using Fully Convolutional Two-stream Network for Visual Sensors

  • Xu, Kaiping;Qin, Zheng;Wang, Guolong;Zhang, Huidi;Huang, Kai;Ye, Shuxiong
    • KSII Transactions on Internet and Information Systems (TIIS) / Vol. 12, No. 5 / pp.2253-2272 / 2018
  • We propose a deep learning method for multi-focus image fusion. Unlike most existing pixel-level fusion methods, whether in the spatial domain or in the transform domain, our method directly learns an end-to-end fully convolutional two-stream network. The framework maps a pair of differently focused images to a clean version through a chain of convolutional layers, a fusion layer, and deconvolutional layers. Our deep fusion model is efficient and robust, yet achieves state-of-the-art fusion quality. We explore different parameter settings to trade off performance against speed. Moreover, the experimental results on our training dataset show that our network achieves good performance in both subjective visual perception and objective assessment metrics.
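For contrast with the learned two-stream network, the classical pixel-level baseline it improves on can be sketched in a few lines — per-pixel selection by a local focus measure; this is a conventional baseline for illustration, not the paper's method:

```python
import numpy as np

def focus_measure(img, k=1):
    """Local variance over a (2k+1)^2 window as a simple focus measure."""
    h, w = img.shape
    out = np.zeros((h, w))
    for i in range(h):
        for j in range(w):
            patch = img[max(i - k, 0):i + k + 1, max(j - k, 0):j + k + 1]
            out[i, j] = patch.var()
    return out

def fuse_multifocus(img_a, img_b):
    """Pick, per pixel, whichever source image is locally sharper."""
    choose_a = focus_measure(img_a) >= focus_measure(img_b)
    return np.where(choose_a, img_a, img_b)
```

A learned network replaces both the hand-crafted focus measure and the hard per-pixel decision with convolutional features and a trained fusion layer.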

Vehicle Image Recognition Using Deep Convolution Neural Network and Compressed Dictionary Learning

  • Zhou, Yanyan
    • Journal of Information Processing Systems / Vol. 17, No. 2 / pp.411-425 / 2021
  • In this paper, a vehicle recognition algorithm based on a deep convolutional neural network and a compressed dictionary is proposed. Firstly, the network structure for fine-grained vehicle recognition based on a convolutional neural network is introduced. Then, a vehicle recognition system based on a multi-scale pyramid convolutional neural network is constructed. The contribution of each network to the recognition result is adjusted by an adaptive fusion method that weights each network's share of the overall multi-scale network output according to its individual recognition accuracy. Then, compressed dictionary learning and data dimension reduction are carried out using an effective block-structure method combined with a very sparse random projection matrix, which resolves the computational complexity caused by high-dimensional features and shortens the dictionary learning time. Finally, a sparse representation classification method is used to recognize the vehicle type. The experimental results show that the detection performance of the proposed algorithm is stable in sunny, cloudy, and rainy weather, and that it adapts well to typical application scenarios such as occlusion and blurring, with an average recognition rate of more than 95%.
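The very sparse random projection step can be sketched as follows — a generic Li/Achlioptas-style construction, assuming the common sparsity choice s = sqrt(d); the paper's block-structure variant is not reproduced here:

```python
import numpy as np

def very_sparse_projection(d, k, s=None, rng=None):
    """Very sparse random projection matrix: entries are +sqrt(s) or
    -sqrt(s), each with probability 1/(2s), and 0 with probability
    1 - 1/s, so most entries are zero and X @ R is cheap to compute."""
    rng = np.random.default_rng(rng)
    s = s or int(np.sqrt(d))                 # common sparsity choice
    vals = rng.choice([np.sqrt(s), 0.0, -np.sqrt(s)],
                      size=(d, k),
                      p=[1 / (2 * s), 1 - 1 / s, 1 / (2 * s)])
    return vals / np.sqrt(k)                 # scale to roughly preserve norms

X = np.random.rand(10, 1000)                 # 10 high-dimensional features
R = very_sparse_projection(1000, 64, rng=0)
print((X @ R).shape)                         # (10, 64)
```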

Multi-parametric MRIs based assessment of Hepatocellular Carcinoma Differentiation with Multi-scale ResNet

  • Jia, Xibin;Xiao, Yujie;Yang, Dawei;Yang, Zhenghan;Lu, Chen
    • KSII Transactions on Internet and Information Systems (TIIS) / Vol. 13, No. 10 / pp.5179-5196 / 2019
  • To explore an effective non-invasive medical imaging diagnostic approach for hepatocellular carcinoma (HCC), we propose a method that combines multi-parametric data fusion, transfer learning, and multi-scale deep feature extraction. Firstly, to make full use of the complementary information in the different modalities, i.e., the multi-parametric MRI images, and to enhance their contribution to lesion diagnosis, we propose a data-level fusion strategy. Secondly, taking the fused data as input, a multi-scale residual neural network with SPP (spatial pyramid pooling) is used to learn discriminative feature representations. Thirdly, to mitigate the lack of training samples, we pre-train the proposed multi-scale residual neural network on a natural image dataset and fine-tune it with the chosen multi-parametric MRI images as complementary data. Comparative experiments on a dataset of clinical cases show that our approach, employing these multiple strategies, achieves the highest accuracy of 0.847±0.023 on the HCC differentiation classification problem. On the problem of discriminating HCC lesions from non-tumor areas, we achieve good performance, with accuracy, sensitivity, specificity, and AUC (area under the ROC curve) of 0.981±0.002, 0.981±0.002, 0.991±0.007, and 0.999±0.0008, respectively.
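The SPP idea referenced above — pooling a variable-sized feature map into a fixed-length vector — can be sketched in numpy; this is the generic SPP operation, not the paper's network:

```python
import numpy as np

def spatial_pyramid_pool(fmap, levels=(1, 2, 4)):
    """Max-pool a (C, H, W) feature map over an l x l grid at each pyramid
    level and concatenate, giving a fixed-length vector regardless of the
    input H and W (here C * (1 + 4 + 16) values for levels 1, 2, 4)."""
    c, h, w = fmap.shape
    feats = []
    for l in levels:
        hs = np.linspace(0, h, l + 1).astype(int)   # grid cell boundaries
        ws = np.linspace(0, w, l + 1).astype(int)
        for i in range(l):
            for j in range(l):
                cell = fmap[:, hs[i]:hs[i + 1], ws[j]:ws[j + 1]]
                feats.append(cell.max(axis=(1, 2)))  # max pool per channel
    return np.concatenate(feats)

# Different input sizes yield the same output length.
print(spatial_pyramid_pool(np.random.rand(8, 13, 17)).shape)  # (168,)
```

This is what lets one classifier head accept lesion crops of varying size.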

Comparison of GAN Deep Learning Methods for Underwater Optical Image Enhancement

  • Kim, Hong-Gi;Seo, Jung-Min;Kim, Soo Mee
    • 한국해양공학회지 / Vol. 36, No. 1 / pp.32-40 / 2022
  • Underwater optical images face various limitations that degrade image quality compared with optical images taken in the atmosphere. Attenuation that depends on the wavelength of light and reflection by very small floating particles cause low contrast, blurry clarity, and color degradation in underwater images. We constructed an image dataset of Korean seas and enhanced it by learning the characteristics of underwater images with the deep learning techniques CycleGAN (cycle-consistent adversarial network), UGAN (underwater GAN), and FUnIE-GAN (fast underwater image enhancement GAN). In addition, the underwater optical images were enhanced using the image processing technique of Image Fusion. For a quantitative performance comparison, we calculated UIQM (underwater image quality measure), which evaluates enhancement in terms of colorfulness, sharpness, and contrast, and UCIQE (underwater color image quality evaluation), which evaluates it in terms of chroma, luminance, and saturation. For 100 underwater images taken in Korean seas, the average UIQMs of CycleGAN, UGAN, and FUnIE-GAN were 3.91, 3.42, and 2.66, and the average UCIQEs were 29.9, 26.77, and 22.88, respectively. The average UIQM and UCIQE of Image Fusion were 3.63 and 23.59, respectively. CycleGAN and UGAN improved image quality qualitatively and quantitatively in various underwater environments, while the performance of FUnIE-GAN varied with the underwater environment. Image Fusion performed well in color correction and sharpness enhancement. These methods are expected to support the monitoring of underwater work and the autonomous operation of unmanned vehicles by providing clearer views of underwater situations.
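The shape of the UCIQE metric can be sketched as follows — note that real UCIQE is defined in the CIELab color space; this numpy sketch substitutes simple RGB-derived proxies for chroma, luminance contrast, and saturation, and the coefficients are the commonly cited ones, so it is illustrative only:

```python
import numpy as np

def uciqe_like(rgb, c=(0.4680, 0.2745, 0.2576)):
    """Rough UCIQE-style score: weighted sum of chroma standard deviation,
    luminance contrast, and mean saturation over an (H, W, 3) uint8 image.
    RGB-based proxy only; real UCIQE works in CIELab."""
    rgb = np.asarray(rgb, dtype=float) / 255.0
    y = 0.299 * rgb[..., 0] + 0.587 * rgb[..., 1] + 0.114 * rgb[..., 2]
    mx, mn = rgb.max(axis=-1), rgb.min(axis=-1)
    chroma = mx - mn
    sat = np.where(mx > 0, chroma / np.maximum(mx, 1e-9), 0.0)
    contrast = np.percentile(y, 99) - np.percentile(y, 1)  # robust range
    return c[0] * chroma.std() + c[1] * contrast + c[2] * sat.mean()
```

A flat gray image scores zero on all three terms, which matches the intuition that it is neither colorful nor contrasty.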

12각형 기반의 Q-learning과 SVM을 이용한 군집로봇의 목표물 추적 알고리즘 (Object tracking algorithm of Swarm Robot System for using SVM and Dodecagon based Q-learning)

  • 서상욱;양현창;심귀보
    • 한국지능시스템학회논문지 / Vol. 18, No. 3 / pp.291-296 / 2008
  • This paper proposes a dodecagon-based Q-learning algorithm using an SVM for object tracking in a swarm robot system. To show the validity of the proposed algorithm, we set up an experiment with several robots, obstacles, and a single target, in which each robot must find the hidden target. We ran the experiment with three methods: random search, a fusion model of DBAM and AMAB, and finally the SVM with dodecagon-based Q-learning proposed in this paper, and verified the effectiveness of the proposed approach by comparing the three.
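The core tabular Q-learning update behind the dodecagon model can be sketched as follows — a toy single-state example where the 12 actions stand for 30-degree headings; the paper's state space, SVM component, and reward design are not reproduced:

```python
import numpy as np

def q_learning_step(Q, state, action, reward, next_state,
                    alpha=0.1, gamma=0.9):
    """Standard tabular Q-learning update. Here the action space is the
    12 headings of the dodecagon (30-degree steps)."""
    target = reward + gamma * Q[next_state].max()
    Q[state, action] += alpha * (target - Q[state, action])
    return Q

rng = np.random.default_rng(0)
Q = np.zeros((1, 12))                  # single state, 12 heading actions
for _ in range(2000):
    a = rng.integers(12)               # explore all headings uniformly
    r = 1.0 if a == 3 else 0.0         # heading 3 (90 deg) reaches the target
    Q = q_learning_step(Q, 0, a, r, 0)
print(Q.argmax())                      # → 3
```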

Temporal Fusion Transformers와 심층 학습 방법을 사용한 다층 수평 시계열 데이터 분석 (Temporal Fusion Transformers and Deep Learning Methods for Multi-Horizon Time Series Forecasting)

  • 김인경;김대희;이재구
    • 정보처리학회논문지:소프트웨어 및 데이터공학 / Vol. 11, No. 2 / pp.81-86 / 2022
  • Time series data are collected and used in many real-world domains such as stocks, IoT, and factory automation, and accurate time series forecasting has traditionally been an important research topic because it improves operational efficiency in those domains. Multi-horizon forecasting, a representative time series analysis method that can extract improved features from the whole series, has recently produced better forecasts by also exploiting the heterogeneity inherent in time series data with auxiliary information. However, most deep-learning-based time series models do not reflect this heterogeneity. We therefore applied the well-known temporal fusion transformers method to multi-horizon forecasting of real-world data while accounting for heterogeneity. As a result, we confirmed that the method achieves higher accuracy than existing forecasting models on real-life time series data such as stock prices, fine dust concentrations, and electricity consumption.
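The multi-horizon setting can be made concrete with a window-building sketch — a TFT would additionally receive known future covariates and static metadata to capture the heterogeneity discussed above; this numpy sketch only windows the target series:

```python
import numpy as np

def multi_horizon_windows(series, lookback, horizon):
    """Build (past-window, future-horizon) training pairs: the model sees
    `lookback` past steps and predicts all `horizon` future steps jointly,
    rather than one step at a time."""
    X, Y = [], []
    for t in range(lookback, len(series) - horizon + 1):
        X.append(series[t - lookback:t])       # encoder input
        Y.append(series[t:t + horizon])        # all horizons at once
    return np.array(X), np.array(Y)

X, Y = multi_horizon_windows(np.arange(100.0), lookback=24, horizon=6)
print(X.shape, Y.shape)                        # (71, 24) (71, 6)
```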

관성/고도 센서 융합을 위한 기계학습 기반 필터 파라미터 추정 (Machine Learning-Based Filter Parameter Estimation for Inertial/Altitude Sensor Fusion)

  • Hyeon-su Hwang;Hyo-jung Kim;Hak-tae Lee;Jong-han Kim
    • 한국항행학회논문지 / Vol. 27, No. 6 / pp.884-887 / 2023
  • Recently, research has been actively conducted to overcome the limitations of expensive single sensors and to reduce costs through the fusion of low-cost multi-variable sensors. This paper estimates state variables with asynchronous Kalman filters constructed using CVXPY, and uses cvxpylayers to learn the filter parameters of the low-cost sensor fusion by comparing the state variables estimated by CVXPY against ground-truth data.
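The underlying inertial/altitude fusion can be sketched as a 1-D Kalman filter — the noise parameters q and r are exactly what the paper learns via CVXPY and cvxpylayers; in this numpy sketch they are fixed by hand for illustration, and the model is a generic constant-velocity filter rather than the paper's asynchronous formulation:

```python
import numpy as np

def kalman_fuse(z_baro, accel, dt=0.01, q=0.5, r=2.0):
    """1-D Kalman filter fusing integrated acceleration (prediction) with
    barometric altitude (measurement). q, r: process/measurement noise."""
    x = np.array([z_baro[0], 0.0])             # [altitude, vertical speed]
    P = np.eye(2)
    F = np.array([[1.0, dt], [0.0, 1.0]])      # constant-velocity model
    B = np.array([0.5 * dt**2, dt])            # acceleration input
    H = np.array([[1.0, 0.0]])                 # baro measures altitude only
    Qm, R = q * np.eye(2), np.array([[r]])
    est = []
    for z, a in zip(z_baro, accel):
        x = F @ x + B * a                      # predict from inertial input
        P = F @ P @ F.T + Qm
        K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)   # Kalman gain
        x = x + (K @ (np.array([z]) - H @ x)).ravel()  # altitude update
        P = (np.eye(2) - K @ H) @ P
        est.append(x[0])
    return np.array(est)
```

Learning q and r by gradient descent through the filter (the cvxpylayers idea) replaces the hand-tuning shown here.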

Learning Curve and Complications Experience of Oblique Lateral Interbody Fusion : A Single-Center 143 Consecutive Cases

  • Oh, Bu Kwang;Son, Dong Wuk;Lee, Su Hun;Lee, Jun Seok;Sung, Soon Ki;Lee, Sang Weon;Song, Geun Sung
    • Journal of Korean Neurosurgical Society / Vol. 64, No. 3 / pp.447-459 / 2021
  • Objective : Oblique lateral interbody fusion (OLIF) is becoming the preferred treatment for degenerative lumbar diseases. As beginners, we performed 143 surgeries over 19 months. In these consecutive cases, we analyzed the learning curve and reviewed the complications in our experience. Methods : This was a retrospective study; however, complications that are well known from the previous literature were strictly recorded prospectively. We followed the changes in estimated blood loss (EBL), operation time, and transient psoas paresis with case accumulation to analyze the learning curve. Results : Complication-free patients accounted for 43.6% overall (12.9% in the first 70 patients and 74.3% in the last 70 patients). The most common complication was transient psoas paresis (n=52). Most of these complications occurred in the early stage of learning. C-reactive protein normalization was delayed in seven patients (4.89%). The operation time showed a decreasing trend over the cases; however, EBL did not show any significant change. Notable operation-induced complications were cage malposition, vertebral body fracture, injury to the ureter, and injury to the lumbar vein. Conclusion : With progression along the learning curve, the operation time and psoas paresis decreased. Selecting an appropriately sized cage and clearly dissecting the anterior border of the psoas muscle are important to prevent OLIF-specific complications.