• Title/Summary/Keyword: 확장된 합성곱 (dilated convolution)

Multi-band multi-scale DenseNet with dilated convolution for background music separation (배경음악 분리를 위한 확장된 합성곱을 이용한 멀티 밴드 멀티 스케일 DenseNet)

  • Heo, Woon-Haeng; Kim, Hyemi; Kwon, Oh-Wook
    • The Journal of the Acoustical Society of Korea / v.38 no.6 / pp.697-702 / 2019
  • We propose a multi-band multi-scale DenseNet with dilated convolution that separates background music signals from broadcast content. Dilated convolution can learn the multi-scale context information represented in the spectrogram. In computer simulation experiments, the proposed architecture is shown to improve the Signal-to-Distortion Ratio (SDR) by 0.15 dB and 0.27 dB in 0 dB and -10 dB Signal-to-Noise Ratio (SNR) environments, respectively.
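Dilated convolution enlarges the receptive field without adding weights by inserting gaps of `dilation - 1` samples between kernel taps. A minimal NumPy sketch of the operation itself (not the paper's DenseNet architecture):

```python
import numpy as np

def dilated_conv1d(x, w, dilation=1):
    """Valid-mode 1D cross-correlation with a dilated kernel."""
    k = len(w)
    span = (k - 1) * dilation + 1          # effective receptive field
    out_len = len(x) - span + 1
    return np.array([
        sum(w[j] * x[i + j * dilation] for j in range(k))
        for i in range(out_len)
    ])

x = np.arange(8, dtype=float)
w = np.array([1.0, 1.0, 1.0])
print(dilated_conv1d(x, w, dilation=1))    # each output spans 3 input samples
print(dilated_conv1d(x, w, dilation=2))    # same 3 weights now span 5 samples
```

Stacking layers with growing dilation rates (1, 2, 4, ...) is what lets such networks see multi-scale spectrogram context with few parameters.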

Single-Image Super-Resolution Using CNN and Performance Improvement through Receptive Field Expansion (CNN 을 이용한 단일영상 고해상도 복원 및 수용영역 확장을 통한 성능 향상)

  • Park, Karam; Cho, Nam Ik
    • Proceedings of the Korean Society of Broadcast Engineers Conference / 2019.11a / pp.76-79 / 2019
  • As the performance of convolutional neural networks has improved, attempts to apply them to various image processing problems have increased. Super-resolution is one such problem, and most attempts to obtain higher performance have focused on making the network deeper. In this paper, to improve the performance of a CNN for super-resolution, we take the approach of expanding the receptive field rather than increasing depth. The proposed model has two branches inside the network: one branch uses dilated convolution to expand the receptive field, and the other processes the features produced by that branch. EDSR is used as the base model, and the final model achieves an average PSNR of 32.46 dB with 4.79 M parameters. However, a limitation is that the model's structure is complex, which makes it difficult to also apply the depth-increasing approach.

Machine Learning Data Extension Way for Confirming Genuine of Trademark Image which is Rotated (회전한 상표 이미지의 진위 결정을 위한 기계 학습 데이터 확장 방법)

  • Gu, Bongen
    • Journal of Platform Technology / v.8 no.1 / pp.16-23 / 2020
  • To protect trademark copyright, a convolutional neural network can be used to verify whether a trademark image is genuine. However, repeatedly training on one trademark image degrades machine-learning performance because of overfitting, so this type of application generates training data in various ways. Even so, if a genuine trademark image is rotated, it is classified as not genuine. In this paper, we propose a way to extend the training data so that rotated genuine trademark images can be verified. Our proposed method generates rotated images from the genuine trademark image as training data. To show its effectiveness, we use a CNN model and evaluate accuracy on test images. The evaluation results show that our method can generate training data for machine-learning applications that verify rotated trademark images.
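The data-extension idea can be sketched with right-angle rotations (the paper's actual rotation angles and generation pipeline may differ; arbitrary angles would need an interpolating rotation such as `scipy.ndimage.rotate`):

```python
import numpy as np

def rotation_augment(image, angles=(90, 180, 270)):
    """Generate rotated copies of a genuine trademark image for training."""
    augmented = [image]                       # keep the original orientation
    for angle in angles:
        augmented.append(np.rot90(image, k=angle // 90))
    return augmented

img = np.arange(9).reshape(3, 3)              # stand-in for a trademark image
batch = rotation_augment(img)
print(len(batch))                             # 4 training samples from 1 image
```

Training the CNN on all orientations is what keeps a rotated genuine trademark from being rejected.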

Snoring sound detection method using attention-based convolutional bidirectional gated recurrent unit (주의집중 기반의 합성곱 양방향 게이트 순환 유닛을 이용한 코골이 소리 검출 방식)

  • Kim, Min-Soo; Lee, Gi Yong; Kim, Hyoung-Gook
    • The Journal of the Acoustical Society of Korea / v.40 no.2 / pp.155-160 / 2021
  • This paper proposes an automatic method for detecting snoring sounds, one of the important symptoms of sleep apnea patients. In the proposed method, sound signals generated during sleep are analyzed to detect sound-activity sections, and a spectrogram transformed from each detected section is fed to a classifier based on a Convolutional Bidirectional Gated Recurrent Unit (CBGRU) with an attention mechanism. The attention mechanism improves snoring detection performance by extending the CBGRU model to learn discriminative feature representations for snoring detection. The experimental results show that the proposed method improves accuracy by approximately 3.1 % to 5.5 % over the existing method.
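The attention step can be sketched as softmax-weighted pooling of the recurrent hidden states (a generic formulation, not the paper's exact one; `w` stands in for learned scoring parameters):

```python
import numpy as np

def attention_pool(h, w):
    """h: (T, D) hidden states over T frames; w: (D,) learned score vector.
    Softmax the per-frame scores and return the weighted sum of states."""
    scores = h @ w                          # one relevance score per frame
    alpha = np.exp(scores - scores.max())
    alpha /= alpha.sum()                    # attention weights, sum to 1
    return alpha @ h                        # context vector, shape (D,)

rng = np.random.default_rng(0)
h = rng.normal(size=(5, 4))                 # 5 frames of 4-dim GRU outputs
context = attention_pool(h, rng.normal(size=4))
print(context.shape)                        # (4,)
```

The classifier then sees one context vector per spectrogram section instead of a flat average, so discriminative frames dominate.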

Performance Evaluation of AHDR Model using Channel Attention (채널 어텐션을 이용한 AHDR 모델의 성능 평가)

  • Youn, Seok Jun; Lee, Keuntek; Cho, Nam Ik
    • Proceedings of the Korean Society of Broadcast Engineers Conference / 2021.06a / pp.335-338 / 2021
  • In this paper, we evaluate how performance changes when a channel attention technique is applied to the existing AHDRNet. Channel attention is applied by adding further convolutional layers between the DRDBs (Dilated Residual Dense Blocks) in the merging network of the original model and after the dilated convolutional layers inside each DRDB. Using the Kalantari dataset and comparing PSNR (Peak Signal-to-Noise Ratio), the original AHDRNet achieves a PSNR of 42.1656 while the proposed model achieves 42.8135, confirming the improvement.
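Channel attention in the squeeze-and-excitation style can be sketched as follows (a generic illustration; the exact placement and layer shapes in the evaluated AHDR variant differ, and `w1`/`w2` stand in for learned weights):

```python
import numpy as np

def channel_attention(features, w1, w2):
    """features: (C, H, W). Squeeze: global-average-pool each channel;
    excite: two-layer bottleneck with a sigmoid gate; rescale channels."""
    squeeze = features.mean(axis=(1, 2))            # (C,) channel statistics
    hidden = np.maximum(squeeze @ w1, 0.0)          # ReLU bottleneck
    gate = 1.0 / (1.0 + np.exp(-(hidden @ w2)))     # per-channel gate in (0, 1)
    return features * gate[:, None, None]           # reweighted feature map

rng = np.random.default_rng(0)
feat = rng.normal(size=(8, 4, 4))                   # 8 channels of 4x4 features
out = channel_attention(feat, rng.normal(size=(8, 2)), rng.normal(size=(2, 8)))
print(out.shape)                                    # (8, 4, 4)
```

Because the gate is computed from global channel statistics, informative channels are amplified relative to the rest before merging.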

Semi-automatic Expansion for a Chatting Corpus Based on Similarity Measure Using Utterance Embedding by CNN (합성곱 신경망에 의한 발화 임베딩을 사용한 유사도 측정 기반의 채팅 말뭉치 반자동 확장 방법)

  • An, Jaehyun; Ko, Youngjoong
    • Annual Conference on Human and Language Technology / 2018.10a / pp.95-100 / 2018
  • Building a good chatting system requires a large, high-quality chat corpus, but constructing one is expensive. This paper therefore proposes a method for semi-automatically expanding a chat corpus using large amounts of utterance data such as movie subtitles and drama scripts. To expand the corpus, chat similarity is computed against a pre-built chat corpus using a similarity measure, and a pair is judged to be a valid chat pair if its similarity exceeds a threshold obtained through experiments. To effectively compute the similarity of very short chat-style utterances, we propose generating utterance-level representations from morpheme-level embedding vectors with a convolutional neural network. Experimental results show that, compared with the baseline representation method using TF, precision, recall, and F1 improved by 5.16 %p, 6.09 %p, and 5.73 %p respectively, yielding a semi-automatic corpus construction model with 61.28 % precision, 53.19 % recall, and 56.94 % F1.
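The accept/reject decision on a candidate pair can be sketched as a cosine-similarity threshold over utterance embeddings (the embeddings would come from the CNN encoder; the 0.7 threshold below is illustrative, not the experimentally tuned value):

```python
import numpy as np

def is_valid_chat_pair(emb_a, emb_b, threshold=0.7):
    """Cosine similarity between two utterance embeddings; pairs scoring
    above the threshold are added to the expanded corpus."""
    sim = float(emb_a @ emb_b /
                (np.linalg.norm(emb_a) * np.linalg.norm(emb_b)))
    return sim >= threshold, sim

a = np.array([1.0, 2.0, 0.5])
accept, sim = is_valid_chat_pair(a, a * 3.0)   # same direction -> sim = 1.0
print(accept, round(sim, 2))
```

The "semi-automatic" part is that pairs passing the threshold still go through a light human check before entering the corpus.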

Speech detection from broadcast contents using multi-scale time-dilated convolutional neural networks (다중 스케일 시간 확장 합성곱 신경망을 이용한 방송 콘텐츠에서의 음성 검출)

  • Jang, Byeong-Yong; Kwon, Oh-Wook
    • Phonetics and Speech Sciences / v.11 no.4 / pp.89-96 / 2019
  • In this paper, we propose a deep learning architecture that can effectively detect speech segments in broadcast content. We also propose a multi-scale time-dilated layer for learning the temporal changes of feature vectors. We implement several comparison models to verify the performance of the proposed model and calculate the frame-by-frame F-score, precision, and recall. Both the proposed model and the comparison models are trained with the same training data: 32 hours of Korean broadcast data composed of various genres (drama, news, documentary, and so on). Our proposed model shows the best performance on Korean broadcast data with an F-score of 91.7 %, and also achieves the highest performance on British and Spanish broadcast data with F-scores of 87.9 % and 92.6 %, respectively. As a result, our proposed model can contribute to improving speech detection performance by learning the temporal changes of the feature vectors.

Extended High Dimensional Clustering using Iterative Two Dimensional Projection Filtering (반복적 2차원 프로젝션 필터링을 이용한 확장 고차원 클러스터링)

  • Lee, Hye-Myeong; Park, Yeong-Bae
    • The KIPS Transactions: Part D / v.8D no.5 / pp.573-580 / 2001
  • Large amounts of high-dimensional data contain a significant amount of noise due to their inherent sparsity, which makes high-dimensional clustering difficult. CLIP was developed as a clustering algorithm to support the characteristics of high-dimensional data. CLIP is based on incremental one-dimensional projections onto each axis and finds product sets of the one-dimensional clusters. These product sets contain all the high-dimensional clusters, but they may also contain noise. In this paper, we propose an extended CLIP algorithm that refines the product sets containing clusters. We remove high-dimensional noise by iteratively applying two-dimensional projections to the product sets already found by CLIP. To evaluate the extended algorithm, we demonstrate its effectiveness through a series of experiments on synthetic data sets.
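One iteration of the filtering idea can be sketched as: project the points onto a pair of dimensions, histogram the plane, and discard points falling in sparse cells (the grid size and density threshold here are illustrative; the paper iterates this over the product sets found by CLIP):

```python
import numpy as np

def two_d_projection_filter(points, dims=(0, 1), bins=4, min_count=2):
    """Keep only points whose 2D projection lands in a dense histogram cell."""
    proj = points[:, dims]
    hist, xedges, yedges = np.histogram2d(proj[:, 0], proj[:, 1], bins=bins)
    # map each point back to its cell index to look up the cell's count
    xi = np.clip(np.digitize(proj[:, 0], xedges[1:-1]), 0, bins - 1)
    yi = np.clip(np.digitize(proj[:, 1], yedges[1:-1]), 0, bins - 1)
    return points[hist[xi, yi] >= min_count]

pts = np.array([[0.0, 0.0], [0.1, 0.1], [0.2, 0.2], [0.3, 0.1], [0.1, 0.3],
                [10.0, 10.0]])                 # dense cluster plus one noise point
kept = two_d_projection_filter(pts)
print(len(kept))                               # 5: the isolated point is dropped
```

Repeating this over all dimension pairs removes noise that survives any single one-dimensional projection.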

Artificial neural network for classifying with epilepsy MEG data (뇌전증 환자의 MEG 데이터에 대한 분류를 위한 인공신경망 적용 연구)

  • Yujin Han; Junsik Kim; Jaehee Kim
    • The Korean Journal of Applied Statistics / v.37 no.2 / pp.139-155 / 2024
  • This study performed a multi-classification task to classify mesial temporal lobe epilepsy with left hippocampal sclerosis (left mTLE), mesial temporal lobe epilepsy with right hippocampal sclerosis (right mTLE), and healthy controls (HC) using magnetoencephalography (MEG) data. We applied various artificial neural networks and compared the results. Modeling with convolutional neural networks (CNN), recurrent neural networks (RNN), and graph neural networks (GNN) showed that average k-fold accuracy was highest for the CNN-based model, followed by the GNN-based and RNN-based models, while wall time was shortest for the RNN-based model, followed by the GNN-based and CNN-based models. The graph neural network, which balances accuracy and time well and scales naturally to network-structured data, is the most suitable model for future brain research.

Performance Comparison of CNN-Based Image Classification Models for Drone Identification System (드론 식별 시스템을 위한 합성곱 신경망 기반 이미지 분류 모델 성능 비교)

  • YeongWan Kim; DaeKyun Cho; GunWoo Park
    • The Journal of the Convergence on Culture Technology / v.10 no.4 / pp.639-644 / 2024
  • Recent developments in the use of drones on battlefields, extending beyond reconnaissance to firepower support, have greatly increased the importance of technologies for early automatic drone identification. In this study, to identify an effective image classification model that can distinguish drones from other aerial targets of similar size and appearance, such as birds and balloons, we utilized a dataset of 3,600 images collected from the internet. We adopted a transfer learning approach that combines the feature extraction capabilities of three pre-trained convolutional neural network models (VGG16, ResNet50, InceptionV3) with an additional classifier. Specifically, we conducted a comparative analysis of the performance of these three pre-trained models to determine the most effective one. The results showed that the InceptionV3 model achieved the highest accuracy at 99.66%. This research represents a new endeavor in utilizing existing convolutional neural network models and transfer learning for drone identification, which is expected to make a significant contribution to the advancement of drone identification technologies.
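The transfer-learning recipe (a frozen pre-trained feature extractor plus a small trainable classifier) can be illustrated with a stand-in extractor; the random projection and toy blobs below are only placeholders for VGG16/ResNet50/InceptionV3 features and the drone/bird/balloon images:

```python
import numpy as np

rng = np.random.default_rng(0)

# Frozen "pre-trained" extractor: a fixed random projection + ReLU stands in
# for the convolutional backbone; its weights are never updated.
W_frozen = rng.normal(size=(64, 16))

def extract_features(images):
    return np.maximum(images @ W_frozen, 0.0)

def train_head(X, y, classes=3, lr=0.1, epochs=200):
    """Train only the classifier head (softmax regression) on frozen features."""
    X = (X - X.mean(axis=0)) / (X.std(axis=0) + 1e-8)   # standardize features
    W = np.zeros((X.shape[1], classes))
    Y = np.eye(classes)[y]
    for _ in range(epochs):
        logits = X @ W
        p = np.exp(logits - logits.max(axis=1, keepdims=True))
        p /= p.sum(axis=1, keepdims=True)
        W -= lr * X.T @ (p - Y) / len(X)                # update the head only
    return W, (np.argmax(X @ W, axis=1) == y).mean()

# Toy stand-ins for the three target classes: well-separated Gaussian blobs.
labels = np.repeat(np.arange(3), 30)
images = rng.normal(size=(90, 64)) + 3.0 * labels[:, None]
W_head, acc = train_head(extract_features(images), labels)
print(f"training accuracy of the transferred head: {acc:.2f}")
```

The same split of responsibilities (fixed backbone, small trained head) is what makes the comparison across the three pre-trained backbones a fair one.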