• Title/Summary/Keyword: Convolutional Neural Network (콘볼루션 신경망)


A study on the waveform-based end-to-end deep convolutional neural network for weakly supervised sound event detection (약지도 음향 이벤트 검출을 위한 파형 기반의 종단간 심층 콘볼루션 신경망에 대한 연구)

  • Lee, Seokjin; Kim, Minhan; Jeong, Youngho
    • The Journal of the Acoustical Society of Korea / v.39 no.1 / pp.24-31 / 2020
  • In this paper, a deep convolutional neural network for sound event detection is studied. In particular, an end-to-end neural network, which generates detection results directly from the input audio waveform, is studied for the weakly supervised problem, which includes weakly-labeled and unlabeled datasets. The proposed system is based on a network structure consisting of deeply stacked 1-dimensional convolutional layers, enhanced by skip connections and a gating mechanism. Additionally, the proposed system is improved by post-processing of the sound event detection results, and a training step using the mean-teacher model is added to deal with the weakly supervised data. The proposed system was evaluated on the Detection and Classification of Acoustic Scenes and Events (DCASE) 2019 Task 4 dataset, and the results show that it achieves F1-scores of 54 % (segment-based) and 32 % (event-based).
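As a rough illustration of the building block described above, the following sketch shows a gated 1-D convolutional layer with a skip connection, assuming PyTorch; the channel count, kernel size, and input shape are illustrative and not the authors' exact configuration.

```python
import torch
import torch.nn as nn

class Gated1DConvBlock(nn.Module):
    """One gated 1-D convolution block with a residual (skip) connection."""
    def __init__(self, channels, kernel_size=3):
        super().__init__()
        padding = kernel_size // 2
        self.conv = nn.Conv1d(channels, channels, kernel_size, padding=padding)
        self.gate = nn.Conv1d(channels, channels, kernel_size, padding=padding)

    def forward(self, x):
        # Gating: element-wise product of a tanh feature path and a sigmoid gate.
        h = torch.tanh(self.conv(x)) * torch.sigmoid(self.gate(x))
        return x + h  # skip connection

# Example: a feature batch of shape (batch, channels, time samples).
x = torch.randn(4, 64, 16000)
block = Gated1DConvBlock(64)
y = block(x)  # same shape as x
```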

Comparison of environmental sound classification performance of convolutional neural networks according to audio preprocessing methods (오디오 전처리 방법에 따른 콘벌루션 신경망의 환경음 분류 성능 비교)

  • Oh, Wongeun
    • The Journal of the Acoustical Society of Korea / v.39 no.3 / pp.143-149 / 2020
  • This paper presents the effect of the feature extraction methods used in audio preprocessing on the classification performance of Convolutional Neural Networks (CNN). We extract the mel spectrogram, log mel spectrogram, Mel Frequency Cepstral Coefficient (MFCC), and delta MFCC from the UrbanSound8K dataset, which is widely used in environmental sound classification studies, and then scale the data to three distributions. Using these data, we test four CNNs, VGG16, and MobileNetV2 networks to assess performance according to the audio features and scaling. The highest recognition rate is achieved when the unscaled log mel spectrogram is used as the audio feature. Although this result may not hold for all audio recognition problems, it is useful for classifying the environmental sounds included in UrbanSound8K.
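The four audio features compared above can be extracted with standard tooling; the sketch below assumes librosa, and the file name and parameter values (n_mels, n_mfcc) are illustrative rather than the paper's settings.

```python
import librosa

# Load one clip (UrbanSound8K clips are at most 4 s long); the path is a placeholder.
y, sr = librosa.load("dog_bark.wav", sr=22050)

# Mel spectrogram and its log-scaled (dB) version.
mel = librosa.feature.melspectrogram(y=y, sr=sr, n_mels=128)
log_mel = librosa.power_to_db(mel)

# MFCCs and their first-order deltas.
mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=40)
delta_mfcc = librosa.feature.delta(mfcc)

print(mel.shape, log_mel.shape, mfcc.shape, delta_mfcc.shape)
```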

Abusive Sentence Detection using Deep Learning in Online Game (딥러닝를 사용한 온라인 게임에서의 욕설 탐지)

  • Park, Sunghee; Kim, Huy Kang; Woo, Jiyoung
    • Proceedings of the Korean Society of Computer Information Conference / 2019.07a / pp.13-14 / 2019
  • Abusive language is one of the biggest sources of discomfort in online games. Until now, abusive chat has been filtered using lists of banned words, but because of the characteristics of Korean, there are many ways to evade such filters, such as altering a word or inserting digits in the middle, so this approach is not effective. In this paper, we therefore build a model that detects abusive sentences with a convolutional neural network, one of the deep learning techniques, based on chat data collected from the actual online game 'Archeage'. When the Korean consonants and vowels (jamo) are separated, an accuracy of 87 % is obtained. Separating the text character by character gives slightly higher accuracy, but considering that the vocabulary becomes more than ten times larger than with jamo separation, the jamo-level decomposition is more efficient.
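The jamo-level decomposition mentioned above follows directly from the Unicode composition rule for Hangul syllables; the following sketch is a minimal, self-contained implementation and is not the authors' preprocessing code.

```python
# Unicode Hangul syllable decomposition into jamo (consonant/vowel units).
CHOSEONG = list("ㄱㄲㄴㄷㄸㄹㅁㅂㅃㅅㅆㅇㅈㅉㅊㅋㅌㅍㅎ")
JUNGSEONG = list("ㅏㅐㅑㅒㅓㅔㅕㅖㅗㅘㅙㅚㅛㅜㅝㅞㅟㅠㅡㅢㅣ")
JONGSEONG = [""] + list("ㄱㄲㄳㄴㄵㄶㄷㄹㄺㄻㄼㄽㄾㄿㅀㅁㅂㅄㅅㅆㅇㅈㅊㅋㅌㅍㅎ")

def to_jamo(text):
    out = []
    for ch in text:
        code = ord(ch) - 0xAC00
        if 0 <= code < 11172:            # composed Hangul syllable block
            out.append(CHOSEONG[code // 588])
            out.append(JUNGSEONG[(code % 588) // 28])
            final = JONGSEONG[code % 28]
            if final:
                out.append(final)
        else:
            out.append(ch)               # keep non-Hangul characters as-is
    return out

print(to_jamo("욕설 탐지"))  # ['ㅇ', 'ㅛ', 'ㄱ', 'ㅅ', 'ㅓ', 'ㄹ', ' ', 'ㅌ', 'ㅏ', 'ㅁ', 'ㅈ', 'ㅣ']
```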

Accelerating Deep learning based Super resolution algorithm using GPU (GPU 를 이용한 콘볼루션 뉴럴 네트워크 기반 초해상화 설계 및 구현)

  • Ki, Sehwan; Choi, Jaeseok; Kim, Sooye; Kim, Munchurl
    • Proceedings of the Korean Society of Broadcast Engineers Conference / 2017.06a / pp.190-191 / 2017
  • In this paper, we present a method for running a super-resolution algorithm trained with a deep convolutional neural network in real time through GPU programming. As deep learning has become widely adopted, many image processing algorithms have been studied on a deep learning basis. However, deep learning-based algorithms, which require a large amount of computation, have had difficulty reaching real-time processing for high-resolution video of UHD or above. To solve this problem, we use a GPU capable of massively parallel processing and implement a deep super-resolution algorithm that upscales a 2K input video to a 4K output video at a processing speed of 30 fps or more.
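As a hedged sketch of the kind of GPU-side throughput measurement implied above, the code below times a toy x2 upscaling network on CUDA with PyTorch; the network, frame size, and timing loop are illustrative and do not reproduce the authors' optimized GPU implementation.

```python
import time
import torch
import torch.nn as nn

# A toy x2 upscaling network standing in for the trained SR model (requires CUDA).
model = nn.Sequential(
    nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
    nn.Conv2d(32, 3 * 4, 3, padding=1),
    nn.PixelShuffle(2),               # 2K (1920x1080) -> 4K (3840x2160)
).cuda().eval()

frame = torch.rand(1, 3, 1080, 1920, device="cuda")  # one 2K frame

with torch.no_grad():
    torch.cuda.synchronize()
    start = time.time()
    for _ in range(30):
        out = model(frame)
    torch.cuda.synchronize()
    fps = 30 / (time.time() - start)

print(out.shape, f"{fps:.1f} fps")
```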

A Tensor Space Model based Deep Neural Network for Automated Text Classification (자동문서분류를 위한 텐서공간모델 기반 심층 신경망)

  • Lim, Pu-reum; Kim, Han-joon
    • Database Research / v.34 no.3 / pp.3-13 / 2018
  • Text classification is one of the text mining technologies that classifies a given textual document into its appropriate categories and is used in various fields such as spam email detection, news classification, question answering, emotional analysis, and chat bots. In general, text classification systems utilize machine learning algorithms, and among them, naïve Bayes and support vector machines, which are suitable for text data, are known to show reasonable performance. Recently, with the development of deep learning technology, several studies on applying deep neural networks such as recurrent neural networks (RNN) and convolutional neural networks (CNN) have been introduced to improve the performance of text classification systems. However, current text classification techniques have not yet reached a fully satisfactory level. This paper focuses on the fact that text data is usually expressed as a vector over word dimensions only, which impairs the semantic information inherent in the text, and proposes a neural network architecture based upon a semantic tensor space model.

Efficient Convolutional Neural Network with low Complexity (저연산량의 효율적인 콘볼루션 신경망)

  • Lee, Chanho; Lee, Joongkyung; Ho, Cong Ahn
    • Journal of IKEEE / v.24 no.3 / pp.685-690 / 2020
  • We propose an efficient convolutional neural network with much lower computational complexity and higher accuracy, based on MobileNet V2, for mobile or edge devices. The proposed network consists of bottleneck layers with larger expansion factors and adjusted numbers of channels, and excludes a few layers; therefore, the computational complexity is reduced by half. The performance of the proposed network is verified by measuring the accuracy and execution times on CPU and GPU using the ImageNet100 dataset. In addition, the execution time on GPU depends on the CNN architecture.
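For reference, a MobileNetV2-style inverted-residual (bottleneck) block with a configurable expansion factor can be sketched as follows, assuming PyTorch; the specific expansion factor and channel numbers are illustrative, not the exact values chosen in the paper.

```python
import torch
import torch.nn as nn

class InvertedResidual(nn.Module):
    """MobileNetV2-style bottleneck: 1x1 expand -> 3x3 depthwise -> 1x1 project."""
    def __init__(self, in_ch, out_ch, stride=1, expansion=6):
        super().__init__()
        hidden = in_ch * expansion
        self.use_residual = (stride == 1 and in_ch == out_ch)
        self.block = nn.Sequential(
            nn.Conv2d(in_ch, hidden, 1, bias=False),
            nn.BatchNorm2d(hidden), nn.ReLU6(inplace=True),
            nn.Conv2d(hidden, hidden, 3, stride, 1, groups=hidden, bias=False),
            nn.BatchNorm2d(hidden), nn.ReLU6(inplace=True),
            nn.Conv2d(hidden, out_ch, 1, bias=False),
            nn.BatchNorm2d(out_ch),
        )

    def forward(self, x):
        out = self.block(x)
        return x + out if self.use_residual else out

# A larger expansion factor (e.g. 10 instead of 6) widens the depthwise stage.
block = InvertedResidual(32, 32, expansion=10)
y = block(torch.randn(1, 32, 56, 56))
```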

Convolutional neural network based amphibian sound classification using covariance and modulogram (공분산과 모듈로그램을 이용한 콘볼루션 신경망 기반 양서류 울음소리 구별)

  • Ko, Kyungdeuk; Park, Sangwook; Ko, Hanseok
    • The Journal of the Acoustical Society of Korea / v.37 no.1 / pp.60-65 / 2018
  • In this paper, a covariance matrix and a modulogram are proposed for realizing amphibian sound classification using a CNN (Convolutional Neural Network). First, a database is established by collecting amphibian sounds, including those of endangered species, in natural environments. In order to apply the database to a CNN, acoustic signals of different lengths must be standardized. To standardize the acoustic signals, a covariance matrix, which captures distribution information, and a modulogram, which captures change over time, are extracted and used as inputs to the CNN. The experiment is conducted by varying the numbers of convolutional layers and fully-connected layers. For performance assessment, several conventional methods representing various feature extraction and classification approaches are considered. The results confirm that the convolutional layers have a greater impact on performance than the fully-connected layers, and that the CNN-based method attains the highest recognition rate, 99.07 %, among the considered methods.
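The covariance-based standardization of variable-length clips can be illustrated as below, assuming librosa and NumPy; the file name and STFT parameters are placeholders, and the modulogram feature is not shown.

```python
import numpy as np
import librosa

# Variable-length clips can be mapped to a fixed-size covariance matrix
# computed over spectrogram frames (frequency-by-frequency covariance).
y, sr = librosa.load("frog_call.wav", sr=None)             # path is illustrative
spec = np.abs(librosa.stft(y, n_fft=512, hop_length=256))  # shape (257, n_frames)
cov = np.cov(spec)                                         # shape (257, 257), independent of clip length
print(cov.shape)
```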

Scalable Video Coding using Super-Resolution based on Convolutional Neural Networks for Video Transmission over Very Narrow-Bandwidth Networks (초협대역 비디오 전송을 위한 심층 신경망 기반 초해상화를 이용한 스케일러블 비디오 코딩)

  • Kim, Dae-Eun; Ki, Sehwan; Kim, Munchurl; Jun, Ki Nam; Baek, Seung Ho; Kim, Dong Hyun; Choi, Jeung Won
    • Journal of Broadcast Engineering / v.24 no.1 / pp.132-141 / 2019
  • The need to transmit video data over a narrow-bandwidth channel persists even though video services over broadband are common. In this paper, we propose a scalable video coding framework for low-resolution video transmission over a very narrow-bandwidth network: decoded frames of the base layer are super-resolved with a convolutional neural network and used as predictions for the enhancement layer, improving the coding efficiency. In contrast to the conventional scalable high efficiency video coding (SHVC) standard, in which upscaling is performed with a fixed filter, the proposed framework replaces the fixed up-scaling filter with a trained convolutional neural network for super-resolution. For this, we propose a neural network structure with skip connections and a residual learning technique and train it according to the application scenario of the video coding framework. For an application scenario in which a video with a resolution of 352 × 288 and a frame rate of 8 fps is encoded at 110 kbps, the quality of the proposed scalable video coding framework is higher than that of the SHVC framework.
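A minimal sketch of a super-resolution network with a global skip connection and residual learning, in the spirit of the up-scaling replacement described above, assuming PyTorch; the depth, channel width, and single-channel (luma) input are illustrative and not the paper's exact architecture.

```python
import torch
import torch.nn as nn

class ResidualSR(nn.Module):
    """Upscale a base-layer frame and predict only the residual detail."""
    def __init__(self, scale=2, channels=64, num_layers=8):
        super().__init__()
        self.upsample = nn.Upsample(scale_factor=scale, mode="bicubic", align_corners=False)
        layers = [nn.Conv2d(1, channels, 3, padding=1), nn.ReLU(inplace=True)]
        for _ in range(num_layers - 2):
            layers += [nn.Conv2d(channels, channels, 3, padding=1), nn.ReLU(inplace=True)]
        layers += [nn.Conv2d(channels, 1, 3, padding=1)]
        self.body = nn.Sequential(*layers)

    def forward(self, x):
        up = self.upsample(x)        # stands in for SHVC's fixed up-scaling filter
        return up + self.body(up)    # global skip connection: learn only the residual

frame = torch.rand(1, 1, 288, 352)   # one base-layer luma frame (352x288)
sr = ResidualSR(scale=2)(frame)      # upscaled to (1, 1, 576, 704)
```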

Convolutional neural network for multi polarization SAR recognition (다중 편광 SAR 영상 목표물 인식을 위한 딥 컨볼루션 뉴럴 네트워크)

  • Youm, Gwang-Young; Kim, Munchurl
    • Proceedings of the Korean Society of Broadcast Engineers Conference / 2017.06a / pp.102-104 / 2017
  • Recently, algorithms adopting a Convolutional Neural Network (CNN) have shown high performance in SAR image target recognition. A SAR image consists of four kinds of polarization information, but because of hardware and signal-processing costs, some data contain only a subset of the polarization channels. We therefore interpret SAR image data as multi-modal data and propose a convolutional neural network that works well on such multi-modal data. We set a scale factor inversely proportional to the number of modalities contained in the data and use it to rescale the input. By adjusting the input size, the network keeps the size of its feature maps constant regardless of the number of modalities. The proposed input scaling also reduces the number of dead filters in the network, which means that the network makes better use of its capacity. Moreover, the proposed network exploits multiple modalities when building feature maps, which indicates that it learns correlations between modalities. As a result, the proposed network outperforms an ordinary network without input scaling. In addition, using the concept of transfer learning, we train the network starting from the data with the largest number of modalities and proceeding in order. When trained in this way, the proposed network outperforms networks trained only for a specific combination of modalities.
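One possible reading of the modality-dependent input scaling is sketched below, assuming PyTorch; the square-root scaling rule and the tensor shapes are assumptions made for illustration only and may differ from the authors' scheme.

```python
import torch
import torch.nn.functional as F

def rescale_by_modalities(x, max_modalities=4):
    """Rescale the input so the feature volume stays roughly constant
    regardless of how many polarization channels (modalities) are present."""
    m = x.shape[1]                          # number of available modalities
    scale = (max_modalities / m) ** 0.5     # inversely proportional to m (assumed rule)
    return F.interpolate(x, scale_factor=scale, mode="bilinear", align_corners=False)

two_pol = torch.rand(1, 2, 128, 128)        # only 2 of 4 polarizations available
print(rescale_by_modalities(two_pol).shape) # spatially enlarged to compensate
```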

Automatic Intrapulse Modulated LPI Radar Waveform Identification (펄스 내 변조 저피탐 레이더 신호 자동 식별)

  • Kim, Minjun; Kong, Seung-Hyun
    • Journal of the Korea Institute of Military Science and Technology / v.21 no.2 / pp.133-140 / 2018
  • In electronic warfare (EW), a low probability of intercept (LPI) radar signal is a survival technique. Accordingly, techniques for identifying LPI radar waveforms have recently become significant. In this paper, techniques for classifying 7 intrapulse-modulated radar signals and extracting their parameters are introduced. We propose a technique for classifying intrapulse-modulated radar signals using a Convolutional Neural Network (CNN). The time-frequency image (TFI) obtained from the Choi-Williams Distribution (CWD) is used as the input to the CNN without extracting additional features from each intrapulse-modulated radar signal. In addition, a method to extract the intrapulse radar modulation parameters using binary image processing is introduced. We demonstrate the performance of the proposed intrapulse radar waveform identification system. Simulation results show that the classification system achieves an overall correct classification rate of 90 % or better at SNR = -6 dB, and that the parameter extraction system has an overall error of less than 10 % at SNR of less than -4 dB.
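A small CNN classifier over Choi-Williams time-frequency images, sketched below assuming PyTorch, illustrates the classification stage; the layer sizes, image resolution, and seven-class output are illustrative rather than the paper's exact network.

```python
import torch
import torch.nn as nn

class TFIClassifier(nn.Module):
    """Classify Choi-Williams time-frequency images into 7 intrapulse modulation classes."""
    def __init__(self, num_classes=7):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.classifier = nn.Linear(64, num_classes)

    def forward(self, tfi):
        return self.classifier(self.features(tfi).flatten(1))

logits = TFIClassifier()(torch.rand(8, 1, 128, 128))  # batch of 8 grayscale TFIs
print(logits.shape)                                    # torch.Size([8, 7])
```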