Fig. 1. Block diagram of the proposed DNN structure: (a) DNN structure for the pseudo label, (b) DNN structure for SED
Fig. 2. Block diagram of the 1D convolution layer with stride N
Fig. 3. Block diagram of the ResGLU-SE block
Fig. 4. A DRC curve example (above) and the DRC curves used (below)
Fig. 5. Block diagram of training with pseudo labels applied
Fig. 6. Block diagram of the overall structure
Fig. 7. Magnitude spectra of the strided 1D convolutional layers: (a) using the GLUs block, (b) using the ResGLU-SE block
Fig. 8. Per-event performance of the proposed algorithm
Fig. 9. Spectrum of the training data
Fig. 10. Ground truth and prediction results
Table 1. Number of clips per event (weak labels)
Table 2. Data augmentation ratios and resulting numbers of augmented clips
Table 3. Sound event detection performance