• Title/Summary/Keyword: 심층 분류기 (deep classifier)


Neural Architecture Search for Korean Text Classification (한국어 문서 분류를 위한 신경망 구조 탐색)

  • ByoungKyu Ji
    • Annual Conference on Human and Language Technology / 2023.10a / pp.125-130 / 2023
  • Interest in Korean natural language processing (NLP) based on deep neural networks has grown recently, but no research has been conducted on searching for neural architectures suited to Korean NLP. In this paper, we search for a deep neural architecture suited to Korean document classification, using long short-term memory networks and a reinforcement learning algorithm whose reward is document classification accuracy, and we analyze both the performance of the pre-trained Korean embeddings used for the search and the discovered architecture. Compared with existing Korean NLP models on four Korean document classification tasks, the discovered architecture generally performed better and was efficient thanks to its smaller model size.

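The accuracy-as-reward search loop described above can be sketched as follows; the search space, trial count, and the stubbed evaluator are illustrative assumptions, not the paper's actual reinforcement-learning controller or LSTM search space.

```python
import random

# Hypothetical search space for a text classifier: depth and width of the
# candidate recurrent network (values are illustrative).
SEARCH_SPACE = {"num_layers": [1, 2, 3], "hidden_size": [64, 128, 256]}

def evaluate(config):
    """Stub standing in for 'train the candidate, return validation accuracy'."""
    # Toy proxy favoring compact models, echoing the paper's finding that the
    # searched architectures were small yet strong.
    return 1.0 / (config["num_layers"] * config["hidden_size"])

def search(n_trials=30, seed=0):
    rng = random.Random(seed)
    best_config, best_reward = None, float("-inf")
    for _ in range(n_trials):
        config = {k: rng.choice(v) for k, v in SEARCH_SPACE.items()}
        reward = evaluate(config)      # accuracy plays the role of the RL reward
        if reward > best_reward:
            best_config, best_reward = config, reward
    return best_config, best_reward
```

A real search would replace `evaluate` with training and validating each sampled architecture, and the random sampler with a learned controller policy.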

HyperConv: spatio-spectral classification of hyperspectral images with deep convolutional neural networks (심층 컨볼루션 신경망을 사용한 초분광 영상의 공간 분광학적 분류 기법)

  • Ko, Seyoon;Jun, Goo;Won, Joong-Ho
    • The Korean Journal of Applied Statistics / v.29 no.5 / pp.859-872 / 2016
  • Land cover classification is an important tool for preventing natural disasters, collecting environmental information, and monitoring natural resources. Hyperspectral imaging is widely used for this task thanks to its rich spectral information. However, the curse of dimensionality, spatio-temporal variability, and the lack of labeled data make it difficult to classify land cover correctly. We propose a novel classification framework for land cover classification of hyperspectral data based on convolutional neural networks. The proposed framework naturally combines full spectral features with information from neighboring pixels, and has advantages over existing methods that require additional feature extraction or pre-processing steps. Empirical evaluation shows that the proposed framework provides good generalization power, with classification accuracies better than (or comparable to) the most advanced existing classifiers.
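The spatio-spectral input construction the abstract describes (each pixel's full spectrum paired with its spatial neighbors) can be sketched as a patch-extraction step; the function name, patch size, and padding mode are assumptions for illustration.

```python
import numpy as np

def extract_patches(cube, size=3):
    """cube: (H, W, B) hyperspectral image; returns (H*W, size, size, B) patches.

    Every patch keeps all B spectral bands, so the classifier sees the full
    spectrum of the center pixel together with its spatial neighborhood.
    """
    pad = size // 2
    # Reflect-padding so border pixels also get full-sized neighborhoods.
    padded = np.pad(cube, ((pad, pad), (pad, pad), (0, 0)), mode="reflect")
    H, W, B = cube.shape
    patches = np.empty((H * W, size, size, B), dtype=cube.dtype)
    for i in range(H):
        for j in range(W):
            patches[i * W + j] = padded[i:i + size, j:j + size, :]
    return patches
```

Feeding such patches to a CNN avoids the separate feature-extraction or pre-processing stages the abstract contrasts against.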

Facial Local Region Based Deep Convolutional Neural Networks for Automated Face Recognition (자동 얼굴인식을 위한 얼굴 지역 영역 기반 다중 심층 합성곱 신경망 시스템)

  • Kim, Kyeong-Tae;Choi, Jae-Young
    • Journal of the Korea Convergence Society / v.9 no.4 / pp.47-55 / 2018
  • In this paper, we propose a novel face recognition (FR) method that combines weighted deep local features extracted from multiple Deep Convolutional Neural Networks (DCNNs) learned on a set of facial local regions. In the proposed method, the so-called weighted deep local features are generated from multiple DCNNs, each trained with a particular facial local region, and the corresponding weight represents the importance of that local region for improving FR performance. The weighted deep local features are applied to Joint Bayesian metric learning in conjunction with a Nearest Neighbor (NN) classifier for FR. Systematic and comparative experiments show that the proposed method is robust to variations in pose, illumination, and expression, and demonstrate that it is feasible for improving face recognition performance.
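A minimal sketch of the weighted-feature combination, assuming hypothetical per-region features and importance weights; in the paper the weights reflect each facial region's learned importance, whereas here they are simply passed in.

```python
import numpy as np

def weighted_deep_local_features(local_feats, weights):
    """Concatenate per-region deep features scaled by their importance weights.

    local_feats: list of (D,) arrays, one per facial local region (e.g. eyes,
    nose, mouth -- region names here are illustrative, not the paper's list).
    weights: one importance value per region.
    """
    weights = np.asarray(weights, dtype=float)
    weights = weights / weights.sum()          # normalize the importances
    return np.concatenate([w * f for w, f in zip(weights, local_feats)])
```

The resulting vector would then feed the metric-learning and NN-classification stages the abstract describes.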

Analysis and Study of Internal Learning Trend of Deep Classifier according to Depth (깊이에 따른 중간 단계 분류기 내부 학습 경향 분석 및 고찰)

  • Seong, Su-Jin;Cha, Jeong-Won
    • Annual Conference on Human and Language Technology / 2019.10a / pp.115-119 / 2019
  • Deep learning models have deep hidden layers in order to automatically extract and abstract features, and previous studies have shown that stacking these hidden layers deeper contributes to performance gains. However, the depth that yields the best performance differs by data and task, and there is little clear justification for choosing a model's depth. This paper hypothesizes that the appropriate depth differs by dataset and, to verify this, adds classifiers inside the model to observe its internal learning trends. We found that the required depth differs according to the task and the characteristics of the input; based on this, we selected the depth adaptively to control the model's output and confirmed that performance improved as a result.

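The adaptive-depth idea (an intermediate classifier after every hidden block, exiting at the first depth that is confident enough) can be sketched as follows; the random blocks, head dimensions, and confidence threshold are stand-ins, not the paper's model.

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

# Random stand-ins: four hidden blocks, each with its own classifier head.
rng = np.random.default_rng(0)
blocks = [rng.standard_normal((8, 8)) for _ in range(4)]   # hidden blocks
heads = [rng.standard_normal((8, 3)) for _ in range(4)]    # per-depth classifiers

def adaptive_depth_predict(x, threshold=0.6):
    """Run the stack block by block; stop at the first confident classifier."""
    h = x
    for depth, (W, C) in enumerate(zip(blocks, heads), start=1):
        h = np.tanh(h @ W)
        probs = softmax(h @ C)
        if probs.max() >= threshold:           # confident enough: exit here
            return int(probs.argmax()), depth
    return int(probs.argmax()), depth          # otherwise use the final depth
```

This mirrors the paper's observation that different inputs need different depths: easy inputs exit early, hard ones use the full stack.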

Music classification system through emotion recognition based on regression model of music signal and electroencephalogram features (음악신호와 뇌파 특징의 회귀 모델 기반 감정 인식을 통한 음악 분류 시스템)

  • Lee, Ju-Hwan;Kim, Jin-Young;Jeong, Dong-Ki;Kim, Hyoung-Gook
    • The Journal of the Acoustical Society of Korea / v.41 no.2 / pp.115-121 / 2022
  • In this paper, we propose a music classification system driven by user emotion, using electroencephalogram (EEG) features that appear while listening to music. In the proposed system, the relationship between the emotional EEG features extracted from EEG signals and the auditory features extracted from music signals is learned by a deep regression neural network. Based on this regression model, the system automatically generates EEG features mapped to the auditory characteristics of the input music and classifies the music by applying these features to an attention-based deep neural network. Experimental results demonstrate the classification accuracy of the proposed automatic music classification framework.
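The regression step, mapping auditory features of music to emotional EEG features, can be sketched with a linear least-squares stand-in for the paper's deep regression network; feature dimensions and function names are illustrative.

```python
import numpy as np

def fit_regression(audio_feats, eeg_feats):
    """Learn a linear map from auditory features to EEG features.

    audio_feats: (N, A) matrix of per-clip auditory features.
    eeg_feats:   (N, E) matrix of the matching emotional EEG features.
    """
    W, *_ = np.linalg.lstsq(audio_feats, eeg_feats, rcond=None)
    return W

def generate_eeg_features(W, audio_feats):
    """Generate pseudo-EEG features for new music, as the system does at
    classification time (no EEG recording needed for new inputs)."""
    return audio_feats @ W
```

In the actual system a deep network plays the role of `W`, and the generated features feed an attention-based classifier.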

Methodology for Classifying Hierarchical Data Using Autoencoder-based Deeply Supervised Network (오토인코더 기반 심층 지도 네트워크를 활용한 계층형 데이터 분류 방법론)

  • Kim, Younha;Kim, Namgyu
    • Journal of Intelligence and Information Systems / v.28 no.3 / pp.185-207 / 2022
  • Recently, with the development of deep learning technology, research applying deep learning algorithms to unstructured data such as text and images has been actively conducted. Text classification has long been studied in academia and industry, and various attempts have been made to exploit data characteristics to improve classification performance. In particular, the hierarchical relationship among labels has been utilized for hierarchical classification. However, the top-down approach commonly used for hierarchical classification has the limitation that misclassification at a higher level blocks the opportunity for correct classification at a lower level. Therefore, in this study, we propose a methodology for classifying hierarchical data using an autoencoder-based deeply supervised network, in which high-level classification does not block low-level classification while the hierarchical relationship of labels is still taken into account. The proposed methodology attaches a main classifier that predicts the low-level label to the autoencoder's latent variable, and an auxiliary classifier that predicts the high-level label to a hidden layer of the autoencoder. In experiments on 22,512 academic papers conducted to evaluate the proposed methodology, the proposed model showed superior classification accuracy and F1-score compared to a conventional supervised autoencoder and a DNN model.
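The architecture described above, a main classifier on the latent variable plus an auxiliary classifier on a hidden layer, can be sketched as a forward pass; all dimensions and the random weights are illustrative stand-ins for the trained model.

```python
import numpy as np

rng = np.random.default_rng(1)

# Illustrative sizes: input, hidden, latent, high-level and low-level classes.
D, H, Z, HI, LO = 16, 8, 4, 3, 10
W_enc1 = rng.standard_normal((D, H))   # input  -> hidden
W_enc2 = rng.standard_normal((H, Z))   # hidden -> latent
W_dec = rng.standard_normal((Z, D))    # latent -> reconstruction
W_aux = rng.standard_normal((H, HI))   # auxiliary head on the hidden layer
W_main = rng.standard_normal((Z, LO))  # main head on the latent variable

def forward(x):
    hidden = np.tanh(x @ W_enc1)
    latent = np.tanh(hidden @ W_enc2)
    recon = latent @ W_dec       # autoencoder reconstruction branch
    high = hidden @ W_aux        # auxiliary: high-level label logits
    low = latent @ W_main        # main: low-level label logits
    return recon, high, low
```

Because the high-level head only supervises a hidden layer rather than gating the pipeline, a high-level error cannot block the low-level prediction.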

Study on Image Use for Plant Disease Classification (작물의 병충해 분류를 위한 이미지 활용 방법 연구)

  • Jeong, Seong-Ho;Han, Jeong-Eun;Jeong, Seong-Kyun;Bong, Jae-Hwan
    • The Journal of the Korea institute of electronic communication sciences / v.17 no.2 / pp.343-350 / 2022
  • It is worth verifying the effectiveness of integrating datasets with different characteristics. This study investigated whether such data integration affects the accuracy of a deep neural network (DNN), and which integration method yields the greatest improvement. Two public datasets were used: one taken on an actual farm in India, and the other taken in a laboratory environment in Korea. Leaf images were selected from the two datasets to cover five classes: normal leaves and four types of plant disease. The DNN used a pre-trained VGG16 as a feature extractor and a multi-layer perceptron as a classifier. The data were integrated in three different ways for the training process, and the DNN was trained in a supervised manner on the integrated data. The trained DNN was evaluated on a test dataset taken on an actual farm. The DNN achieved the best test accuracy when it was first trained on images taken in the laboratory environment and then trained on images taken on the actual farm. The results show that integrating plant images taken in different environments helps improve DNN performance, and that using the images from different environments in separate training stages is more effective in improving DNN performance.
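The best-performing training schedule reported above, laboratory images first and farm images second, can be sketched as follows; `ToyModel`, the dataset names, and the epoch counts are hypothetical placeholders for fine-tuning the VGG16-based classifier.

```python
# Minimal stand-in for a trainable model: it just records what it was fit on,
# so the two-stage schedule itself is what the sketch demonstrates.
class ToyModel:
    def __init__(self):
        self.history = []

    def fit(self, dataset_name, epochs):
        self.history.append((dataset_name, epochs))

def two_stage_training(model, lab_data="lab_images", farm_data="farm_images"):
    model.fit(lab_data, epochs=10)   # stage 1: clean laboratory images
    model.fit(farm_data, epochs=5)   # stage 2: adapt to real-farm images
    return model
```

Keeping the two environments in separate stages, rather than mixing them into one pool, is the schedule the study found most effective.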

BSR (Buzz, Squeak, Rattle) noise classification based on convolutional neural network with short-time Fourier transform noise-map (Short-time Fourier transform 소음맵을 이용한 컨볼루션 기반 BSR (Buzz, Squeak, Rattle) 소음 분류)

  • Bu, Seok-Jun;Moon, Se-Min;Cho, Sung-Bae
    • The Journal of the Acoustical Society of Korea / v.37 no.4 / pp.256-261 / 2018
  • Three types of noise are generated inside a vehicle: BSR (Buzz, Squeak, Rattle). In this paper, we propose a classifier that automatically classifies automotive BSR noise using features extracted by deep convolutional neural networks. In preprocessing, the features of the three noise types are represented as a noise-map using the STFT (Short-time Fourier Transform) algorithm. To cope with the fact that the position of the actual noise within the generated noise-map is unknown, the noise-map is divided using a sliding-window method. The internal representations of the deep convolutional neural network are visualized using the t-SNE (t-Stochastic Neighbor Embedding) algorithm, and misclassified data are analyzed qualitatively. To analyze the classified data, the similarity between noise types was quantified with the SSIM (Structural Similarity Index), which showed that the retractor tremble sound is most similar to the normal driving sound. Compared with other machine learning classifiers, the proposed method recorded the highest classification accuracy (99.15 %).
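The sliding-window division of the STFT noise-map can be sketched as follows; the window width and hop are illustrative, not the paper's settings.

```python
import numpy as np

def sliding_windows(noise_map, width=4, hop=2):
    """Split an STFT noise-map along the time axis.

    noise_map: (freq_bins, time_frames) magnitude spectrogram.
    Returns a list of (freq_bins, width) windows, so the CNN sees every
    time region in which the BSR noise might occur.
    """
    F, T = noise_map.shape
    return [noise_map[:, t:t + width] for t in range(0, T - width + 1, hop)]
```

Each window then becomes one CNN input, sidestepping the unknown position of the noise within the full map.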

Extraction of Temporal and Spectral Features based on Spikegram for Music Genre Classification (음악 장르 분류를 위한 스파이크그램 기반의 시간 및 주파수 특성 추출 기술)

  • Jang, Won;Cho, Hyo-Jin;Shin, Seong-Hyeon;Park, Hochong
    • Proceedings of the Korean Society of Broadcast Engineers Conference / 2018.06a / pp.49-50 / 2018
  • In this paper, we propose a technique for extracting temporal and spectral spikegram-based features for music genre classification. Conventional music genre classification systems have mainly used Fourier-transform-based input features. Because the Fourier transform takes average frequency information in frame-sized units along the time axis, it has low temporal resolution, whereas a spikegram carries sample-level frequency information, allowing high-resolution features to be extracted. The proposed technique extracts these temporal features and feeds them to a deep neural network together with spectral and SNR features. Using the proposed features, we improved the performance of an existing spikegram-based classifier that does not use temporal features, and confirmed that good performance is obtained with fewer input features than other features and classifiers.

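The input-assembly step, concatenating sample-resolution temporal features with spectral and SNR features, can be sketched as follows; the frame-averaging helper only illustrates the lower resolution of Fourier-style features, and all names are hypothetical.

```python
import numpy as np

def frame_average(signal, frame=4):
    """Frame-level averaging, as in Fourier-based features: one value per
    frame, i.e. lower temporal resolution than the sample-level input."""
    n = len(signal) // frame
    return np.asarray(signal[:n * frame]).reshape(n, frame).mean(axis=1)

def build_network_input(temporal, spectral, snr):
    """Concatenate the three feature groups into one DNN input vector."""
    return np.concatenate(
        [np.ravel(temporal), np.ravel(spectral), np.atleast_1d(snr)]
    )
```

The sample-level `temporal` part is what the spikegram adds over frame-averaged spectral features.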

Tempo-oriented music recommendation system based on human activity recognition using accelerometer and gyroscope data (가속도계와 자이로스코프 데이터를 사용한 인간 행동 인식 기반의 템포 지향 음악 추천 시스템)

  • Shin, Seung-Su;Lee, Gi Yong;Kim, Hyoung-Gook
    • The Journal of the Acoustical Society of Korea / v.39 no.4 / pp.286-291 / 2020
  • In this paper, we propose a system that recommends music through tempo-oriented music classification and sensor-based human activity recognition. The proposed method indexes music files using tempo-oriented music classification and recommends suitable music according to the recognized user's activity. For accurate music classification, a dynamic classification based on a modulation spectrum and a sequence classification based on a Mel-spectrogram are used in combination. In addition, simple accelerometer and gyroscope sensor data of the smartphone are applied to deep spiking neural networks to improve activity recognition performance. Finally, music recommendation is performed through a mapping table considering the relationship between the recognized activity and the indexed music file. The experimental results show that the proposed system is suitable for use in any practical mobile device with a music player.
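The final recommendation step, a mapping table from the recognized activity to tempo-indexed music files, can be sketched as follows; the activity names, tempo classes, and song identifiers are illustrative placeholders, not the paper's actual tables.

```python
# Hypothetical mapping table: recognized activity -> tempo class.
ACTIVITY_TO_TEMPO = {"resting": "slow", "walking": "medium", "running": "fast"}

# Hypothetical music index produced by tempo-oriented classification.
MUSIC_INDEX = {"slow": ["song_a"], "medium": ["song_b"], "fast": ["song_c"]}

def recommend(activity):
    """Look up the tempo class for the activity, then return matching music."""
    tempo = ACTIVITY_TO_TEMPO.get(activity, "medium")  # fallback tempo class
    return MUSIC_INDEX[tempo]
```

The real system fills `MUSIC_INDEX` via tempo-oriented classification of the music library and `activity` via the sensor-based recognizer.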