• Title/Summary/Keyword: layer normalization

Search Result 42

Layer Normalized LSTM CRFs for Korean Semantic Role Labeling (Layer Normalized LSTM CRF를 이용한 한국어 의미역 결정)

  • Park, Kwang-Hyeon;Na, Seung-Hoon
    • Annual Conference on Human and Language Technology / 2017.10a / pp.163-166 / 2017
  • In deep learning, training takes longer as models become more complex. Layer normalization is a method that reduces training time and can improve performance by normalizing each layer. This paper proposes a bidirectional LSTM CRF model with layer normalization applied for Korean semantic role labeling. Experimental results show that the layer-normalized bidirectional LSTM CRF model improves performance on argument identification and classification (AIC) for Korean semantic role labeling.

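The layer normalization step described in the entry above normalizes each sample over its feature (hidden) dimension and then rescales it with learned gain and bias parameters (Ba et al., 2016). The following is a minimal NumPy sketch of that computation, not the authors' implementation; the array shapes and the eps value are illustrative assumptions.

```python
import numpy as np

def layer_norm(x, gain, bias, eps=1e-5):
    """Layer normalization: normalize each row (sample) over its features,
    then apply a learned per-feature gain and bias."""
    mean = x.mean(axis=-1, keepdims=True)   # per-sample mean over the feature axis
    var = x.var(axis=-1, keepdims=True)     # per-sample variance over the feature axis
    return gain * (x - mean) / np.sqrt(var + eps) + bias

# Example: a batch of 4 hidden-state vectors of size 8 (shapes are illustrative).
hidden = np.random.randn(4, 8)
gain, bias = np.ones(8), np.zeros(8)
normalized = layer_norm(hidden, gain, bias)
print(normalized.mean(axis=-1))   # ~0 for every sample
print(normalized.std(axis=-1))    # ~1 for every sample
```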

Analysis of normalization effect for earthquake events classification (지진 이벤트 분류를 위한 정규화 기법 분석)

  • Zhang, Shou;Ku, Bonhwa;Ko, Hansoek
    • The Journal of the Acoustical Society of Korea / v.40 no.2 / pp.130-138 / 2021
  • This paper presents an effective structure obtained by applying various normalization techniques to Convolutional Neural Networks (CNNs) for seismic event classification. Normalization not only improves the learning speed of neural networks but also provides robustness to noise. We analyze the effect of input data normalization and hidden-layer normalization on a deep learning model for seismic event classification, and derive an effective model through experiments on where and how normalization is applied in the hidden layers. Across these experiments, the model that applied input data normalization together with weight normalization on the first hidden layer showed the most stable performance improvement.
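
As a rough illustration of the two techniques the abstract reports working best, the sketch below z-scores the input waveforms and reparameterizes a first-layer weight matrix in the weight-normalization style of Salimans and Kingma (w = g · v / ||v||). The shapes and the ReLU first layer are assumptions for illustration, not the configuration used in the paper.

```python
import numpy as np

def zscore_inputs(x, eps=1e-8):
    """Input data normalization: zero mean, unit variance per trace (per row)."""
    return (x - x.mean(axis=-1, keepdims=True)) / (x.std(axis=-1, keepdims=True) + eps)

def weight_norm(v, g):
    """Weight normalization: w = g * v / ||v||, applied per output unit (row of v)."""
    norm = np.linalg.norm(v, axis=-1, keepdims=True)
    return g[:, None] * v / norm

# Illustrative shapes: 16 seismic traces of 1000 samples, 32 first-layer units.
traces = zscore_inputs(np.random.randn(16, 1000))
v = np.random.randn(32, 1000)            # unconstrained direction parameters
g = np.ones(32)                          # learned scale per output unit
w = weight_norm(v, g)
hidden = np.maximum(traces @ w.T, 0.0)   # first hidden layer with ReLU
```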

Verification of Transliteration Pairs Using Distance LSTM-CNN with Layer Normalization (Distance LSTM-CNN with Layer Normalization을 이용한 음차 표기 대역 쌍 판별)

  • Lee, Changsu;Cheon, Juryong;Kim, Joogeun;Kim, Taeil;Kang, Inho
    • Annual Conference on Human and Language Technology / 2017.10a / pp.76-81 / 2017
  • Transliteration is the practice of writing a term of foreign origin in one's own language based on its pronunciation. As borders between countries blur, web documents such as news articles mix foreign spellings and Korean transliterations that share the same pronunciation when describing terms of foreign origin. To obtain good search results, it is therefore important to use, alongside the foreign spelling, the various transliterations that people commonly use. Instead of the existing approach of generating transliterations with a transliteration model and transliteration-pair extraction, this paper proposes finding transliteration candidates in documents and then verifying whether each candidate is a correct transliteration, in order to obtain a variety of reliable transliterations. After comparing and reviewing various deep learning models, we propose the Distance LSTM-CNN model, which is specialized for verifying transliteration pairs, and show how layer normalization is applied to reduce the model's sensitivity to batch size and to speed up convergence during training.

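The abstract above motivates layer normalization as a way to reduce the model's sensitivity to batch size. The toy comparison below (my illustration, not from the paper) shows why: layer-norm statistics are computed per sample, so a sample's normalized values do not change with the batch it appears in, whereas batch-norm statistics are computed across the batch and therefore shift with it.

```python
import numpy as np

def layer_norm(x, eps=1e-5):
    # statistics per sample, over the feature axis
    return (x - x.mean(axis=1, keepdims=True)) / np.sqrt(x.var(axis=1, keepdims=True) + eps)

def batch_norm(x, eps=1e-5):
    # statistics per feature, over the batch axis
    return (x - x.mean(axis=0, keepdims=True)) / np.sqrt(x.var(axis=0, keepdims=True) + eps)

rng = np.random.default_rng(0)
big_batch = rng.normal(size=(32, 16))
small_batch = big_batch[:4]          # the same first four samples, smaller batch

# Layer norm: identical output for the shared samples regardless of batch size.
print(np.allclose(layer_norm(big_batch)[:4], layer_norm(small_batch)))   # True
# Batch norm: output for the same samples changes with the batch they are in.
print(np.allclose(batch_norm(big_batch)[:4], batch_norm(small_batch)))   # False
```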

Location-Based Saliency Maps from a Fully Connected Layer using Multi-Shapes

  • Kim, Hoseung;Han, Seong-Soo;Jeong, Chang-Sung
    • KSII Transactions on Internet and Information Systems (TIIS) / v.15 no.1 / pp.166-179 / 2021
  • Computer vision research based on the human visual system has recently been active. Saliency maps are used to highlight visually interesting areas within an image, but their quality can degrade due to external factors such as an indistinct background or light source. In this study, existing color, brightness, and contrast feature maps are passed through multiple shape and orientation filters and then fed to a fully connected layer that determines pixel intensities in the image using location-based weights. The proposed method separates the background from the area of interest more reliably in terms of color and brightness in the presence of external elements and noise. Location-based weight normalization is also effective at removing high-intensity pixels that lie outside the image or in non-interest regions. We further show that the multi-filter normalization can be computed faster with parallel processing.
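
The location-based weight normalization mentioned above can be pictured as multiplying a saliency map by a per-pixel weight mask and rescaling, which suppresses high-intensity pixels far from the expected region of interest. The sketch below uses a centered Gaussian mask purely as a stand-in weight; the paper derives its weights from the fully connected layer, so the mask, shapes, and sigma here are assumptions for illustration only.

```python
import numpy as np

def location_weighted_saliency(saliency, sigma=0.25):
    """Weight a saliency map with a (hypothetical) center-prior Gaussian mask,
    then rescale the result back to [0, 1]."""
    h, w = saliency.shape
    ys, xs = np.mgrid[0:h, 0:w]
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
    d2 = ((ys - cy) / h) ** 2 + ((xs - cx) / w) ** 2
    weights = np.exp(-d2 / (2.0 * sigma ** 2))   # high near the center, low at the borders
    weighted = saliency * weights
    return (weighted - weighted.min()) / (weighted.max() - weighted.min() + 1e-8)

saliency = np.random.rand(64, 64)                # stand-in saliency map
out = location_weighted_saliency(saliency)
print(out.min(), out.max())                      # approximately 0 and 1
```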

An Improved Image Classification Using Batch Normalization and CNN (배치 정규화와 CNN을 이용한 개선된 영상분류 방법)

  • Ji, Myunggeun;Chun, Junchul;Kim, Namgi
    • Journal of Internet Computing and Services / v.19 no.3 / pp.35-42 / 2018
  • Deep learning is known to achieve high accuracy among image classification methods. This paper proposes improving the accuracy of image classification by adding a batch normalization layer to a deep Convolutional Neural Network (CNN). Batch normalization computes the mean and variance of each mini-batch and normalizes the layer inputs with them, reducing the shift in the distribution of activations at each layer. To demonstrate the effectiveness of the proposed method, accuracy and mAP are measured in image classification experiments on five data sets: SHREC13, MNIST, SVHN, CIFAR-10, and CIFAR-100. The experimental results show that the CNN with batch normalization achieves better classification accuracy and mAP than the conventional CNN.
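
For reference, the batch normalization step the abstract describes is the standard per-batch computation: take the mini-batch mean and variance of each feature, normalize, then apply a learned scale and shift. The snippet below is a generic NumPy sketch of that formula, not the paper's network.

```python
import numpy as np

def batch_norm(x, gamma, beta, eps=1e-5):
    """Batch normalization for a 2-D activation matrix (batch, features):
    normalize each feature with the mini-batch mean and variance,
    then apply the learned scale (gamma) and shift (beta)."""
    mean = x.mean(axis=0)   # per-feature mean over the batch
    var = x.var(axis=0)     # per-feature variance over the batch
    x_hat = (x - mean) / np.sqrt(var + eps)
    return gamma * x_hat + beta

activations = np.random.randn(64, 10) * 3.0 + 5.0   # illustrative pre-normalization activations
gamma, beta = np.ones(10), np.zeros(10)
out = batch_norm(activations, gamma, beta)
print(out.mean(axis=0).round(3))   # ~0 per feature
print(out.std(axis=0).round(3))    # ~1 per feature
```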

Ovarian Cancer Microarray Data Classification System Using Marker Genes Based on Normalization (표준화 기반 표지 유전자를 이용한 난소암 마이크로어레이 데이타 분류 시스템)

  • Park, Su-Young;Jung, Chai-Yeoung
    • Journal of the Korea Institute of Information and Communication Engineering / v.15 no.9 / pp.2032-2037 / 2011
  • Marker genes are genes whose expression level characterizes a specific experimental condition; genes whose expression levels differ significantly between groups are highly informative about the studied phenomenon. In this paper, the system first normalizes the data with the most widely used of the normalization methods proposed to date and then selects marker genes by ranking genes according to statistical scores. The performance of each normalization method is then compared and analyzed with a multi-layer perceptron classifier. Applying the multi-layer perceptron to a microarray data set containing eight marker genes selected by ANOVA after Lowess normalization yields the highest classification accuracy, 99.32%, and the lowest estimated prediction error.
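
A rough scikit-learn sketch of the selection-then-classification pipeline the abstract describes: rank genes with an ANOVA F-test, keep the top eight, and classify with a multi-layer perceptron. The synthetic data, the omission of the Lowess normalization step, and all hyperparameters below are assumptions rather than the paper's setup.

```python
from sklearn.datasets import make_classification
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Synthetic stand-in for a normalized microarray matrix: 200 samples x 2000 genes.
X, y = make_classification(n_samples=200, n_features=2000, n_informative=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# ANOVA F-test keeps the 8 most group-discriminative genes; an MLP then classifies.
model = make_pipeline(
    SelectKBest(f_classif, k=8),
    StandardScaler(),
    MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0),
)
model.fit(X_train, y_train)
print("test accuracy:", model.score(X_test, y_test))
```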

The Design Of Microarray Classification System Using Combination Of Significant Gene Selection Method Based On Normalization. (표준화 기반 유의한 유전자 선택 방법 조합을 이용한 마이크로어레이 분류 시스템 설계)

  • Park, Su-Young;Jung, Chai-Yeoung
    • Journal of the Korea Institute of Information and Communication Engineering / v.12 no.12 / pp.2259-2264 / 2008
  • Significant genes are genes whose expression level characterizes a specific experimental condition; genes whose expression levels differ significantly between groups are highly informative about the studied phenomenon. In this paper, the system first normalizes the data with the most widely used of the normalization methods proposed to date and then detects informative genes with the similarity-measure combination method proposed here. The performance of each normalization method is then compared and analyzed with a multi-layer perceptron classifier. Classifying with a multi-layer perceptron on 200 genes selected by combining the Pearson correlation coefficient (PC) and the Euclidean distance coefficient (ED) after Lowess normalization yields an improved classification accuracy of 98.84%.
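
The combination of the Pearson correlation coefficient (PC) and the Euclidean distance coefficient (ED) for gene ranking can be sketched as below: each gene is scored against an ideal class-indicator profile with both measures and the two rank orders are averaged. How the two measures are combined in the paper is not stated, so the ideal profile, the averaging rule, and the data here are assumptions for illustration.

```python
import numpy as np

def rank_genes_pc_ed(expr, labels):
    """Score each gene (column) against the binary class-indicator vector with
    Pearson correlation and Euclidean distance, then average the two rank orders."""
    ideal = labels.astype(float)                    # idealized marker expression profile
    pc = np.array([np.corrcoef(expr[:, j], ideal)[0, 1] for j in range(expr.shape[1])])
    ed = np.array([np.linalg.norm(expr[:, j] - ideal) for j in range(expr.shape[1])])
    pc_rank = np.argsort(np.argsort(-np.abs(pc)))   # rank 0 = strongest correlation
    ed_rank = np.argsort(np.argsort(ed))            # rank 0 = smallest distance
    combined = (pc_rank + ed_rank) / 2.0
    return np.argsort(combined)                     # gene indices, best first

expr = np.random.randn(60, 500)                     # 60 samples x 500 genes (illustrative)
labels = np.random.randint(0, 2, size=60)
top_200 = rank_genes_pc_ed(expr, labels)[:200]
print(top_200[:10])
```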

Study on Improving Learning Speed of Artificial Neural Network Model for Ammunition Stockpile Reliability Classification (저장탄약 신뢰성분류 인공신경망모델의 학습속도 향상에 관한 연구)

  • Lee, Dong-Nyok;Yoon, Keun-Sig;Noh, Yoo-Chan
    • Journal of the Korea Academia-Industrial cooperation Society / v.21 no.6 / pp.374-382 / 2020
  • The purpose of this study is to improve the learning speed of an artificial neural network model for ammunition stockpile reliability classification by proposing a normalization method that reduces the number of input variables, based on the characteristics of Ammunition Stockpile Reliability Program (ASRP) data, without loss of classification performance. Ammunition performance requirements are specified in the Korea Defense Specification (KDS) and the Ammunition Stockpile Reliability Test Procedure (ASTP). Based on the characteristics of the ASRP data, the input variables can be normalized to estimate the lot percent nonconforming or failure rate, and min-max normalization is also applied so that the input variables remain within the unit hypercube. The Area Under the ROC Curve (AUC) of both plain min-max normalization and the proposed 2-step normalization exceeds 0.95, and the machine-learning speed-up on ASRP field data is 1.74 to 1.99 times, depending on the number of training examples and of hidden-layer nodes.
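
The min-max step mentioned in the abstract maps each input variable into [0, 1] so that the inputs stay within the unit hypercube. The snippet below is the standard formula applied to made-up data; the paper's ASRP-specific first step (collapsing inspection variables into a lot failure-rate estimate) is not reproduced here.

```python
import numpy as np

def min_max_normalize(x, eps=1e-12):
    """Min-max normalization per column: (x - min) / (max - min), giving values in [0, 1]."""
    x_min = x.min(axis=0)
    x_max = x.max(axis=0)
    return (x - x_min) / (x_max - x_min + eps)

# Illustrative inspection data: 100 lots x 5 measured variables on different scales.
data = np.column_stack([
    np.random.uniform(0, 300, 100),      # e.g. a count-like variable
    np.random.uniform(0.0, 1.0, 100),    # e.g. an estimated failure rate
    np.random.normal(50, 10, 100),
    np.random.normal(1000, 250, 100),
    np.random.uniform(-5, 5, 100),
])
scaled = min_max_normalize(data)
print(scaled.min(axis=0), scaled.max(axis=0))   # each column now spans ~[0, 1]
```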