• Title/Summary/Keyword: sequence learning

Discriminative Training of Sequence Taggers via Local Feature Matching

  • Kim, Minyoung
    • International Journal of Fuzzy Logic and Intelligent Systems
    • /
    • v.14 no.3
    • /
    • pp.209-215
    • /
    • 2014
  • Sequence tagging is the task of predicting frame-wise labels for a given input sequence and has important applications in diverse domains. Conventional methods such as maximum likelihood (ML) learning match global features of the empirical and model distributions rather than the local features that translate directly into frame-wise prediction errors. Recent probabilistic sequence models such as conditional random fields (CRFs) have achieved great success in a variety of situations. In this paper, we introduce a novel discriminative CRF learning algorithm that minimizes local feature mismatches. Unlike the overall data fitting that arises from global feature matching in ML learning, our approach reduces the total error over all frames in a sequence. We also provide an efficient gradient-based learning method via a gradient forward-backward recursion, which has the same computational complexity as ML learning. For several real-world sequence tagging problems, we empirically demonstrate that the proposed learning algorithm achieves significantly more accurate predictions than standard estimators.
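
The frame-wise marginals that drive the local-error view in this abstract come from the standard forward-backward recursion of a linear-chain CRF. The sketch below shows that recursion only, not the paper's local-feature-matching objective; the `emit` and `trans` log-potentials are hypothetical toy inputs.

```python
# Minimal sketch (not the paper's exact objective): frame-wise posteriors
# p(y_t = k | x) for a linear-chain CRF via the forward-backward recursion.
# `emit` (T x K) holds per-frame label log-scores and `trans` (K x K) holds
# transition log-scores; both are illustrative placeholders.
import numpy as np

def logsumexp(a, axis):
    m = np.max(a, axis=axis, keepdims=True)
    return (m + np.log(np.sum(np.exp(a - m), axis=axis, keepdims=True))).squeeze(axis)

def crf_marginals(emit, trans):
    T, K = emit.shape
    alpha = np.zeros((T, K))            # forward log-messages (include emit[t])
    beta = np.zeros((T, K))             # backward log-messages (exclude emit[t])
    alpha[0] = emit[0]
    for t in range(1, T):
        alpha[t] = emit[t] + logsumexp(alpha[t - 1][:, None] + trans, axis=0)
    for t in range(T - 2, -1, -1):
        beta[t] = logsumexp(trans + (emit[t + 1] + beta[t + 1])[None, :], axis=1)
    log_z = logsumexp(alpha[-1], axis=0)    # log partition function
    return np.exp(alpha + beta - log_z)     # T x K frame-wise posteriors

emit = np.log(np.random.rand(6, 3))         # toy 6-frame, 3-label example
trans = np.log(np.random.rand(3, 3))
print(crf_marginals(emit, trans).sum(axis=1))   # each row sums to 1
```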

Korean morphological analysis and phrase structure parsing using multi-task sequence-to-sequence learning (Multi-task sequence-to-sequence learning을 이용한 한국어 형태소 분석과 구구조 구문 분석)

  • Hwang, Hyunsun;Lee, Changki
    • Annual Conference on Human and Language Technology
    • /
    • 2017.10a
    • /
    • pp.103-107
    • /
    • 2017
  • Korean morphological analysis and phrase structure parsing are difficult tasks in Korean natural language processing. Recently, these problems have been recast as output-sequence generation problems and addressed with end-to-end approaches based on sequence-to-sequence models. When Korean morphological analysis and phrase structure parsing are converted into output-sequence generation problems, their outputs can be merged into a single sequence. In this paper, we propose a model that handles Korean morphological analysis and phrase structure parsing simultaneously using a sequence-to-sequence model. Experimental results show that, when the two tasks are processed jointly, morphological analysis influences phrase structure parsing and phrase structure parsing in turn influences morphological analysis, so the two tasks can benefit each other.
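
One plausible way to read the abstract's point that the two task outputs can be merged into a single sequence is to concatenate the linearized targets so a single decoder emits both. The separator token, the morpheme/tag strings, and the bracketed parse below are illustrative assumptions, not the paper's actual scheme.

```python
# Illustrative sketch only: one way to merge a morpheme analysis and a
# linearized phrase-structure parse into a single target sequence for a
# multi-task sequence-to-sequence model.  Separator, tags, and linearization
# are assumptions made for this example.
morph_tokens = ["나/NP", "+는/JX", "학교/NNG", "+에/JKB", "가/VV", "+ㄴ다/EF"]
parse_tokens = ["(S", "(NP", "XX", ")", "(VP", "(NP", "XX", ")", "XX", ")", ")"]

def merge_targets(morph, parse, sep="<task>"):
    """Concatenate the two task outputs with a separator token so a single
    decoder can generate both analyses in one pass."""
    return morph + [sep] + parse

target = merge_targets(morph_tokens, parse_tokens)
print(" ".join(target))
```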

Korean phrase structure parsing using sequence-to-sequence learning (Sequence-to-sequence 모델을 이용한 한국어 구구조 구문 분석)

  • Hwang, Hyunsun;Lee, Changki
    • Annual Conference on Human and Language Technology
    • /
    • 2016.10a
    • /
    • pp.20-24
    • /
    • 2016
  • A sequence-to-sequence model converts an input sequence into an output sequence of a different length and is an end-to-end model that uses only a single neural network architecture. In this paper, we apply the sequence-to-sequence model to Korean phrase structure parsing. To do so, we linearize each phrase structure tree into an output sequence consisting of brackets, syntactic tags, and eojeols (word units), and replace the eojeols with the single symbol 'XX' to reduce the size of the output vocabulary. We also apply the attention mechanism and input-feeding, which have recently been studied to improve machine translation performance. Experimental results show an F1 score of 89.03% on the Sejong corpus phrase structure parsing data, higher than previous work.
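
The linearization described here, brackets plus syntactic tags with every eojeol replaced by the single symbol 'XX', can be sketched as a small recursive routine. The tree encoding and tag names below are illustrative assumptions.

```python
# Minimal sketch of the linearization idea in the abstract: a phrase-structure
# tree becomes a sequence of brackets and syntactic tags, and each eojeol
# (terminal) is replaced by 'XX' to shrink the output vocabulary.
def linearize(tree):
    """`tree` is either an eojeol string or a (tag, children) pair."""
    if isinstance(tree, str):
        return ["XX"]                      # replace every eojeol with 'XX'
    tag, children = tree
    out = ["(" + tag]
    for child in children:
        out.extend(linearize(child))
    out.append(")")
    return out

example = ("S", [("NP", ["나는"]), ("VP", [("NP", ["학교에"]), "간다"])])
print(" ".join(linearize(example)))
# (S (NP XX ) (VP (NP XX ) XX ) )
```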

Online Selective-Sample Learning of Hidden Markov Models for Sequence Classification

  • Kim, Minyoung
    • International Journal of Fuzzy Logic and Intelligent Systems
    • /
    • v.15 no.3
    • /
    • pp.145-152
    • /
    • 2015
  • We consider an online selective-sample learning problem for sequence classification, where the goal is to learn a predictive model from a stream of data samples whose class labels can be selectively queried by the algorithm. Given that there is a limit to the total number of queries permitted, the key issue is choosing the most informative and salient samples whose class labels should be queried. Recently, several aggressive selective-sample algorithms have been proposed under a linear model for static (non-sequential) binary classification. We extend the idea to hidden Markov models for multi-class sequence classification by introducing reasonable measures of the novelty and prediction confidence of the incoming sample with respect to the current model, on which the query decision is based. For several sequence classification datasets/tasks in online learning setups, we demonstrate the effectiveness of the proposed approach.
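
A minimal sketch of the kind of query rule the abstract describes, assuming per-class HMM log-likelihoods are available and taking the margin between the two largest class posteriors as the confidence measure; the margin heuristic is an assumption, not necessarily the paper's exact criterion.

```python
# Hedged sketch of a selective-sampling query rule for multi-class sequence
# classification with per-class HMMs: ask for the label only when the model
# is uncertain (small margin between the top two class posteriors).
import numpy as np

def class_posteriors(log_likelihoods, log_priors):
    """log_likelihoods[c] = log p(x | HMM_c); returns p(c | x)."""
    scores = np.asarray(log_likelihoods) + np.asarray(log_priors)
    scores -= scores.max()                 # stabilize before exponentiating
    probs = np.exp(scores)
    return probs / probs.sum()

def should_query(log_likelihoods, log_priors, margin_threshold=0.2):
    """Query the incoming sample's label if the posterior margin is small."""
    post = np.sort(class_posteriors(log_likelihoods, log_priors))[::-1]
    return (post[0] - post[1]) < margin_threshold

# Toy usage: three classes, flat prior; the model is uncertain here -> True.
print(should_query([-10.2, -10.5, -13.0], np.log([1/3, 1/3, 1/3])))
```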

Application of sequence to sequence learning based LSTM model (LSTM-s2s) for forecasting dam inflow (Sequence to Sequence based LSTM (LSTM-s2s)모형을 이용한 댐유입량 예측에 대한 연구)

  • Han, Heechan;Choi, Changhyun;Jung, Jaewon;Kim, Hung Soo
    • Journal of Korea Water Resources Association
    • /
    • v.54 no.3
    • /
    • pp.157-166
    • /
    • 2021
  • Forecasting dam inflow with high reliability is required for efficient dam operation. In this study, a deep learning technique, one of the data-driven methods used in many fields of research, was applied to predict dam inflow. A Long Short-Term Memory network with a sequence-to-sequence structure (LSTM-s2s), which performs well in predicting time-series data, was applied to forecast the inflow of the Soyang River dam. Statistical evaluation metrics, including the correlation coefficient (CC), Nash-Sutcliffe efficiency coefficient (NSE), percent bias (PBIAS), and error in peak value (PE), were used to evaluate the predictive performance of the model. The results show that the LSTM-s2s model predicted dam inflow with high accuracy and also performed well in event-based runoff prediction. The findings indicate that a deep learning based approach could support efficient dam operation for water resource management during wet and dry seasons.
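
For reference, the metrics named in the abstract can be computed from observed and simulated inflow series as sketched below, using their conventional hydrological definitions (sign conventions for PBIAS vary); the formulas are assumed rather than taken from the paper.

```python
# Standard definitions of the evaluation metrics listed in the abstract,
# applied to observed vs. simulated inflow series.
import numpy as np

def evaluate(obs, sim):
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    cc = np.corrcoef(obs, sim)[0, 1]                        # correlation coefficient
    nse = 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)
    pbias = 100.0 * np.sum(sim - obs) / np.sum(obs)         # percent bias
    pe = 100.0 * (sim.max() - obs.max()) / obs.max()        # error in peak value
    return {"CC": cc, "NSE": nse, "PBIAS": pbias, "PE": pe}

# Toy example with observed and simulated daily inflows (m^3/s).
print(evaluate([120, 340, 560, 410, 220], [130, 320, 540, 430, 250]))
```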

Misunderstandings and Difficulties in Learning Sequence and Series: A Case Study

  • Akgun, Levent;Duru, Adem
    • Research in Mathematical Education
    • /
    • v.11 no.1
    • /
    • pp.75-85
    • /
    • 2007
  • This paper analyzes the difficulties with learning sequences and series among second-year students who participated in a year-long class at the university level. The research was carried out at the end of the students' third semester. The students were randomly selected and given a paper-and-pencil test containing eight task items on sequences and series. A qualitative method (case study design) was used to explore the students' difficulties and misunderstandings in learning sequences and series. Students' responses to the questions were divided into three categories: "correct", "partially correct", and "false or no response". The responses to the paper-and-pencil test were then evaluated. The results show that students had difficulties and misunderstandings with sequences and series.

Could Decimal-binary Vector be a Representative of DNA Sequence for Classification?

  • Sanjaya, Prima;Kang, Dae-Ki
    • International journal of advanced smart convergence
    • /
    • v.5 no.3
    • /
    • pp.8-15
    • /
    • 2016
  • In recent years, one of the deep learning models, the Deep Belief Network (DBN), which is formed by stacking restricted Boltzmann machines in a greedy fashion, has been widely used for classification and recognition. With its ability to extract high-level abstract features and handle high-dimensional data, this model has achieved outstanding results in image and speech recognition. In this research, we assess the applicability of deep learning to DNA sequence classification. Since the training phase of a DBN is computationally expensive, especially when dealing with DNA sequences with thousands of variables, we introduce a new encoding method that uses a decimal-binary vector to represent the sequence as input to the model, and compare it with one-hot-vector encoding on two datasets. We evaluated the proposed model with different contrastive algorithms and achieved a significant improvement in training speed with comparable classification results. This result shows the potential of using decimal-binary vectors with DBNs on DNA sequences to solve other sequence problems in bioinformatics.
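
The abstract does not spell out the decimal-binary encoding; one plausible reading, sketched below purely as an illustration, maps each base to a decimal digit and writes it with two bits, halving the input width relative to one-hot encoding. The mapping and bit width are assumptions.

```python
# Hedged illustration: a compact binary representation of a DNA sequence
# versus one-hot encoding.  The mapping A,C,G,T -> 0..3 and the 2-bit width
# are assumptions made for this sketch, not the paper's exact scheme.
import numpy as np

BASE_TO_DIGIT = {"A": 0, "C": 1, "G": 2, "T": 3}

def one_hot(seq):
    vec = np.zeros((len(seq), 4))
    for i, b in enumerate(seq):
        vec[i, BASE_TO_DIGIT[b]] = 1.0
    return vec.ravel()                       # 4 input units per base

def decimal_binary(seq):
    bits = [(BASE_TO_DIGIT[b] >> k) & 1 for b in seq for k in (1, 0)]
    return np.array(bits, dtype=float)       # 2 input units per base

seq = "ACGTTGCA"
print(one_hot(seq).size, decimal_binary(seq).size)   # 32 vs 16 input units
```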

A Design Method of Discrete Time Learning Control System (이산시간 학습제어 시스템의 설계법)

  • 최순철
    • The Journal of Korean Institute of Communications and Information Sciences
    • /
    • v.13 no.5
    • /
    • pp.422-428
    • /
    • 1988
  • An iterative learning control system is a control system that makes system outputs follow desired outputs by iterating its trials over a finite time interval. For a discrete-time system, we propose a method in which the present control input is obtained from a linear combination of the input sequence and the time-shifted error sequence of the previous trial. In contrast with a continuous-time learning control system, which requires a differential operation on the error signal, the time-shift operation on the error sequence is simpler to realize in a computer control system, and its effectiveness is shown by simulation.
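
A minimal sketch of the update rule the abstract describes: the next trial's input is the previous input plus a gain times the time-shifted error sequence. The first-order toy plant and the learning gain below are illustrative assumptions.

```python
# Sketch of a discrete-time iterative learning control (ILC) update:
# u_{k+1}[n] = u_k[n] + gain * e_k[n+1], repeated over trials on a toy plant.
import numpy as np

def ilc_trial(u, y_des, plant, gain=0.8):
    """Run one trial of the plant, then update the input from the shifted error."""
    y = plant(u)
    e = y_des - y
    u_next = u.copy()
    u_next[:-1] += gain * e[1:]              # time-shifted error correction
    return u_next, np.max(np.abs(e))

def plant(u, a=0.5, b=1.0):
    """Toy first-order system y[n+1] = a*y[n] + b*u[n], starting from rest."""
    y = np.zeros(len(u))
    for n in range(len(u) - 1):
        y[n + 1] = a * y[n] + b * u[n]
    return y

y_des = np.sin(np.linspace(0, np.pi, 50))    # desired output over the trial
u = np.zeros(50)
for k in range(20):
    u, err = ilc_trial(u, y_des, plant)
print(f"max tracking error after 20 trials: {err:.4f}")
```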
