• Title/Summary/Keyword: natural language question

Search Results: 85

A Study of Fine Tuning Pre-Trained Korean BERT for Question Answering Performance Development (사전 학습된 한국어 BERT의 전이학습을 통한 한국어 기계독해 성능개선에 관한 연구)

  • Lee, Chi Hoon; Lee, Yeon Ji; Lee, Dong Hee
    • Journal of Information Technology Services / v.19 no.5 / pp.83-91 / 2020
  • Language models such as BERT have become a key component of deep learning-based natural language processing. Pre-training a transformer-based language model is computationally expensive, since these models consist of deep and wide attention-based architectures and require huge amounts of training data. Hence, it has become standard practice to fine-tune large pre-trained language models released by Google or other organizations that can afford the resources and cost. There are various techniques for fine-tuning such models, and this paper examines three of them: data augmentation, hyperparameter tuning, and partial reconstruction of the neural network. For data augmentation, we use no-answer augmentation and back-translation. Useful combinations of hyperparameters are identified through a series of experiments. Finally, we add GRU and LSTM networks on top of the pre-trained BERT model to boost performance. Fine-tuning the pre-trained Korean language model with the methods above pushes the F1 score from the baseline up to 89.66. Moreover, some failed attempts provide important lessons and point to promising directions for further work.
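
The network-reconstruction idea described above (adding recurrent layers on top of a pre-trained Korean BERT for extractive QA) can be sketched roughly as follows; the checkpoint name klue/bert-base and the single-GRU head are illustrative assumptions, not the authors' exact configuration.

```python
import torch
import torch.nn as nn
from transformers import AutoModel

class BertGruForQA(nn.Module):
    """Pre-trained BERT encoder with a GRU head that predicts answer start/end spans.

    A minimal sketch of "partly re-constructing the network"; the checkpoint
    name and the GRU head are illustrative, not the paper's exact setup.
    """
    def __init__(self, checkpoint="klue/bert-base", hidden=768):
        super().__init__()
        self.bert = AutoModel.from_pretrained(checkpoint)
        self.gru = nn.GRU(hidden, hidden // 2, batch_first=True, bidirectional=True)
        self.span_head = nn.Linear(hidden, 2)  # start and end logits per token

    def forward(self, input_ids, attention_mask):
        seq = self.bert(input_ids=input_ids, attention_mask=attention_mask).last_hidden_state
        seq, _ = self.gru(seq)  # re-contextualize token features with the recurrent head
        start_logits, end_logits = self.span_head(seq).split(1, dim=-1)
        return start_logits.squeeze(-1), end_logits.squeeze(-1)
```

Fine-tuning would then proceed with the usual cross-entropy loss over gold start and end positions on KorQuAD-style data.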

Developing and Pre-Processing a Dataset using a Rhetorical Relation to Build a Question-Answering System based on an Unsupervised Learning Approach

  • Dutta, Ashit Kumar; Wahab Sait, Abdul Rahaman; Keshta, Ismail Mohamed; Elhalles, Abheer
    • International Journal of Computer Science & Network Security / v.21 no.11 / pp.199-206 / 2021
  • Rhetorical relations between two text fragments are essential information that supports natural language processing applications, such as question answering (QA) systems and automatic text summarization, in producing effective output. A QA system lets users retrieve a meaningful response. There is a demand for rhetorical-relation-based datasets to develop such systems to interpret and respond to user requests. Only a limited number of datasets exist for developing an Arabic QA system; thus, there is a lack of effective QA systems for Arabic. Recent research reveals that unsupervised learning can help a QA system reply to user queries. In this study, the researchers develop a rhetorical-relation-based dataset for unsupervised learning applications. A web crawler is built to collect Arabic content from the web, and a discourse-annotated corpus is generated using rhetorical structure theory. A Naïve Bayes-based QA system is developed to evaluate the dataset. The results show that the QA system's performance improves with the proposed dataset and that it can answer user queries with an appropriate response. In addition, the results on fine-grained and coarse-grained relations show that the dataset is highly reliable.
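
As a rough illustration of the Naïve Bayes component mentioned above, the sketch below trains a multinomial Naïve Bayes classifier over bag-of-words features with scikit-learn; the toy questions and category labels are invented for the example and are not from the paper's Arabic corpus.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

# Toy (question, category) pairs; illustrative only, not the paper's data.
questions = ["what causes rain", "who wrote the report", "when was the city founded"]
labels = ["explanation", "attribution", "background"]

clf = make_pipeline(CountVectorizer(), MultinomialNB())
clf.fit(questions, labels)
print(clf.predict(["who signed the agreement"]))  # -> ['attribution'] on this toy data
```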

Biaffine Dependency Parser for Korean (Biaffine 한국어 의존파서)

  • Shadikhodjaev, Uygun; Min, Tae Hong; Youn, Junyoung; Lee, Jae Sung
    • Annual Conference on Human and Language Technology / 2018.10a / pp.678-681 / 2018
  • Dependency parsing is an important task in natural language processing whose results are used in many downstream tasks such as machine translation, information retrieval, relation extraction, and question answering. Most of the dependency parsing literature focuses on end-to-end and sequence-to-sequence neural architectures as the core of the system. One such system, the biaffine dependency parser, is explored in this paper for effective dependency parsing of the Korean language.
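
For reference, the core of a biaffine parser is an arc scorer that assigns a score to every head-dependent pair; a minimal sketch in the style of Dozat and Manning's biaffine attention follows, with illustrative dimensions rather than the paper's exact configuration.

```python
import torch
import torch.nn as nn

class BiaffineArcScorer(nn.Module):
    """Scores every (dependent, head) pair as s = h_dep^T U h_head + b^T h_head.

    A sketch of biaffine attention for arc scoring; dimensions are illustrative.
    """
    def __init__(self, dim=400):
        super().__init__()
        self.U = nn.Parameter(torch.empty(dim, dim))
        self.b = nn.Parameter(torch.zeros(dim))
        nn.init.xavier_uniform_(self.U)

    def forward(self, h_dep, h_head):
        # h_dep, h_head: (batch, seq_len, dim) token representations from an encoder (e.g. BiLSTM)
        bilinear = torch.einsum("bid,de,bje->bij", h_dep, self.U, h_head)
        bias = torch.einsum("d,bjd->bj", self.b, h_head).unsqueeze(1)
        return bilinear + bias  # (batch, seq_len, seq_len) arc scores

scores = BiaffineArcScorer()(torch.randn(2, 7, 400), torch.randn(2, 7, 400))
print(scores.shape)  # torch.Size([2, 7, 7])
```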


Question, Document, Response Validator for Question Answering System (질의 응답 시스템을 위한 질의, 문서, 답변 검증기)

  • Tae Hong Min; Jae Hong Lee; Soo Kyo In; Kiyoon Moon; Hwiyeol Jo; Kyungduk Kim
    • Annual Conference on Human and Language Technology / 2022.10a / pp.604-607 / 2022
  • This paper describes a QDR validator for question answering systems that provide answers to user queries: it verifies whether the returned answer correctly responds to the user's query on the basis of the supporting document. The task is similar to natural language inference (NLI), which judges a claim against a document, but it is harder than NLI because it takes three kinds of input in total: the query (Q) in addition to the document (D) and the response (R). To carry out the QDR validation task, about 16,000 data instances were created, and through experiments with various input formats, additional training on NLI data, and threshold tuning, the final model achieved a strong performance of 83.05%.
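
A rough sketch of how a three-input (Q, D, R) validator could be framed as sequence-pair classification with a pre-trained encoder is shown below; the checkpoint klue/roberta-base and the way Q and R are packed into one segment are assumptions for illustration, not the paper's reported configuration, and the classification head still needs to be fine-tuned on QDR data before the score is meaningful.

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

# Checkpoint and input packing are illustrative; the head must be fine-tuned on QDR data.
name = "klue/roberta-base"
tok = AutoTokenizer.from_pretrained(name)
model = AutoModelForSequenceClassification.from_pretrained(name, num_labels=2)

query = "Who founded the company?"
document = "The company was founded in 1998 by two graduate students."
response = "Two graduate students founded it in 1998."

# Pack Q and R into the first segment and D into the second (one of several possible formats).
inputs = tok(query + " " + tok.sep_token + " " + response, document,
             truncation=True, return_tensors="pt")
with torch.no_grad():
    prob_valid = torch.softmax(model(**inputs).logits, dim=-1)[0, 1].item()
print(f"validity score: {prob_valid:.3f}")
```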


Bilinear Graph Neural Network-Based Reasoning for Multi-Hop Question Answering (다중 홉 질문 응답을 위한 쌍 선형 그래프 신경망 기반 추론)

  • Lee, Sangui; Kim, Incheol
    • KIPS Transactions on Software and Data Engineering / v.9 no.8 / pp.243-250 / 2020
  • Knowledge graph-based question answering not only requires a deep understanding of the given natural language questions, but also needs effective reasoning to find the correct answers on a large knowledge graph. In this paper, we propose a deep neural network model for effective reasoning on a knowledge graph, which can find correct answers to complex questions requiring multi-hop inference. The proposed model makes use of a highly expressive bilinear graph neural network (BGNN), which can exploit context information between pairs of neighboring nodes and allows bidirectional feature propagation between each entity node and its neighbors on the knowledge graph. Through experiments with an open-domain knowledge base (Freebase) and two natural-language question answering benchmark datasets (WebQuestionsSP and MetaQA), we demonstrate the effectiveness and performance of the proposed model.
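
To make the bilinear aggregation concrete, the sketch below shows a simplified neighbor aggregator that adds pairwise element-wise interactions between neighbor representations to the usual summed linear term; it is an illustration of the bilinear-GNN idea, not the paper's exact BGNN formulation.

```python
import torch
import torch.nn as nn

class BilinearAggregator(nn.Module):
    """Aggregates a node's neighbors with a linear sum term plus pairwise
    element-wise interactions between neighbor pairs (a simplified bilinear
    GNN layer; not the paper's exact formulation)."""
    def __init__(self, dim):
        super().__init__()
        self.w_sum = nn.Linear(dim, dim)
        self.w_bi = nn.Linear(dim, dim)

    def forward(self, neighbors):
        # neighbors: (num_neighbors, dim) feature vectors of one node's neighbors
        n = neighbors.size(0)
        linear_term = self.w_sum(neighbors.sum(dim=0))
        # Sum over all unordered neighbor pairs of element-wise products:
        # 0.5 * ((sum_i h_i)^2 - sum_i h_i^2), computed without an explicit double loop.
        s = neighbors.sum(dim=0)
        pair_term = 0.5 * (s * s - (neighbors * neighbors).sum(dim=0))
        bilinear_term = self.w_bi(pair_term / max(n * (n - 1) / 2, 1))
        return torch.relu(linear_term + bilinear_term)

out = BilinearAggregator(64)(torch.randn(5, 64))
print(out.shape)  # torch.Size([64])
```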

A Question Type Classifier Using a Support Vector Machine (지지 벡터 기계를 이용한 질의 유형 분류기)

  • An, Young-Hun; Kim, Hark-Soo; Seo, Jung-Yun
    • Annual Conference on Human and Language Technology / 2002.10e / pp.129-136 / 2002
  • To build a high-performance question answering system, a question type classifier is needed that can identify the user's intent regardless of how difficult the question type is. This paper proposes a question type classifier based on text categorization techniques. The classification process is as follows. First, features are extracted from the user's question using various kinds of information contained in it, such as words, parts of speech, and semantic tags. A sliding-window technique is used in this step to reflect the syntactic characteristics of the question. In addition, the chi-square statistic is used to select only the useful features from the large feature set. The extracted features are represented in a vector space model, and a support vector machine (SVM), one of the text categorization techniques, classifies the question type using this information. By applying SVM-based automatic text categorization to the question type classification problem, the proposed system achieved a high classification accuracy of 86.4%. Moreover, because the question type classifier is built with a statistical method, manual work such as writing lexico-syntactic pattern rules is unnecessary, and the system provides stable processing and fast portability when the application domain changes.
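
A compact scikit-learn analogue of the pipeline described above (bag-of-features, chi-square feature selection, SVM classifier) might look like this; the toy questions, type labels, and word n-gram features stand in for the paper's lexical/POS/semantic-tag features and are purely illustrative.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.feature_selection import SelectKBest, chi2
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

# Toy questions and their types; the data and the feature set are illustrative only.
questions = ["who is the president", "where is the museum", "when did the war end",
             "who painted this", "where can I park", "when does it open"]
types = ["PERSON", "LOCATION", "TIME", "PERSON", "LOCATION", "TIME"]

# Word n-grams stand in for the lexical/POS/semantic-tag features;
# the chi-square filter keeps only the most informative ones before the SVM.
clf = make_pipeline(CountVectorizer(ngram_range=(1, 2)),
                    SelectKBest(chi2, k=10),
                    LinearSVC())
clf.fit(questions, types)
print(clf.predict(["where is the station"]))  # expected: ['LOCATION'] on this toy data
```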


From Montague Grammar to Database Semantics

  • Hausser, Roland
    • Language and Information / v.19 no.2 / pp.1-18 / 2015
  • This paper retraces the development of Database Semantics (DBS) from its beginnings in Montague grammar. It describes the changes over the course of four decades and explains why they were seen to be necessary. DBS was designed to answer the central theoretical question for building a talking robot: how does the mechanism of natural language communication work? For doing what is requested and reporting what is going on, a talking robot requires not only language but also non-language cognition. The contents of non-language cognition are re-used as the meanings of the language surfaces. Robot-externally, DBS handles the language-based transfer of content by using nothing but modality-dependent, unanalyzed external surfaces such as sound shapes or dots on paper, produced in the speak mode and recognized in the hear mode. Robot-internally, DBS reconstructs cognition by integrating linguistic notions like functor-argument and coordination, philosophical notions like concept-, pointer-, and baptism-based reference, and notions of computer science like input-output, interface, data structure, algorithm, database schema, and functional flow.


Experimental Analysis of Correct Answer Characteristics in Question Answering Systems (질의응답시스템에서 정답 특징에 관한 실험적 분석)

  • Han, Kyoung-Soo
    • Journal of Digital Contents Society / v.19 no.5 / pp.927-933 / 2018
  • One of the factors with the greatest influence on the errors of a question answering system, which finds and provides answers to natural language questions, is the step of retrieving documents or passages that contain the correct answer. In order to improve retrieval performance, it is necessary to understand the characteristics of documents and passages containing correct answers. This paper experimentally analyzes how many question words appear in correct answer documents, how the positions of the question words are distributed, and how similar the topics of the question and the correct answer document are, using a corpus composed of questions, documents containing the correct answer, and documents without the correct answer. The study explains the causes behind previously reported retrieval results for question answering systems and discusses the elements required for an effective retrieval step.
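
A toy version of the kind of measurement described above, counting question-word coverage and the relative positions at which question words occur in a document, could look as follows (purely illustrative; not the paper's code or corpus).

```python
import re

def question_word_stats(question: str, document: str):
    """Counts how many of the question's words appear in a document and where
    (relative positions); a toy analysis in the spirit of the paper."""
    q_words = set(re.findall(r"\w+", question.lower()))
    d_words = re.findall(r"\w+", document.lower())
    positions = [i / max(len(d_words) - 1, 1)
                 for i, w in enumerate(d_words) if w in q_words]
    matched = {w for w in d_words if w in q_words}
    return {"coverage": len(matched) / max(len(q_words), 1), "positions": positions}

stats = question_word_stats("when was the bridge built",
                            "The bridge was built in 1932 and rebuilt after the flood.")
print(stats["coverage"], stats["positions"][:3])
```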

Inverse Document Frequency-Based Word Embedding of Unseen Words for Question Answering Systems (질의응답 시스템에서 처음 보는 단어의 역문헌빈도 기반 단어 임베딩 기법)

  • Lee, Wooin; Song, Gwangho; Shim, Kyuseok
    • Journal of KIISE / v.43 no.8 / pp.902-909 / 2016
  • A question answering (QA) system is a system that finds an actual answer to the question posed by a user, whereas a typical search engine only returns links to relevant documents. Recent work on open-domain QA systems is receiving much attention in the fields of natural language processing, artificial intelligence, and data mining. However, prior work on QA systems simply replaces all words that are not in the training data with a single token, even though such unseen words are likely to play a crucial role in differentiating candidate answers from actual answers. In this paper, we propose a method to compute vectors for such unseen words by taking into account the contexts in which they occur. We also propose a model that utilizes inverse document frequency (IDF) to efficiently process unseen words by expanding the system's vocabulary. Finally, we validate through experiments that the proposed method and model improve the performance of a QA system.
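
One simple way to realize the idea, building a vector for an out-of-vocabulary word from its context words and weighting each by inverse document frequency, is sketched below; the weighting scheme and the toy data are illustrative and are not the paper's exact model.

```python
import math
import numpy as np

def idf(word, documents):
    df = sum(1 for doc in documents if word in doc)
    return math.log((1 + len(documents)) / (1 + df)) + 1.0

def embed_unseen(context_words, word_vectors, documents):
    """Vector for an out-of-vocabulary word: IDF-weighted average of the vectors
    of its in-vocabulary context words (illustrative, not the paper's model)."""
    pairs = [(word_vectors[w], idf(w, documents))
             for w in context_words if w in word_vectors]
    if not pairs:
        return np.zeros(next(iter(word_vectors.values())).shape)
    vecs, weights = zip(*pairs)
    return np.average(np.stack(vecs), axis=0, weights=weights)

# Toy vocabulary vectors and corpus; a real system would use trained embeddings.
vectors = {"river": np.array([1.0, 0.0]), "flows": np.array([0.5, 0.5]), "the": np.array([0.1, 0.1])}
corpus = [{"the", "river", "flows"}, {"the", "road"}, {"the", "river"}]
print(embed_unseen(["the", "river", "flows"], vectors, corpus))
```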

A Query Classification Method for Question Answering on a Large-Scale Text Data (대규모 문서 데이터 집합에서 Q&A를 위한 질의문 분류 기법)

  • 엄재홍; 장병탁
    • Proceedings of the Korean Information Science Society Conference / 2000.04b / pp.253-255 / 2000
  • When a user wants a concrete answer to a question, the problem with conventional information retrieval is that the result is a set of documents that contain (or do not contain) the answer, rather than the answer the user is actually looking for. For users to obtain the desired information quickly without reading every candidate document, a system is needed that returns the actual answer instead of a set of documents. To this end, question classification and prior tagging of documents, both based on natural language processing, can be added to conventional TF-IDF (term frequency-inverse document frequency) based retrieval. In this study, under the experimental setting of the 'Question & Answer' task of TREC-8 (held in 1999 as part of the Text REtrieval Conference organized annually by NIST, the National Institute of Standards & Technology, and DARPA, the Defense Advanced Research Projects Agency), which requires finding an answer to a user's question, we show experimentally that an information retrieval system that classifies questions by their characteristics and uses prior tagging of the target documents can present answers to user questions more accurately and efficiently.
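
A toy sketch of combining TF-IDF retrieval with question-type classification and pre-tagged documents, in the spirit of the approach described above, follows; the rule-based answer_type function, the passages, and their tags are invented for the example.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Toy passages, each pre-tagged with the answer types it contains; illustrative only.
passages = [("The Eiffel Tower was completed in 1889.", {"TIME"}),
            ("Gustave Eiffel designed the tower.", {"PERSON"}),
            ("The tower stands in Paris, France.", {"LOCATION"})]

def answer_type(question):
    # Stand-in for the NLP-based question classifier described in the paper.
    q = question.lower()
    if q.startswith("who"):
        return "PERSON"
    if q.startswith("when"):
        return "TIME"
    return "LOCATION"

question = "Who designed the Eiffel Tower?"
wanted = answer_type(question)

vec = TfidfVectorizer()
doc_matrix = vec.fit_transform([p for p, _ in passages])
scores = cosine_similarity(vec.transform([question]), doc_matrix)[0]

# Rank by TF-IDF similarity, keeping only passages tagged with the expected answer type.
ranked = sorted(((s, p) for s, (p, tags) in zip(scores, passages) if wanted in tags),
                reverse=True)
print(ranked[0][1])  # -> "Gustave Eiffel designed the tower."
```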
