Title/Summary/Keyword: SQuAD

A Study on Performance Analysis of MRC Algorithm Using SQuAD (SQuAD를 활용한 MRC 알고리즘 성능 분석 연구)

  • Lim, Jong-Hyuk
    • Proceedings of the Korea Information Processing Society Conference / 2018.05a / pp.431-432 / 2018
  • MRC (machine reading comprehension) aims to find the Answer to a requested Question within an accompanying Passage, using a model trained on a dataset of Passage, Question, and Answer triples. Using the SQuAD dataset, which has recently served as a performance benchmark for MRC systems, we compare and analyze the performance of the match-LSTM and R-NET algorithms, two RNN-family models.
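
Every entry below reports Exact Match (EM) and F1 on SQuAD-style data, so a minimal sketch of those two metrics may help. It follows the official SQuAD evaluation logic in spirit (answer normalization plus token-overlap F1), though the normalization details here are simplified assumptions, not the exact official script.

```python
import re
import string
from collections import Counter

def normalize(text: str) -> str:
    """Lowercase, drop punctuation and articles, squeeze whitespace
    (a simplified version of the SQuAD answer-normalization rules)."""
    text = text.lower()
    text = "".join(ch for ch in text if ch not in string.punctuation)
    text = re.sub(r"\b(a|an|the)\b", " ", text)
    return " ".join(text.split())

def exact_match(prediction: str, gold: str) -> float:
    return float(normalize(prediction) == normalize(gold))

def f1_score(prediction: str, gold: str) -> float:
    pred_tokens = normalize(prediction).split()
    gold_tokens = normalize(gold).split()
    common = Counter(pred_tokens) & Counter(gold_tokens)
    overlap = sum(common.values())
    if overlap == 0:
        return 0.0
    precision = overlap / len(pred_tokens)
    recall = overlap / len(gold_tokens)
    return 2 * precision * recall / (precision + recall)

print(exact_match("the Eiffel Tower", "Eiffel Tower"))   # 1.0 after normalization
print(f1_score("in the Eiffel Tower", "Eiffel Tower"))   # 0.8, partial token overlap
```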

VS3-NET: Neural variational inference model for machine-reading comprehension

  • Park, Cheoneum;Lee, Changki;Song, Heejun
    • ETRI Journal / v.41 no.6 / pp.771-781 / 2019
  • We propose the VS3-NET model for machine-reading comprehension, the task of finding an appropriate answer to a question within a given context. VS3-NET trains a latent variable for each question using variational inference, on top of a simple recurrent unit-based sentence encoder and self-matching networks. Question types vary, and the answer depends on the type of question. To perform efficient inference and learning, we introduce neural question-type models that approximate the prior and posterior distributions of the latent variables, and we use these approximated distributions to optimize a reparameterized variational lower bound. Because the context in machine-reading comprehension usually comprises several sentences, performance degrades as the context grows longer; we therefore model a hierarchical structure using sentence encoding. Experimental results show that the proposed VS3-NET model achieves an exact-match score of 76.8% and an F1 score of 84.5% on the SQuAD test set.
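
The "reparameterized variational lower bound" mentioned above refers to the standard trick of sampling a latent variable differentiably. Below is a generic Gaussian reparameterization and its analytic KL penalty against a standard-normal prior; a minimal VAE-style fragment under those assumptions, not the actual VS3-NET question-type models.

```python
import torch

def reparameterize(mu: torch.Tensor, log_var: torch.Tensor) -> torch.Tensor:
    """Sample z ~ N(mu, sigma^2) as z = mu + sigma * eps with eps ~ N(0, I),
    keeping the sample differentiable w.r.t. mu and log_var."""
    std = torch.exp(0.5 * log_var)
    eps = torch.randn_like(std)
    return mu + std * eps

def kl_to_standard_normal(mu: torch.Tensor, log_var: torch.Tensor) -> torch.Tensor:
    """Analytic KL( N(mu, sigma^2) || N(0, I) ), summed over latent dims."""
    return -0.5 * torch.sum(1 + log_var - mu.pow(2) - log_var.exp(), dim=-1)

# Toy usage: a 16-dimensional latent for a batch of 4 "questions".
mu, log_var = torch.zeros(4, 16), torch.zeros(4, 16)
z = reparameterize(mu, log_var)
elbo_penalty = kl_to_standard_normal(mu, log_var).mean()  # 0 when q equals the prior
```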

Q-Net : Machine Reading Comprehension adding Question Type (Q-Net : 질문 유형을 추가한 기계 독해)

  • Kim, Jeong-Moo;Shin, Chang-Uk;Cha, Jeong-Won
    • Annual Conference on Human and Language Technology / 2018.10a / pp.645-648 / 2018
  • Machine reading comprehension is the problem of a machine understanding a given passage and finding the answer to a question within that passage. This paper adds question types to the model so that they help with answer selection. We divide questions into eight types (Person, Location, Date, Number, Why, How, What, Others) and design the model so that these types attend over the important features of the passage. To evaluate the proposed method, we ran experiments on a Korean translation of SQuAD and on the K-QuAD dataset built from Korean Wikipedia. Counting partial matches, the proposed model achieved EM 84.650% and F1 86.208%, outperforming the BiDAF model used in the paper that proposed K-QuAD.
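
A rough sketch of the central idea, a question-type embedding attending over passage features: the eight types come from the abstract, while every layer name, size, and the fuse-by-addition step are illustrative assumptions rather than the paper's architecture.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

QUESTION_TYPES = ["Person", "Location", "Date", "Number",
                  "Why", "How", "What", "Others"]

class TypeAttention(nn.Module):
    """Attend a question-type embedding over passage token features."""
    def __init__(self, hidden: int = 128):
        super().__init__()
        self.type_emb = nn.Embedding(len(QUESTION_TYPES), hidden)

    def forward(self, passage: torch.Tensor, type_id: torch.Tensor) -> torch.Tensor:
        # passage: (batch, seq_len, hidden); type_id: (batch,)
        q_type = self.type_emb(type_id)                     # (batch, hidden)
        scores = torch.bmm(passage, q_type.unsqueeze(-1))   # (batch, seq_len, 1)
        weights = F.softmax(scores.squeeze(-1), dim=-1)     # attention over tokens
        summary = torch.bmm(weights.unsqueeze(1), passage)  # (batch, 1, hidden)
        # Fuse the type-aware summary back into every token representation.
        return passage + summary.expand_as(passage)

model = TypeAttention()
out = model(torch.randn(2, 50, 128), torch.tensor([0, 3]))  # Person, Number
```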

Multi-level Attention Fusion Network for Machine Reading Comprehension (Multi-level Attention Fusion을 이용한 기계독해)

  • Park, Kwang-Hyeon;Na, Seung-Hoon;Choi, Yun-Su;Chang, Du-Seong
    • Annual Conference on Human and Language Technology / 2018.10a / pp.259-262 / 2018
  • The goal of machine reading comprehension is to enable a machine to understand a given context and answer questions about that context. In this paper, we combine multi-level attention with a fusion function that can merge information efficiently, apply a stochastic multi-step answer to the answer module, and achieve EM 78.63% and F1 86.36% on the SQuAD dev set.
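
The abstract does not spell out the paper's exact fusion function; the sketch below is a common gated-fusion pattern from this line of work, offered as an assumption-laden stand-in rather than the paper's formulation.

```python
import torch
import torch.nn as nn

class GatedFusion(nn.Module):
    """Fuse x with an attended counterpart y: build features [x; y; x*y; x-y],
    project them, and gate between the projection and the original x."""
    def __init__(self, hidden: int):
        super().__init__()
        self.proj = nn.Linear(4 * hidden, hidden)
        self.gate = nn.Linear(4 * hidden, hidden)

    def forward(self, x: torch.Tensor, y: torch.Tensor) -> torch.Tensor:
        features = torch.cat([x, y, x * y, x - y], dim=-1)
        fused = torch.tanh(self.proj(features))
        g = torch.sigmoid(self.gate(features))
        return g * fused + (1 - g) * x   # gate decides how much new info to keep

fusion = GatedFusion(hidden=128)
out = fusion(torch.randn(2, 50, 128), torch.randn(2, 50, 128))  # (2, 50, 128)
```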

I-QANet: Improved Machine Reading Comprehension using Graph Convolutional Networks (I-QANet: 그래프 컨볼루션 네트워크를 활용한 향상된 기계독해)

  • Kim, Jeong-Hoon;Kim, Jun-Yeong;Park, Jun;Park, Sung-Wook;Jung, Se-Hoon;Sim, Chun-Bo
    • Journal of Korea Multimedia Society / v.25 no.11 / pp.1643-1652 / 2022
  • Most existing machine reading comprehension research has used Recurrent Neural Network (RNN) and Convolutional Neural Network (CNN) algorithms as its networks. Among these, RNNs are slow to train, and the Question Answering Network (QANet) was introduced to improve training speed. QANet is a model composed of CNNs and self-attention. A CNN extracts semantic and syntactic information well from local context, but there is a limit to how well it extracts that information globally. Graph Convolutional Networks (GCN) extract semantic and syntactic information relatively well at the global level. In this paper, to exploit this strength of GCNs, we propose I-QANet, which replaces the CNN in QANet with a GCN. The proposed model trained 1.2 times faster than the baseline on the Stanford Question Answering Dataset (SQuAD) and scored 0.2% higher in Exact Match (EM) and 0.7% higher in F1. Furthermore, on the Korean Question Answering Dataset (KorQuAD), which consists only of Korean, training was 1.1 times faster than the baseline, and EM and F1 were also 0.9% and 0.7% higher, respectively.
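
For reference, one graph-convolution layer in the common Kipf and Welling form; this is the generic GCN operation the abstract alludes to, not I-QANet's specific encoder, and the sizes are illustrative.

```python
import torch
import torch.nn as nn

class GCNLayer(nn.Module):
    """One graph convolution: H' = ReLU(D^-1/2 (A + I) D^-1/2 H W)."""
    def __init__(self, in_dim: int, out_dim: int):
        super().__init__()
        self.linear = nn.Linear(in_dim, out_dim)

    def forward(self, h: torch.Tensor, adj: torch.Tensor) -> torch.Tensor:
        # h: (num_nodes, in_dim); adj: (num_nodes, num_nodes), unnormalized.
        a_hat = adj + torch.eye(adj.size(0))        # add self-loops
        deg = a_hat.sum(dim=1)
        d_inv_sqrt = torch.diag(deg.pow(-0.5))
        norm_adj = d_inv_sqrt @ a_hat @ d_inv_sqrt  # symmetric normalization
        return torch.relu(norm_adj @ self.linear(h))

layer = GCNLayer(in_dim=64, out_dim=64)
h = torch.randn(10, 64)                      # 10 token nodes
adj = (torch.rand(10, 10) > 0.7).float()     # toy adjacency
out = layer(h, adj)                          # (10, 64)
```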

A Study on the Implementation and Performance Verification of DistilBERT in an Embedded System(Raspberry PI 5) Environment (임베디드 시스템(Raspberry PI 5) 환경에서의 DistilBERT 구현 및 성능 검증에 관한 연구)

  • Im, Chae-woo;Kim, Eun-Ho;Suh, Jang-Won
    • Proceedings of the Korea Information Processing Society Conference / 2024.05a / pp.617-618 / 2024
  • The core of this paper is to deploy and implement DistilBERT, a lightweight version of the BERT-base model introduced in earlier work, in an embedded-system environment (Raspberry PI 5). We also compare the performance of the DistilBERT model deployed on the Raspberry PI 5 against the BERT-base model. The dataset used for evaluation is SQuAD (Stanford Question Answering Dataset), a question-answering dataset, and the verification metrics are EM (Exact Match) score, F1 score, and inference time. The experimental results demonstrate that a lightweight model such as DistilBERT runs well as on-device AI in an embedded environment such as the Raspberry PI 5.
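
A minimal sketch of SQuAD-style question answering with a SQuAD-distilled DistilBERT checkpoint via the Hugging Face transformers pipeline. The stock distilbert-base-uncased-distilled-squad checkpoint and the crude timing below are assumptions; they are not the paper's model weights or deployment harness.

```python
import time
from transformers import pipeline

# Stock SQuAD-distilled DistilBERT checkpoint (assumption; the paper's
# exact model and Raspberry PI 5 setup are not reproduced here).
qa = pipeline("question-answering",
              model="distilbert-base-uncased-distilled-squad")

context = ("The Stanford Question Answering Dataset (SQuAD) consists of "
           "questions posed by crowdworkers on a set of Wikipedia articles.")
start = time.perf_counter()
result = qa(question="What is SQuAD made of?", context=context)
elapsed = time.perf_counter() - start  # rough single-query inference time

print(result["answer"], result["score"], f"{elapsed:.3f}s")
```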

Open Domain Machine Reading Comprehension using InferSent (InferSent를 활용한 오픈 도메인 기계독해)

  • Kim, Jeong-Hoon;Kim, Jun-Yeong;Park, Jun;Park, Sung-Wook;Jung, Se-Hoon;Sim, Chun-Bo
    • Smart Media Journal / v.11 no.10 / pp.89-96 / 2022
  • Open-domain machine reading comprehension adds a paragraph-retrieval step to the model, since no paragraph related to the given question is supplied. Document retrieval based on word-frequency TF-IDF has been studied extensively but loses performance as the number of documents grows. Paragraph selection with word-based embeddings has likewise been studied extensively but fails to capture paragraph context, including sentence-level characteristics, accurately. Document reading comprehension with BERT has been studied extensively but trains slowly because of its large number of parameters. To address these three issues, this study used BM25, which also takes sentence length into account, used InferSent to obtain sentence context, and proposed an open-domain machine reading comprehension model with ALBERT to reduce the number of parameters. An experiment was conducted with the SQuAD1.1 dataset. BM25 outperformed TF-IDF in document retrieval by 3.2%. InferSent outperformed Transformer in paragraph selection by 0.9%. Finally, as the number of paragraphs increased in document comprehension, ALBERT was 0.4% higher in EM and 0.2% higher in F1.
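
A minimal sketch of the BM25 retrieval step using the rank_bm25 package; the package choice and toy corpus are assumptions, and the InferSent and ALBERT stages of the pipeline are not reproduced here.

```python
from rank_bm25 import BM25Okapi

corpus = [
    "SQuAD is a reading comprehension dataset built from Wikipedia.",
    "BM25 scores documents by term frequency, inverse document "
    "frequency, and document length.",
    "ALBERT shares parameters across layers to shrink model size.",
]
tokenized = [doc.lower().split() for doc in corpus]
bm25 = BM25Okapi(tokenized)  # unlike raw TF-IDF, BM25 normalizes for length

query = "how does bm25 rank documents".split()
scores = bm25.get_scores(query)            # one relevance score per document
best = bm25.get_top_n(query, corpus, n=1)  # highest-scoring document
print(scores, best)
```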

S2-Net: Machine reading comprehension with SRU-based self-matching networks

  • Park, Cheoneum;Lee, Changki;Hong, Lynn;Hwang, Yigyu;Yoo, Taejoon;Jang, Jaeyong;Hong, Yunki;Bae, Kyung-Hoon;Kim, Hyun-Ki
    • ETRI Journal / v.41 no.3 / pp.371-382 / 2019
  • Machine reading comprehension is the task of understanding a given context and finding the correct response within that context. A simple recurrent unit (SRU) solves the vanishing-gradient problem of recurrent neural networks (RNNs) using neural gates, like the gated recurrent unit (GRU) and long short-term memory (LSTM); moreover, it removes the previous hidden state from the input gate, improving speed over GRU and LSTM. A self-matching network, as used in R-Net, can have an effect similar to coreference resolution, because it can gather context of similar meaning by computing attention weights over its own RNN sequence. In this paper, we construct a dataset for Korean machine reading comprehension and propose an S2-Net model that adds a self-matching layer to a multilayer-SRU encoder RNN. Experimental results show that the proposed S2-Net model achieves 68.82% EM and 81.25% F1 (single) and 70.81% EM and 82.48% F1 (ensemble) on the Korean machine reading comprehension test set, and 71.30% EM and 80.37% F1 (single) and 73.29% EM and 81.54% F1 (ensemble) on the SQuAD dev set.
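
The self-matching idea, a passage attending over its own encoded sequence as in R-Net, can be sketched as plain scaled dot-product self-attention; the sizes below are illustrative, and the paper's gating and SRU layers are omitted.

```python
import math
import torch
import torch.nn.functional as F

def self_matching(passage: torch.Tensor) -> torch.Tensor:
    """Scaled dot-product attention of a passage sequence against itself,
    letting each position gather context of similar meaning elsewhere."""
    d = passage.size(-1)
    scores = passage @ passage.transpose(-2, -1) / math.sqrt(d)
    weights = F.softmax(scores, dim=-1)           # (batch, seq, seq)
    matched = weights @ passage                   # attended self-context
    return torch.cat([passage, matched], dim=-1)  # input to the next RNN layer

out = self_matching(torch.randn(2, 50, 128))      # (2, 50, 256)
```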