• Title/Summary/Keyword: Answer Type

Search Results: 325

Restricting Answer Candidates Based on Taxonomic Relatedness of Integrated Lexical Knowledge Base in Question Answering

  • Heo, Jeong;Lee, Hyung-Jik;Wang, Ji-Hyun;Bae, Yong-Jin;Kim, Hyun-Ki;Ock, Cheol-Young
    • ETRI Journal
    • /
    • v.39 no.2
    • /
    • pp.191-201
    • /
    • 2017
  • This paper proposes an approach using taxonomic relatedness for answer-type recognition and type coercion in a question-answering system. We introduce a question analysis method for a lexical answer type (LAT) and semantic answer type (SAT) and describe the construction of a taxonomy linking them. We also analyze the effectiveness of type coercion based on the taxonomic relatedness of both ATs. Compared with the rule-based approach of IBM's Watson, our LAT detector, which combines rule-based and machine-learning approaches, achieves an 11.04% recall improvement without a sharp decline in precision. Our SAT classifier with a relatedness-based validation method achieves a precision of 73.55%. For type coercion using the taxonomic relatedness between both ATs and answer candidates, we construct an answer-type taxonomy that has a semantic relationship between the two ATs. In this paper, we introduce how to link heterogeneous lexical knowledge bases. We propose three strategies for type coercion based on the relatedness between the two ATs and answer candidates in this taxonomy. Finally, we demonstrate that this combination of individual type coercion creates a synergistic effect.
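The core idea of type coercion via taxonomic relatedness can be sketched with a toy taxonomy: relate a candidate's type to the predicted answer type by path distance and keep only sufficiently related candidates. The taxonomy, names, and threshold below are illustrative placeholders, not the paper's actual answer-type taxonomy or scoring.

```python
# child -> parent edges of a toy answer-type taxonomy
PARENT = {
    "scientist": "person",
    "politician": "person",
    "person": "entity",
    "city": "location",
    "location": "entity",
}

def ancestors(node):
    """Return the chain from node up to the taxonomy root."""
    chain = [node]
    while node in PARENT:
        node = PARENT[node]
        chain.append(node)
    return chain

def path_distance(a, b):
    """Number of edges between two types via their lowest common ancestor."""
    up_a, up_b = ancestors(a), ancestors(b)
    for i, n in enumerate(up_a):
        if n in up_b:
            return i + up_b.index(n)
    return float("inf")  # disconnected types

def coerce(candidates, answer_type, max_dist=2):
    """Keep candidates whose type is taxonomically close to the answer type."""
    return [c for c, ctype in candidates
            if path_distance(ctype, answer_type) <= max_dist]

candidates = [("Einstein", "scientist"), ("Seoul", "city")]
print(coerce(candidates, "person"))  # ['Einstein']
```

The paper combines several such relatedness-based strategies; this sketch shows only the simplest path-distance filter.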

Concept-based Question Answering System

  • Kang Yu-Hwan;Shin Seung-Eun;Ahn Young-Min;Seo Young-Hoon
    • International Journal of Contents
    • /
    • v.2 no.1
    • /
    • pp.17-21
    • /
    • 2006
  • In this paper, we describe a concept-based question-answering system in which concepts, rather than keywords themselves, play an important role in both question analysis and answer extraction. Our idea is that the concepts occurring in questions of the same type are similar, and that if a question is analyzed according to those concepts, we can extract a more accurate answer because we know the semantic role of each word or phrase in the question. A concept frame is defined for each question type and is composed of the important concepts of that type. Currently there are 79 question types, including 34 types for persons, 14 types for locations, and so on. We evaluated this concept-based approach on questions whose answer is a person's name. Experimental results show that our system achieves high accuracy in answer extraction. This concept-based approach can also be used in combination with conventional approaches.
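The concept-frame idea can be illustrated with a minimal sketch: each question type lists the concepts its frame expects, and the question's words fill the slots via a concept lexicon. The slot names, lexicon entries, and question type below are invented for demonstration; the paper's 79 frames are far richer.

```python
# maps surface words to concepts (toy lexicon)
CONCEPT_LEXICON = {
    "president": "ROLE",
    "korea": "ORGANIZATION",
    "first": "ORDINAL",
    "who": "PERSON_INTERROGATIVE",
}

# each question type lists the concepts its frame expects
FRAMES = {
    "person-by-role": ["PERSON_INTERROGATIVE", "ROLE", "ORGANIZATION"],
}

def fill_frame(question, qtype):
    """Fill the frame for qtype with question words; unmatched slots stay None."""
    concepts = {CONCEPT_LEXICON[w]: w
                for w in question.lower().split() if w in CONCEPT_LEXICON}
    return {slot: concepts.get(slot) for slot in FRAMES[qtype]}

frame = fill_frame("Who was the first president of Korea", "person-by-role")
print(frame)  # {'PERSON_INTERROGATIVE': 'who', 'ROLE': 'president', 'ORGANIZATION': 'korea'}
```

A filled frame tells the answer extractor which role each phrase plays, which is what enables the more accurate extraction the abstract describes.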
Implementation of OMR Answer Paper Scoring Method Using Image Processing Method (영상처리기법을 활용한 OMR 답안지 채점방법의 구현)

  • Kwon, Hiok-Han;Hwang, Gi-Hyun
    • Journal of the Institute of Convergence Signal Processing
    • /
    • v.12 no.3
    • /
    • pp.169-175
    • /
    • 2011
  • In this paper, an automatic scoring system for OMR answer sheets is implemented using grayscale conversion and an image segmentation method. The proposed method extracts the OMR data of a multiple-choice answer sheet from a captured image. In addition, an on-line scoring system is developed for marking the short-answer questions on the reverse side, so teachers can grade short answers anytime and anywhere within the available time. The system has the advantage of scoring multiple-choice answer sheets without an additional OMR reader. In the future, the grading of short-answer questions will become more efficient if it is also performed by an automatic scoring system based on image processing.
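The grayscale-plus-segmentation idea behind OMR mark detection can be reduced to a minimal sketch: split one question row of a (toy) grayscale image into equal-width bubble cells and pick the darkest cell as the marked choice. Real sheets additionally need deskewing, registration marks, and noise handling, none of which is shown here.

```python
def detect_mark(row_pixels, n_choices, dark_threshold=128):
    """row_pixels: 2D list of 0-255 intensities for one question row.
    Returns the 0-based index of the darkest bubble cell, or None if
    no cell is dark enough to count as a mark."""
    height = len(row_pixels)
    width = len(row_pixels[0])
    cell_w = width // n_choices  # segment the row into equal-width cells
    means = []
    for c in range(n_choices):
        cell = [row_pixels[y][x]
                for y in range(height)
                for x in range(c * cell_w, (c + 1) * cell_w)]
        means.append(sum(cell) / len(cell))
    darkest = min(range(n_choices), key=lambda c: means[c])
    return darkest if means[darkest] < dark_threshold else None

# toy 2x8 row: choice 2 (third cell) is filled in (dark pixels)
row = [[255, 255, 255, 255, 10, 10, 255, 255],
       [255, 255, 255, 255, 12, 14, 255, 255]]
print(detect_mark(row, 4))  # 2
```

Thresholding the per-cell mean intensity is the simplest form of the grayscale method the abstract names; a production system would binarize the whole sheet first and locate cells from registration marks.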

Recognition of Answer Type for WiseQA (WiseQA를 위한 정답유형 인식)

  • Heo, Jeong;Ryu, Pum Mo;Kim, Hyun Ki;Ock, Cheol Young
    • KIPS Transactions on Software and Data Engineering
    • /
    • v.4 no.7
    • /
    • pp.283-290
    • /
    • 2015
  • In this paper, we propose a hybrid method for the recognition of answer types in the WiseQA system. The answer types are classified into two categories: the lexical answer type (LAT) and the semantic answer type (SAT). We propose two models for LAT detection: a rule-based model using question focuses, and a machine-learning model based on sequence labeling. We also propose two models for SAT classification: a machine-learning model based on multiclass classification, and a filtering-rule model based on the lexical answer type. The LAT detection and the SAT classification achieve an F1-score of 82.47% and a precision of 77.13%, respectively. Compared with IBM Watson's LAT performance, our precision is 1.0% lower and our recall is 7.4% higher.
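The hybrid combination can be sketched as follows: a rule-based detector fires on explicit question focuses to produce a LAT, and the SAT labels from a (stubbed) classifier are then filtered by a LAT-SAT compatibility table. All of the word lists and tables below are toy placeholders, not the paper's models.

```python
# focus word -> lexical answer type (toy rules)
FOCUS_WORDS = {"city": "city", "person": "person", "year": "year"}

# which semantic answer types are compatible with each LAT
LAT_SAT_COMPAT = {
    "city": {"LOCATION"},
    "person": {"PERSON"},
    "year": {"DATE", "NUMBER"},
}

def detect_lat(question):
    """Rule-based LAT: first focus word found in the question, else None."""
    for w in question.lower().split():
        if w in FOCUS_WORDS:
            return FOCUS_WORDS[w]
    return None

def filter_sat(sat_scores, lat):
    """Drop classifier SAT labels incompatible with the detected LAT."""
    if lat is None:
        return sat_scores
    allowed = LAT_SAT_COMPAT.get(lat, set())
    return {s: p for s, p in sat_scores.items() if s in allowed}

lat = detect_lat("Which city hosted the 1988 Olympics?")
sats = filter_sat({"LOCATION": 0.6, "PERSON": 0.3, "DATE": 0.1}, lat)
print(lat, sats)  # city {'LOCATION': 0.6}
```

In the paper the ML side is a sequence labeler and a multiclass classifier; here both are reduced to table lookups so the filtering-rule interaction is visible.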

Research of Verifying the Remote Test Answer Sheets Authentication (원격시험 컴퓨터활용 답안지 진본성 검증에 관한 연구)

  • Park, Kee-Hong;Jang, Hae-Sook
    • Journal of the Korea Society of Computer and Information
    • /
    • v.17 no.3
    • /
    • pp.135-141
    • /
    • 2012
  • The development of the Internet has brought many changes in methods of education and assessment. In on-line distance education, the tests that check learning outcomes are taken over the Internet. Current trends in educational evaluation focus on the types of questions and on remote proctoring, but not on verifying the authenticity of the answer sheet. Answers take several forms: multiple-choice, short-answer, essay, practical exercise, and so on. All of these can be handled over the Internet except the practical exercise type, because the source of the examinee's answer sheet is unreliable. In this paper, we build a verification system that resolves this doubt by embedding proof information in the answer sheet. The distinctive feature of this system is that authentication information is recorded on the server during the exam; after the test finishes, the system verifies the answer sheet when the examinee submits it.

A Study on Word Semantic Categories for Natural Language Question Type Classification and Answer Extraction (자연어 질의유형 판별과 응답 추출을 위한 어휘 의미 체계에 관한 연구)

  • Yoon Sung-Hee
    • Journal of the Korea Academia-Industrial cooperation Society
    • /
    • v.5 no.6
    • /
    • pp.539-545
    • /
    • 2004
  • For a question-answering system that extracts and outputs an answer to a user's natural-language question, classifying the question type from the user's query is a very important step. This paper proposes a question and answer type classifier that uses interrogatives and word semantic categories instead of complicated classification rules and huge dictionaries. Synonym and postfix information is also used for question type classification. Experiments show that the semantic categories help classify question types even when no interrogative is present.
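The category-based classification described above can be sketched without relying on interrogatives at all: normalize synonyms, then map a content word to a semantic category that determines the expected answer type. All word lists here are invented for illustration and stand in for the paper's word semantic categories.

```python
# synonym normalization (toy): map variants to a canonical word
SYNONYMS = {"author": "writer", "novelist": "writer"}

# canonical word -> semantic category implying the answer type
SEMANTIC_CATEGORY = {"writer": "PERSON", "capital": "LOCATION", "price": "NUMBER"}

def classify(question):
    """Return the answer type implied by the first categorized content word."""
    for w in question.lower().strip("?").split():
        w = SYNONYMS.get(w, w)  # fold synonyms before the category lookup
        if w in SEMANTIC_CATEGORY:
            return SEMANTIC_CATEGORY[w]
    return "UNKNOWN"

print(classify("Name the novelist of War and Peace"))  # PERSON
```

Note that the example question contains no interrogative ("who", "where"), yet the semantic category of "novelist" still yields the expected answer type, which mirrors the paper's finding.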
Concept-based Question Analysis for Accurate Answer Extraction (정확한 해답 추출을 위한 개념 기반의 질의 분석)

  • Shin, Seung-Eun;Kang, Yu-Hwan;Ahn, Young-Min;Park, Hee-Guen;Seo, Young-Hoon
    • The Journal of the Korea Contents Association
    • /
    • v.7 no.1
    • /
    • pp.10-20
    • /
    • 2007
  • This paper describes a concept-based question analysis that analyzes concepts, which are more important than keywords, for accurate answer extraction. Our idea is that we can extract correct answers from paragraphs with different structures when we use well-defined concepts, because the concepts occurring in questions of the same answer type are similar. That is, we analyze the syntactic and semantic role of each word or phrase in a question in order to retrieve more relevant documents and extract a more accurate answer from them. For each answer type, we define a concept frame composed of the concepts commonly occurring in that type of question, and we analyze a user's question by filling the concept frame with its words and phrases. Empirical results show that our concept-based question analysis extracts more accurate answers than conventional approaches. The concept-based approach has the additional merits that it is a language-universal model and can be combined with arbitrary conventional approaches.

An Analysis of Errors in Describing Solving Process for High School Geometry and Vectors (고등학교 기하와 벡터 과목에서 풀이과정 서술의 오류 분석)

  • Hwang, Jae-woo;Boo, Deok Hoon
    • The Mathematical Education
    • /
    • v.56 no.1
    • /
    • pp.63-80
    • /
    • 2017
  • By analysing examination papers from third-grade high school students, we classified the errors occurring in the problem-solving process of the high school 'Geometry and Vectors' course into several types. There are five main types: (A) Insufficient Content Knowledge, (B) Wrong Method, (C) Logical Invalidity, (D) Unskilled Expression, and (E) Interference. Types A and B lead to an incorrect answer, while types C and D cannot be detected by multiple-choice or closed-answer questions. Some of these types are divided into subtypes: (B1) Incompletion, (B2) Omitted Condition, (B3) Incorrect Calculation, (C1) Non-reasoning, (C2) Insufficient Reasoning, (C3) Illogical Process, (D1) Arbitrary Symbol, (D2) Using a Character Without Explanation, (D3) Visual Dependence, (D4) Symbol Incorrectly Used, and (D5) Ambiguous Expression. Based on these error types, the answers to each problem were analysed in detail, and proper ways to correct or prevent the errors were suggested case by case. When problems from the periodical test were given again in descriptive form, 67% of the students attempted an answer and 14% answered flawlessly, even though the percentage of correct answers had been higher than 40% in the multiple-choice form. Of the students who attempted an answer, 34% failed to achieve logical validity and 37% lacked the skill to express their reasoning. In lessons on conic sections, teachers should be aware of several issues: students easily confuse 'focus' with 'vertex', and 'components of a vector' with 'coordinates of a point'; they often use undefined expressions when mentioning a parallel translation; and when introducing a character, they must be taught to define it precisely. Addressing these issues helps prevent errors and leads students to express their solutions correctly.

A Study on Clustering Query-answer Documents with Structural Features (문서구조를 이용한 질의응답문서 클러스터링에 관한 연구)

  • Choi, Sang-Hee
    • Journal of the Korean Society for Library and Information Science
    • /
    • v.39 no.4
    • /
    • pp.105-118
    • /
    • 2005
  • As the number of users who ask and answer questions in query-answer document retrieval systems grows exponentially, query-answer documents have become a crucial information resource in a new type of information retrieval service. A query-answer document consists of three structural parts: a query, an explanation of the query, and the answers chosen by the user who asked it. To identify the role of each structural part in representing the topics of documents, the three structural parts were clustered automatically, and the results of several clustering tests were compared in this study.
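The study's comparison, clustering the same documents by different structural parts to see which part groups topics best, can be reduced to a small sketch. Here "clustering" is simplified to a Jaccard-similarity check over one chosen part of a toy corpus; a real experiment would use proper vectorization and a clustering algorithm.

```python
def jaccard(a, b):
    """Word-set overlap between two text fields."""
    sa, sb = set(a.lower().split()), set(b.lower().split())
    return len(sa & sb) / len(sa | sb)

# toy query-answer documents with two structural parts each
docs = [
    {"query": "how to install python", "answers": "use the official installer"},
    {"query": "python install on windows", "answers": "download from python.org"},
    {"query": "best pasta recipe", "answers": "boil water add salt"},
]

def pairwise(part):
    """Similarity of every document pair using only one structural part."""
    return {(i, j): jaccard(docs[i][part], docs[j][part])
            for i in range(len(docs)) for j in range(i + 1, len(docs))}

sims = pairwise("query")
# the two python questions should be more similar than either is to the recipe
print(sims[(0, 1)] > sims[(0, 2)])  # True
```

Running `pairwise` with each structural part in turn, and checking which part best separates topics, is the shape of the comparison the abstract describes.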

Collective Intelligence based Wrong Answer Note System (집단지성 기반 오답노트 시스템)

  • Ha, Jin Seog;Kim, Chang Suk
    • Journal of the Korean Institute of Intelligent Systems
    • /
    • v.25 no.5
    • /
    • pp.457-463
    • /
    • 2015
  • This paper presents the need for a collective-intelligence-based wrong-answer note system for timely learning, and reports on its utilization and user satisfaction. A conventional wrong-answer note simply checks whether each evaluation item was answered right or wrong and provides uniform explanations for the items answered wrong. This leaves much to be improved in error analysis and feedback: it gives no feedback on items a learner got right by luck despite a poor understanding, nor on the errors learners made while selecting their wrong answers. The SERO wrong-answer note was designed to identify and capture such "score errors" and to compensate for the practical weaknesses of learners. The Stability-Emergency-Risk-Opportunity (SERO) note categorizes the items answered by an examinee into four types (S, E, R, and O) and provides commentary on correct as well as incorrect answers, presenting a variety of commentary notes built through collective intelligence. The study shows that satisfaction with the system is high.
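One plausible reading of the four-way SERO split is a 2x2 of correctness versus the learner's self-assessed confidence, which would capture the "right by luck" items the abstract emphasizes. The mapping below is a hypothetical illustration; the paper's actual criteria for the S, E, R, and O types may differ.

```python
def sero_type(correct, confident):
    """Classify an answered item into a SERO quadrant (assumed mapping)."""
    if correct and confident:
        return "S"  # Stability: understood and answered correctly
    if correct and not confident:
        return "O"  # Opportunity: right by luck; needs review despite the score
    if not correct and confident:
        return "R"  # Risk: confidently wrong; a misconception to correct
    return "E"      # Emergency: a known gap in understanding

items = [(True, True), (True, False), (False, True), (False, False)]
print([sero_type(c, f) for c, f in items])  # ['S', 'O', 'R', 'E']
```

Under this reading, a conventional wrong-answer note only ever sees the correct/incorrect axis, which is exactly why it misses the O-type items the SERO note targets.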