• Title/Summary/Keyword: automated English scoring (영어 자동 채점)


Developing an Automated English Sentence Scoring System for Middle-school Level Writing Test by Using Machine Learning Techniques (기계학습을 이용한 중등 수준의 단문형 영어 작문 자동 채점 시스템 구현)

  • Lee, Gyoung Ho; Lee, Kong Joo
    • Journal of KIISE, v.41 no.11, pp.911-920, 2014
  • In this paper, we introduce an automatic scoring system for a middle-school level writing test based on machine learning techniques. We discuss the overall process and the features used to build an automatic English writing scoring system. A "concept answer", which represents the abstract meaning of a text, is newly introduced in order to evaluate the elaboration of a student's answer. In this work, multiple machine learning algorithms are adopted for scoring English writing. We propose a decision process, "optimal combination", which combines the outputs of the multiple machine learning algorithms into a single final output in order to improve scoring performance. Through experiments with actual test data, we evaluate the performance of the overall automated English writing scoring system.
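
The abstract does not detail how "optimal combination" weighs the individual classifiers. As a rough illustration of combining multiple model outputs into one final score, here is a minimal Python sketch assuming each model reports a label with a confidence; the majority-then-confidence rule and all names are illustrative, not the paper's actual procedure.

```python
from collections import Counter

def combine_outputs(predictions):
    """Combine (label, confidence) pairs, one per model, into one label.

    A strict majority wins; otherwise the most confident model decides.
    """
    labels = [label for label, _ in predictions]
    top_label, count = Counter(labels).most_common(1)[0]
    if count > len(labels) / 2:                      # strict majority
        return top_label
    return max(predictions, key=lambda p: p[1])[0]   # highest confidence

# e.g. three models scoring one student answer on a 0-2 scale
print(combine_outputs([(2, 0.81), (1, 0.55), (2, 0.62)]))  # -> 2
```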

Implementing Automated English Error Detecting and Scoring System for Junior High School Students (중학생 영작문 실력 향상을 위한 자동 문법 채점 시스템 구축)

  • Kim, Jee-Eun; Lee, Kong-Joo
    • The Journal of the Korea Contents Association, v.7 no.5, pp.36-46, 2007
  • This paper presents an automated English scoring system designed to help non-native speakers of English, Korean-speaking learners in particular. The system was developed to help third-grade junior high school students improve their English grammar skills. Without human effort, the system identifies grammar errors in English sentences, provides feedback on the detected errors, and scores the sentences. Detecting grammar errors requires implementing a special type of rule in addition to the rules that parse grammatical sentences. Error production rules are implemented to analyze ungrammatical sentences and recognize syntactic errors. The rules are collected from junior high school textbooks and real student test data. When these rules fire, the errors are detected and the corresponding error flags are set, and the system continues the parsing process without failure. As the final step, the system scores the student sentences based on the errors detected. The system is evaluated with real English test data produced by the students and answers provided by human teachers.
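
The paper embeds its error production rules inside a parser; the flag-and-continue idea can be illustrated outside a parser with a few pattern rules. A minimal sketch, with entirely hypothetical rules and flag names:

```python
import re

# Hypothetical error-production rules: each pattern recognizes a known
# ungrammatical construction and raises a flag instead of failing.
ERROR_RULES = [
    ("SUBJ_VERB_AGR", re.compile(r"\b(he|she|it)\s+(go|do|have)\b", re.I)),
    ("ARTICLE_AN",    re.compile(r"\ba\s+[aeiou]\w*", re.I)),
]

def detect_errors(sentence):
    """Return all error flags raised on a sentence; processing
    continues no matter how many flags are set."""
    return [flag for flag, pattern in ERROR_RULES if pattern.search(sentence)]

print(detect_errors("She go to school."))  # -> ['SUBJ_VERB_AGR']
```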

Building an Automated Scoring System for a Single English Sentence (단문형의 영작문 자동 채점 시스템 구축)

  • Kim, Jee-Eun; Lee, Kong-Joo; Jin, Kyung-Ae
    • The KIPS Transactions: Part B, v.14B no.3 s.113, pp.223-230, 2007
  • The purpose of developing an automated scoring system for English composition is to score tests of English sentence writing and to give feedback on them without human effort. This paper presents an automated system for scoring English composition whose input is a single sentence, not an essay. Dealing with a single sentence as input has advantages in comparing the input with the answers given by human teachers and in giving detailed feedback to the test takers. The system has been developed and tested with real test data collected through English tests given to third-grade junior high school students. Two processing steps are required to score a single sentence. The first is analyzing the input sentence in order to detect possible errors, such as spelling and syntactic errors. The second is comparing the input sentence with the given answer to identify the differences as errors. The results produced by the system were then compared with those provided by human raters.
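
The second step (comparing the input with the given answers) can be sketched with token-level diffs. A minimal sketch assuming one point is deducted per differing token against the closest answer; the deduction scheme is illustrative, not the paper's:

```python
import difflib

def score_sentence(student, answers, max_score=3):
    """Score a sentence against teacher-provided answers by counting
    token differences against the closest answer."""
    best = max(answers, key=lambda a: difflib.SequenceMatcher(
        None, student.split(), a.split()).ratio())
    matcher = difflib.SequenceMatcher(None, student.split(), best.split())
    errors = sum(max(i2 - i1, j2 - j1)
                 for op, i1, i2, j1, j2 in matcher.get_opcodes()
                 if op != "equal")
    return max(max_score - errors, 0)

print(score_sentence("He go to the school yesterday",
                     ["He went to the school yesterday"]))  # -> 2
```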

Automated Scoring System for Korean Short-Answer Questions Using Predictability and Unanimity (기계학습 분류기의 예측확률과 만장일치를 이용한 한국어 서답형 문항 자동채점 시스템)

  • Cheon, Min-Ah; Kim, Chang-Hyun; Kim, Jae-Hoon; Noh, Eun-Hee; Sung, Kyung-Hee; Song, Mi-Young
    • KIPS Transactions on Software and Data Engineering, v.5 no.11, pp.527-534, 2016
  • The emerging information society requires talent for creative thinking based on problem-solving skills and comprehensive thinking rather than simple memorization. The Korean curriculum has therefore moved in the direction of creative thinking by increasing the number of short-answer questions, which can assess the overall thinking of students. However, the scoring results are somewhat inconsistent because scoring short-answer questions depends on the subjective judgment of human raters. To alleviate this, automated scoring systems based on machine learning have been used as scoring tools overseas. Linguistically, Korean and English are totally different in sentence structure, so automated scoring systems used for English cannot be applied directly to Korean. In this paper, we introduce an automated scoring system for Korean short-answer questions that uses predictability and unanimity. We also verify the practicality of the system through the correlation between its results and those of human raters. In our experiments, the proposed system is evaluated on constructed-response items in Korean language, social studies, and science from the National Assessment of Educational Achievement, analyzed with Pearson correlation coefficients and the kappa coefficient. The results showed a strong positive correlation, with all correlation coefficients at 0.7 or higher; the scoring results of the proposed system are thus similar to those of human raters, and the automated scoring system can be considered useful as a scoring tool.
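
The title's two criteria suggest a selective-automation rule: accept a machine score only when every classifier agrees (unanimity) and each prediction probability is high enough (predictability), otherwise defer to a human rater. A minimal sketch with scikit-learn-style classifiers; the 0.9 threshold is an assumption:

```python
def auto_score(features, classifiers, threshold=0.9):
    """Return a machine score when all classifiers agree with high
    predicted probability; return None to route the item to a human."""
    probas = [clf.predict_proba([features])[0] for clf in classifiers]
    labels = [p.argmax() for p in probas]
    if len(set(labels)) == 1 and all(p.max() >= threshold for p in probas):
        return labels[0]   # unanimous and confident
    return None            # defer to human scoring
```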

Scoring Korean Written Responses Using English-Based Automated Computer Scoring Models and Machine Translation: A Case of Natural Selection Concept Test (영어기반 컴퓨터자동채점모델과 기계번역을 활용한 서술형 한국어 응답 채점 -자연선택개념평가 사례-)

  • Ha, Minsu
    • Journal of The Korean Association For Science Education, v.36 no.3, pp.389-397, 2016
  • This study aims to test the efficacy of English-based automated computer scoring models and machine translation for scoring Korean college students' written responses to natural selection concept items. To this end, I collected 128 pre-service biology teachers' written responses to a four-item instrument (512 written responses in total). Machine translation software (Google Translate) translated both the original responses and spell-corrected responses. The presence or absence of five scientific ideas and three naïve ideas in the translated responses was judged by the automated computer scoring models (EvoGrader). The computer-scored results (4096 predictions) were compared with expert-scored results. No significant differences in average scores, or in statistical results using average scores, were found between the computer-scored and expert-scored results. The Pearson correlation coefficients of composite scores for each student between computer scoring and expert scoring were 0.848 for scientific ideas and 0.776 for naïve ideas. The inter-rater reliability indices (Cohen's kappa) between computer scoring and expert scoring for linguistically simple concepts (e.g., variation, competition, and limited resources) were over 0.8. These findings reveal that English-based automated computer scoring models combined with machine translation can be a promising method for scoring Korean college students' written responses to natural selection concept items.
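
The pipeline (machine translation followed by an English-trained scorer) and the agreement check can be sketched as follows; `translate_ko_en` and `evograder_predict` are placeholders for the Google Translate call and the EvoGrader models, and the score vectors below are made up:

```python
from sklearn.metrics import cohen_kappa_score

def score_korean_responses(responses, translate_ko_en, evograder_predict):
    # Translate each Korean response, then apply the English-based model.
    return [evograder_predict(translate_ko_en(r)) for r in responses]

# Agreement with expert scoring, as reported in the paper (Cohen's kappa)
machine = [1, 0, 1, 1, 0, 1]   # presence/absence of one scientific idea
experts = [1, 0, 1, 0, 0, 1]
print(cohen_kappa_score(machine, experts))
```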

Context-sensitive Word Error Detection and Correction for Automatic Scoring System of English Writing (영작문 자동 채점 시스템을 위한 문맥 고려 단어 오류 검사기)

  • Choi, Yong Seok; Lee, Kong Joo
    • KIPS Transactions on Software and Data Engineering, v.4 no.1, pp.45-56, 2015
  • In this paper, we present a method that can detect context-sensitive word errors and generate correction candidates. Spelling error detection is one of the most widespread research topics; however, the approach proposed in this paper is tailored to an automated English scoring system. A common strategy in context-sensitive word error detection is to use a pre-defined confusion set to generate correction candidates. We automatically generate a confusion set in order to reflect the characteristics of sentences written by second-language learners. We also define a class of word errors that cannot be detected by a conventional grammar checker because of part-of-speech ambiguity, and propose how to detect such errors and generate correction candidates for them. An experiment is performed on English writing composed by junior high school students whose mother tongue is Korean. The F1 score of the proposed method is 70.48%, which shows that our method is promising compared to the current state of the art.
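
Confusion-set-based detection can be sketched as follows: for each word with a confusion set, rescore the sentence with every candidate substituted in and flag the word when some candidate fits the context better. Here the confusion sets are hand-picked examples (the paper generates them automatically) and `context_score` stands in for whatever context model is used:

```python
CONFUSION_SETS = {
    "their": {"their", "there", "they're"},
    "quite": {"quite", "quiet"},
}

def detect_and_correct(tokens, context_score):
    """Return (position, error, candidate) triples for suspected
    context-sensitive word errors."""
    corrections = []
    for i, word in enumerate(tokens):
        candidates = CONFUSION_SETS.get(word.lower())
        if not candidates:
            continue
        best = max(candidates,
                   key=lambda c: context_score(tokens[:i] + [c] + tokens[i+1:]))
        if best != word.lower():
            corrections.append((i, word, best))
    return corrections
```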

An English Essay Scoring System Based on Grammaticality and Lexical Cohesion (문법성과 어휘 응집성 기반의 영어 작문 평가 시스템)

  • Kim, Dong-Sung; Kim, Sang-Chul; Chae, Hee-Rahk
    • Korean Journal of Cognitive Science, v.19 no.3, pp.223-255, 2008
  • In this paper, we introduce an automatic system for scoring English essays. The system comprises three main components: a spelling checker, a grammar checker, and a lexical cohesion checker. We have used resources such as WordNet, the Link Grammar parser, and Roget's thesaurus for these components. The usefulness of an automatic scoring system depends on its reliability. To measure reliability, we compared the results of automatic scoring with those of manual scoring, on the basis of Kappa statistics and the multi-facet Rasch model. The statistical data obtained from the comparison showed that the scoring system is as reliable as professional human graders. The system deals with textual units rather than sentential units and checks not only the formal properties of a text but also its contents.
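
WordNet-based lexical cohesion between content words can be approximated with path similarity. A minimal sketch using NLTK's WordNet interface (requires `nltk.download('wordnet')`); taking the maximum noun-sense similarity is an illustrative choice, not necessarily the paper's:

```python
from nltk.corpus import wordnet as wn

def cohesion(word_a, word_b):
    """Maximum WordNet path similarity over the noun senses of two words."""
    scores = [s1.path_similarity(s2)
              for s1 in wn.synsets(word_a, pos=wn.NOUN)
              for s2 in wn.synsets(word_b, pos=wn.NOUN)]
    scores = [s for s in scores if s is not None]
    return max(scores, default=0.0)

print(cohesion("dog", "cat"))      # related nouns: relatively high
print(cohesion("dog", "algebra"))  # unrelated nouns: near zero
```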


Swear Word Detection and Unknown Word Classification for Automatic English Writing Assessment (영작문 자동평가를 위한 비속어 검출과 미등록어 분류)

  • Lee, Gyoung; Kim, Sung Gwon; Lee, Kong Joo
    • KIPS Transactions on Software and Data Engineering, v.3 no.9, pp.381-388, 2014
  • In this paper, we deal with implementation issues of an unknown word classifier for a middle-school level English writing test. We define the types of unknown words occurring in English text and discuss the detection process for unknown words. We also define the types of swear words occurring in students' English writing and suggest how to handle them. We implement an unknown word classifier with a swear word detection module as part of an automatic English writing scoring system. Through experiments with actual test data, we evaluate the accuracy of the unknown word classifier as well as the swear word detection module.
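
The classification step can be sketched as a cascade of lexicon checks. All resources here are placeholders: `dictionary` and `swear_lexicon` are word sets, and `spell_candidates` returns plausible spelling corrections; the category names are illustrative:

```python
def classify_token(token, dictionary, swear_lexicon, spell_candidates):
    """Classify a token for scoring purposes."""
    word = token.lower()
    if word in swear_lexicon:
        return "SWEAR"              # handled separately by the scorer
    if word in dictionary:
        return "KNOWN"
    if spell_candidates(word):
        return "SPELLING_ERROR"     # close to a dictionary word
    return "UNKNOWN"                # e.g. proper noun or neologism
```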

Proposal of Automated Essay Scoring Method based on Deep-Learning (딥러닝 기반의 에세이 자동 평가 방법 제안)

  • Kim, Yujin; Park, Chanjun; Lee, Seolhwa; Lim, HeuiSeok
    • Annual Conference on Human and Language Technology, 2021.10a, pp.384-390, 2021
  • This paper proposes a new deep-learning-based evaluation methodology for automated English essay scoring. Automated essay evaluation is performed through an evaluation process consisting of lexical, morphological, syntactic, and semantic stages. To verify the objectivity and reliability of the proposed method, we analyzed the correlation between human-assigned scores and the scores of each stage; the results show that the proposed evaluation method is meaningful.
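
The verification step (correlating human scores with each stage's scores) can be sketched with SciPy; the score vectors below are made-up placeholders:

```python
from scipy.stats import pearsonr

human  = [3.0, 4.5, 2.0, 5.0, 3.5]           # hypothetical human scores
stages = {
    "lexical":   [2.8, 4.2, 2.1, 4.8, 3.6],  # hypothetical stage scores
    "syntactic": [3.2, 4.0, 2.5, 4.9, 3.1],
}
for name, scores in stages.items():
    r, p = pearsonr(human, scores)
    print(f"{name}: r={r:.3f}, p={p:.3f}")
```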


Accuracy Improvement of an Automated Scoring System through Removing Duplicately Reported Errors (영작문 자동 채점 시스템에서의 중복 보고 오류 제거를 통한 성능 향상)

  • Lee, Hyun-Ah; Kim, Jee-Eun; Lee, Kong-Joo
    • The KIPS Transactions: Part B, v.16B no.2, pp.173-180, 2009
  • The purpose of developing an automated scoring system for English composition is to score English writing tests and give diagnostic feedback to test-takers without human effort. The system developed through our research detects grammatical errors in a single sentence at the morphological, syntactic, and semantic stages, respectively, and these errors are factored into the final score. The error detecting stages are independent of one another, which causes identical errors to be reported more than once, with different labels at different stages. These duplicated errors hinder the calculation of an accurate score. This paper presents a solution that detects duplicated errors and improves the accuracy of the final score by eliminating all but one of them.
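
Duplicate removal can be sketched as keeping one report per overlapping text span. The precedence rule here (earlier stage wins) is an assumption for illustration, not the paper's exact criterion:

```python
def remove_duplicate_errors(errors):
    """errors: (start, end, stage, label) tuples from the morphological,
    syntactic and semantic stages; keep one report per overlapping span."""
    kept = []
    for err in sorted(errors, key=lambda e: (e[0], e[2])):
        start, end = err[0], err[1]
        if all(end <= k[0] or start >= k[1] for k in kept):  # no overlap
            kept.append(err)
    return kept

errors = [(0, 4, 1, "AGR_MORPH"), (0, 4, 2, "AGR_SYN"), (10, 14, 3, "SEM")]
print(remove_duplicate_errors(errors))  # the syntactic duplicate is dropped
```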