• Title/Summary/Keyword: Automated English Scoring System

Development of automated scoring system for English writing (영작문 자동 채점 시스템 개발 연구)

  • Jin, Kyung-Ae
    • English Language & Literature Teaching / v.13 no.1 / pp.235-259 / 2007
  • The purpose of the present study is to develop a prototype automated scoring system for English writing. The system was developed to score the writing of Korean middle school students. The following procedures were applied in developing the automated scoring system. First, established automated essay scoring systems in other countries were reviewed and analyzed, which provided guidance for developing a new sentence-level automated scoring system for Korean EFL students. Second, a knowledge base for natural language processing, including a lexicon, a grammar, and WordNet, was built, along with an error corpus of English writing by Korean middle school students. The error corpus was compiled from a paper-and-pencil test administered to 589 third-year middle school students. This study provides suggestions for the successful introduction of an automated scoring system in Korea. The system developed in this study should be continuously upgraded to improve its scoring accuracy, and it is suggested that a system capable of evaluating whole English essays, not only sentence-level writing, be developed. Although the system still needs further improvement in precision, it represents a successful introduction of a sentence-level automated scoring system for English writing in Korea.
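
The knowledge base described above includes WordNet alongside a lexicon and grammar. As a rough illustration only (not the system built in the paper), the sketch below shows how a WordNet lookup via NLTK might be used to judge whether a student's word choice is lexically related to an expected word; the function name, threshold, and word pairs are invented for the example.

```python
# Illustrative sketch only: a WordNet-based lexical check, not the system described above.
# Requires: pip install nltk, then nltk.download('wordnet') once.
from nltk.corpus import wordnet as wn

def are_semantically_related(word_a: str, word_b: str, threshold: float = 0.3) -> bool:
    """Return True if any sense pair of the two words exceeds a path-similarity threshold."""
    best = 0.0
    for syn_a in wn.synsets(word_a):
        for syn_b in wn.synsets(word_b):
            sim = syn_a.path_similarity(syn_b)
            if sim is not None and sim > best:
                best = sim
    return best >= threshold

if __name__ == "__main__":
    # e.g., accept "pupil" where the model answer expects "student"
    print(are_semantically_related("pupil", "student"))   # likely True
    print(are_semantically_related("pupil", "banana"))    # likely False
```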

Developing an Automated English Sentence Scoring System for Middle-school Level Writing Test by Using Machine Learning Techniques (기계학습을 이용한 중등 수준의 단문형 영어 작문 자동 채점 시스템 구현)

  • Lee, Gyoung Ho;Lee, Kong Joo
    • Journal of KIISE / v.41 no.11 / pp.911-920 / 2014
  • In this paper, we introduce an automatic scoring system for a middle-school level writing test based on machine learning techniques. We discuss the overall process of building an automatic English writing scoring system and the features it uses. A "concept answer", which represents the abstract meaning of a text, is newly introduced in order to evaluate the elaboration of a student's answer. In this work, multiple machine learning algorithms are adopted for scoring English writing. We propose a decision process, "optimal combination", which combines the multiple outputs of the machine learning algorithms into a single final output in order to improve the performance of the automatic scoring. Through experiments with actual test data, we evaluate the performance of the overall automated English writing scoring system.
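
The abstract describes combining the outputs of multiple machine learning algorithms into a single final score. The paper's "optimal combination" procedure is not specified here, so the sketch below only illustrates the general idea with scikit-learn classifiers whose predicted class probabilities are averaged; the classifier choices and toy data are assumptions for the example.

```python
# Illustrative sketch: combining several classifiers' outputs into one score decision.
# This is NOT the paper's "optimal combination" procedure, only the general idea.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.naive_bayes import GaussianNB
from sklearn.ensemble import RandomForestClassifier

def combined_score(X_train, y_train, X_test):
    """Average the class-probability outputs of several classifiers and take the argmax."""
    models = [
        LogisticRegression(max_iter=1000),
        GaussianNB(),
        RandomForestClassifier(n_estimators=100, random_state=0),
    ]
    probas = []
    for model in models:
        model.fit(X_train, y_train)
        probas.append(model.predict_proba(X_test))
    avg = np.mean(probas, axis=0)          # simple unweighted combination
    return avg.argmax(axis=1)              # final score label per answer

if __name__ == "__main__":
    # Toy feature vectors standing in for extracted writing features.
    rng = np.random.default_rng(0)
    X = rng.normal(size=(40, 5))
    y = (X[:, 0] > 0).astype(int)          # hypothetical 0/1 score labels
    print(combined_score(X[:30], y[:30], X[30:]))
```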

Implementing Automated English Error Detecting and Scoring System for Junior High School Students (중학생 영작문 실력 향상을 위한 자동 문법 채점 시스템 구축)

  • Kim, Jee-Eun;Lee, Kong-Joo
    • The Journal of the Korea Contents Association / v.7 no.5 / pp.36-46 / 2007
  • This paper presents an automated English scoring system designed to help non-native speakers of English, Korean-speaking learners in particular. The system is developed to help third-grade students in junior high school improve their English grammar skills. Without human intervention, the system identifies grammar errors in English sentences, provides feedback on the detected errors, and scores the sentences. Detecting grammar errors requires implementing a special type of rule in addition to the rules used to parse grammatical sentences: error production rules are implemented to analyze ungrammatical sentences and recognize syntactic errors. These rules are collected from junior high school textbooks and real student test data. When such a rule fires, the error is detected, the corresponding error flag is set, and the system continues the parsing process without failure. As the final step of the process, the system scores the student sentences based on the errors detected. The system is evaluated with real English test data produced by the students and the answers provided by human teachers.
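
The abstract describes error production rules that fire on ungrammatical input, set error flags, and let processing continue so the sentence can still be scored. The sketch below is a toy stand-in for that idea using simple pattern rules rather than a real parser; the rule names, patterns, and scoring formula are invented for the illustration.

```python
# Illustrative sketch: firing "error rules" that set flags but do not stop processing.
# Toy regex rules stand in for the paper's error production rules inside a real parser.
import re

ERROR_RULES = [
    ("SUBJ_VERB_AGR", re.compile(r"\bhe (go|have|do)\b", re.I),
     "Third-person singular subject needs -s/-es on the verb."),
    ("DOUBLE_ARTICLE", re.compile(r"\b(a|an|the) (a|an|the)\b", re.I),
     "Two consecutive articles."),
    ("MISSING_COPULA", re.compile(r"\bI happy\b", re.I),
     "Missing copula 'am'."),
]

def detect_errors(sentence: str):
    """Fire every matching error rule, collect flags, and keep going (no parse failure)."""
    flags = []
    for name, pattern, feedback in ERROR_RULES:
        if pattern.search(sentence):
            flags.append((name, feedback))
    return flags

def score(sentence: str, max_score: int = 5):
    """Deduct one point per detected error (a made-up scoring rule for the example)."""
    errors = detect_errors(sentence)
    return max(max_score - len(errors), 0), errors

if __name__ == "__main__":
    print(score("He go to the the school every day."))
```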

Automated Scoring System for Korean Short-Answer Questions Using Predictability and Unanimity (기계학습 분류기의 예측확률과 만장일치를 이용한 한국어 서답형 문항 자동채점 시스템)

  • Cheon, Min-Ah;Kim, Chang-Hyun;Kim, Jae-Hoon;Noh, Eun-Hee;Sung, Kyung-Hee;Song, Mi-Young
    • KIPS Transactions on Software and Data Engineering / v.5 no.11 / pp.527-534 / 2016
  • The emerging information society requires creative thinking based on problem-solving skills and comprehensive thinking rather than simple memorization. Accordingly, the Korean curriculum has moved in the direction of creative thinking by increasing the number of short-answer questions, which can assess the overall thinking of students. However, the scoring results are somewhat inconsistent because scoring short-answer questions depends on the subjective judgment of human raters. To alleviate this problem, automated scoring systems using machine learning have been used as scoring tools overseas. Linguistically, Korean and English differ greatly in sentence structure, so automated scoring systems designed for English cannot be applied directly to Korean. In this paper, we introduce an automated scoring system for Korean short-answer questions using predictability and unanimity. We also verify the practicality of the automated scoring system through the correlation between its results and those of human raters. In our experiments, the proposed system is evaluated on constructed-response items in Korean language, social studies, and science from the National Assessment of Educational Achievement. The analysis used Pearson correlation coefficients and the Kappa coefficient. The results showed a strong positive correlation, with all coefficients at 0.7 or higher. Thus, the scoring results of the proposed system are similar to those of human raters, and the automated scoring system can be considered useful as a scoring tool.
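
The evaluation above reports Pearson correlation coefficients and a Kappa coefficient between the automated and human scores. A minimal sketch of how such agreement figures can be computed is shown below, using SciPy and scikit-learn; the score values are hypothetical, and unweighted Cohen's kappa is an assumption, since the abstract does not say which kappa variant was used.

```python
# Illustrative sketch: measuring agreement between automated and human scores,
# as in the paper's evaluation (Pearson correlation and the Kappa coefficient).
from scipy.stats import pearsonr
from sklearn.metrics import cohen_kappa_score

# Hypothetical scores for ten constructed-response answers (0-3 points each).
human_scores     = [3, 2, 2, 0, 1, 3, 2, 1, 0, 3]
automated_scores = [3, 2, 1, 0, 1, 3, 2, 2, 0, 3]

r, p_value = pearsonr(human_scores, automated_scores)
kappa = cohen_kappa_score(human_scores, automated_scores)

print(f"Pearson r = {r:.3f} (p = {p_value:.3f})")
print(f"Cohen's kappa = {kappa:.3f}")
# Values of roughly 0.7 or higher, as reported in the abstract, indicate strong agreement.
```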

A comparison of grammatical error detection techniques for an automated English scoring system

  • Lee, Songwook;Lee, Kong Joo
    • Journal of Advanced Marine Engineering and Technology / v.37 no.7 / pp.760-770 / 2013
  • Detecting grammatical errors in text is a long-standing application. In this paper, we compare the performance of two grammatical error detection techniques, which are implemented as sub-modules of an automated English scoring system. One uses a full syntactic parser, which has not only grammatical rules but also extra-grammatical rules in order to detect syntactic errors while parsing. The other uses a finite state machine, which can identify errors covering only a small range of the input. In order to compare the two approaches, grammatical errors are divided into three groups: errors that can be handled by both approaches, errors that can be handled only by a full parser, and errors that can be handled only by a finite state machine. By doing this, we can identify the strengths and weaknesses of each approach. The evaluation results show that the full parsing approach can detect more errors than the finite state machine, while the accuracy of the former is lower than that of the latter. We conclude that a full parser is suitable for detecting grammatical errors with long-distance dependencies, whereas a finite state machine works well on sentences with multiple grammatical errors.
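
The abstract contrasts a full parser with a finite state machine that identifies errors covering only a small range of the input. The sketch below illustrates that local, FSM-style detection with a two-state machine for a single error type (an indefinite article before a plural noun); the tiny lexicon and error label are invented and do not reproduce the paper's machines.

```python
# Illustrative sketch: a tiny finite state machine that flags a local error
# (indefinite article followed by a plural noun) by scanning the token stream.
# This only shows the "small range of input" nature of FSM detection; the
# paper's actual machines and tag set are not reproduced here.

PLURAL_NOUNS = {"books", "apples", "students", "cars"}   # toy lexicon (assumption)

def fsm_article_error(tokens):
    """States: 0 = start, 1 = just saw 'a'/'an'. Report an error on 'a/an' + plural noun."""
    errors = []
    state = 0
    for i, tok in enumerate(tokens):
        word = tok.lower()
        if state == 0:
            if word in ("a", "an"):
                state = 1
        elif state == 1:
            if word in PLURAL_NOUNS:
                errors.append((i - 1, i, "ARTICLE_PLURAL", "'a/an' used before a plural noun"))
            state = 0
    return errors

if __name__ == "__main__":
    print(fsm_article_error("I bought a books yesterday".split()))
```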

Building an Automated Scoring System for a Single English Sentences (단문형의 영작문 자동 채점 시스템 구축)

  • Kim, Jee-Eun;Lee, Kong-Joo;Jin, Kyung-Ae
    • The KIPS Transactions:PartB / v.14B no.3 s.113 / pp.223-230 / 2007
  • The purpose of developing an automated scoring system for English composition is to score tests of English sentence writing and to give feedback on them without human effort. This paper presents an automated system to score English composition whose input is a single sentence, not an essay. Dealing with a single sentence as input has advantages for comparing the input with the answers given by human teachers and for giving detailed feedback to the test takers. The system has been developed and tested with real test data collected through English tests given to third-grade students in junior high school. Two steps are required to score a single sentence. The first is analyzing the input sentence in order to detect possible errors, such as spelling errors, syntactic errors, and so on. The second is comparing the input sentence with the given answer to identify the differences as errors. The results produced by the system were then compared with those provided by human raters.
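
The second step described above compares the input sentence with the given answer and treats the differences as errors. The sketch below illustrates one simple way such a comparison could work, aligning the student sentence with a model answer at the token level using Python's difflib; this is not the paper's comparison procedure, and the example sentences are invented.

```python
# Illustrative sketch: comparing a student sentence with a model answer and
# reporting token-level differences as potential errors (the second step above).
from difflib import SequenceMatcher

def diff_against_answer(student: str, answer: str):
    """Align student tokens with answer tokens and return the non-matching spans."""
    s_toks, a_toks = student.lower().split(), answer.lower().split()
    matcher = SequenceMatcher(a=a_toks, b=s_toks)
    differences = []
    for op, a1, a2, b1, b2 in matcher.get_opcodes():
        if op != "equal":
            differences.append((op, " ".join(a_toks[a1:a2]), " ".join(s_toks[b1:b2])))
    return differences

if __name__ == "__main__":
    answer  = "She has been studying English for three years"
    student = "She has studying English during three years"
    for op, expected, found in diff_against_answer(student, answer):
        print(f"{op:8s} expected: '{expected}'  found: '{found}'")
```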

Integration of Computerized Feedback to Improve Interactive Use of Written Feedback in English Writing Class

  • CHOI, Jaeho
    • Educational Technology International / v.12 no.2 / pp.71-94 / 2011
  • How can an automated essay scoring (AES) program, which provides feedback on essays, be a formative tool for improving ESL writing? In spite of the increasing demand for English writing proficiency, English writing instruction has not been effective for teaching and learning because of a lack of timely and accurate feedback. In this context, AES has been gaining the attention of educators and scholars in ESL/EFL writing education as a possible solution, because it can provide consistent and prompt feedback for student writers. This experimental study examined the impact of different types of feedback in a college ESL writing program using the Criterion AES system. The results reveal the positive impact of AES in a college-level ESL course and differences between the teacher's feedback and the AES feedback. The findings suggest that AES can be effectively integrated into ESL writing instruction as a formative assessment tool.

Accuracy Improvement of an Automated Scoring System through Removing Duplicately Reported Errors (영작문 자동 채점 시스템에서의 중복 보고 오류 제거를 통한 성능 향상)

  • Lee, Hyun-Ah;Kim, Jee-Eun;Lee, Kong-Joo
    • The KIPS Transactions:PartB / v.16B no.2 / pp.173-180 / 2009
  • The purpose of developing an automated scoring system for English composition is to score English writing tests and to give diagnostic feedback to the test takers without human effort. The system developed through our research detects grammatical errors in a single sentence at the morphological, syntactic, and semantic stages, respectively, and those errors are reflected in the final score. The error detection stages are independent of one another, which causes identical errors to be reported more than once with different labels at different stages. These duplicated errors hinder the calculation of an accurate score. This paper presents a method for detecting the duplicated errors and improving the accuracy of the final score by eliminating the redundant reports.
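
The abstract describes identical errors being reported more than once, with different labels, by independent analysis stages. The sketch below shows one simple way duplicates keyed by the same text span could be removed; the stage-priority rule and error tuples are assumptions for the example, not the paper's elimination criterion.

```python
# Illustrative sketch: removing duplicate error reports that cover the same text span
# but were produced with different labels at different analysis stages.
# The stage priority below is an assumption for the example, not the paper's rule.

STAGE_PRIORITY = {"semantic": 3, "syntactic": 2, "morphological": 1}

def remove_duplicate_errors(errors):
    """Keep one error per (start, end) span, preferring the highest-priority stage."""
    best = {}
    for err in errors:                      # err = (start, end, stage, label)
        span = (err[0], err[1])
        if span not in best or STAGE_PRIORITY[err[2]] > STAGE_PRIORITY[best[span][2]]:
            best[span] = err
    return sorted(best.values())

if __name__ == "__main__":
    reported = [
        (2, 3, "morphological", "WRONG_VERB_FORM"),
        (2, 3, "syntactic",     "SUBJ_VERB_AGREEMENT"),   # same span, different label
        (5, 6, "semantic",      "WORD_CHOICE"),
    ]
    for err in remove_duplicate_errors(reported):
        print(err)
```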

The Type of English Writing Error of Korean Undergraduate Students (한국 대학생이 보이는 영어작문 실수 유형)

  • Lim Heesuck;Park Chongwon;Nam Kichun
    • Proceedings of the KSPS conference / 2003.05a / pp.176-179 / 2003
  • This study was conducted to extract a feature set of English writing errors for designing an adequate English writing program and building an automated scoring system. The most frequently committed errors and the errors observed across levels of writing proficiency are reported, along with the correlation between error type and native speakers' rating scores.

Context-sensitive Word Error Detection and Correction for Automatic Scoring System of English Writing (영작문 자동 채점 시스템을 위한 문맥 고려 단어 오류 검사기)

  • Choi, Yong Seok;Lee, Kong Joo
    • KIPS Transactions on Software and Data Engineering / v.4 no.1 / pp.45-56 / 2015
  • In this paper, we present a method that can detect context-sensitive word errors and generate correction candidates. Spelling error detection is a widely studied research topic; however, the approach proposed in this paper is tailored to an automated English scoring system. A common strategy in context-sensitive word error detection is to use a pre-defined confusion set to generate correction candidates. We automatically generate a confusion set in order to reflect the characteristics of sentences written by second-language learners. We also define a type of word error that cannot be detected by a conventional grammar checker because of part-of-speech ambiguity, and propose how to detect such errors and generate correction candidates for them. An experiment is performed on English writing composed by junior high school students whose mother tongue is Korean. The F1 value of the proposed method is 70.48%, which shows that our method is promising compared to the current state of the art.
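
The strategy described above generates correction candidates from a confusion set and chooses among them using context. The sketch below illustrates that idea with a tiny hand-made confusion set and toy bigram counts standing in for a real language model; the paper instead builds its confusion set automatically from learner data, so everything named here is an assumption for the illustration.

```python
# Illustrative sketch: generating correction candidates from a confusion set and
# ranking them by how well they fit the immediate left context.
# The confusion set and bigram counts are invented for the example.
CONFUSION_SET = {
    "advice": {"advise"}, "advise": {"advice"},
    "affect": {"effect"}, "effect": {"affect"},
}

# Toy left-context bigram counts standing in for a real language model.
BIGRAM_COUNTS = {("an", "effect"): 50, ("an", "affect"): 1,
                 ("to", "advise"): 40, ("to", "advice"): 2}

def suggest_corrections(tokens):
    """For each confusable word, propose a candidate that fits the left context better."""
    suggestions = []
    for i, word in enumerate(tokens):
        if word not in CONFUSION_SET:
            continue
        left = tokens[i - 1] if i > 0 else "<s>"
        current = BIGRAM_COUNTS.get((left, word), 0)
        for candidate in CONFUSION_SET[word]:
            if BIGRAM_COUNTS.get((left, candidate), 0) > current:
                suggestions.append((i, word, candidate))
    return suggestions

if __name__ == "__main__":
    print(suggest_corrections("it had an affect on me".split()))
```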