• Title/Summary/Keyword: 수학능력시험


Test of randomness for answers arrangement in 2017 College Scholastic Ability Test (2017학년도 대학수학능력시험 영역별 정답배열 임의성 검정)

  • Ahn, Sojin;Lee, Jae Eun;Jang, Dae-Heung
    • The Korean Journal of Applied Statistics
    • /
    • v.30 no.4
    • /
    • pp.503-512
    • /
    • 2017
  • In a multiple-choice test, the positions of the correct answers should be spread evenly across the questions in order to minimize the influence of test takers' answering tendencies, i.e., preferences for specific answer positions. Scores on a test whose correct answers cluster in particular positions would not accurately reflect the academic aptitude of examinees who do not know the correct answers but mark them with a positional bias. In this paper, we test the randomness of the correct-answer positions in the 2017 College Scholastic Ability Test (CSAT) using the Bartels rank test, the Wald-Wolfowitz runs test, the turning point test, the Cox-Stuart trend test, the difference sign test, and the Mann-Kendall rank test. We also test the independence between the position of the correct answer and the score allocated to each question, since a dependence could lead to overestimating test takers who prefer specific positions when marking answers.
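As an illustration of the runs-test idea applied to an answer key, the minimal sketch below dichotomizes answer positions at their median and applies the Wald-Wolfowitz runs test. The answer key is hypothetical, not the actual 2017 CSAT key, and this is only one of the six tests the paper uses.

```python
import math

def runs_test(seq):
    """Wald-Wolfowitz runs test for randomness on a binary sequence.

    Returns (number_of_runs, z_statistic). Under the null hypothesis of
    random ordering, z is approximately standard normal for long sequences.
    """
    n1 = sum(1 for x in seq if x == 1)
    n2 = len(seq) - n1
    # A run is a maximal block of identical symbols.
    runs = 1 + sum(1 for a, b in zip(seq, seq[1:]) if a != b)
    mean = 2 * n1 * n2 / (n1 + n2) + 1
    var = (2 * n1 * n2 * (2 * n1 * n2 - n1 - n2)) / (
        (n1 + n2) ** 2 * (n1 + n2 - 1))
    z = (runs - mean) / math.sqrt(var)
    return runs, z

# Hypothetical answer key: positions 1-5 for 30 five-choice items.
key = [3, 1, 5, 2, 4, 1, 3, 5, 2, 4, 2, 5, 1, 4, 3,
       5, 2, 4, 1, 3, 4, 1, 5, 3, 2, 4, 2, 1, 5, 3]
median = sorted(key)[len(key) // 2]
binary = [1 if x >= median else 0 for x in key]
runs, z = runs_test(binary)
print(runs, round(z, 3))
```

A large positive z indicates more alternation between low and high positions than chance would produce; a large negative z indicates clustering. Either extreme is evidence against random placement.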

An Analysis on the Correlation between the College Scholastic Ability Test and the Mathematics Level Assessment (대학수학능력시험과 수학진단평가의 상관관계 분석)

  • Son, Min Ji;Pyo, Yong-Soo
    • East Asian mathematical journal
    • /
    • v.30 no.4
    • /
    • pp.493-507
    • /
    • 2014
  • The purpose of this thesis is to examine the relationship between the College Scholastic Ability Test (CSAT) and the Mathematics Level Assessment (MLA) conducted at P University. Grades in mathematics B-type of the CSAT and scores on the MLA are highly correlated across grades, subject areas, and college entrance types. However, many students showed substantial differences between their results on the two tests. Based on these analyses, we suggest plans for improving the implementation of the MLA and the teaching-learning methods for College General Mathematics.

Item Analysis of the 'Basic course of Information Technology' - Vocational Education Section in the College Scholastic Ability Test- ('정보 기술 기초' 교과의 문항 분석 - 대학수학능력시험 직업탐구영역을 중심으로-)

  • Kim, Jong-Hye;Kim, Ji-Hyun;Kim, Yong;Lee, Won-Gyu
    • The Journal of Korean Association of Computer Education
    • /
    • v.10 no.4
    • /
    • pp.39-49
    • /
    • 2007
  • The purpose of this study is to provide analysis resources for developing high-standard questions by analyzing the item characteristics and item usability of the 'Basic Course of Information Technology' section of the College Scholastic Ability Test. For the qualitative analysis, this paper examined content validity; for the quantitative analysis, it examined item difficulty, item discrimination, item reliability, and distracters. Analysis of the 2005 and 2006 tests showed that questions were drawn evenly from the educational contents, but the standard of the questions needed revision. High-quality content for the Vocational Education Section must be developed to meet CSAT standards. It is therefore necessary to develop questions with a range of difficulties and acceptable discrimination.

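The item difficulty and discrimination indices mentioned in the abstract can be sketched in classical test theory terms as follows. The response data and the choice of the upper-lower 27% discrimination index are illustrative assumptions, not details taken from the study.

```python
def item_stats(responses, key):
    """Classical test theory statistics for each item.

    responses: one answer list per examinee; key: the correct answers.
    Returns (difficulty, discrimination) per item, where difficulty is
    the proportion answering correctly and discrimination is the
    upper-lower 27% index (p_upper - p_lower).
    """
    n = len(responses)
    scored = [[int(ans == k) for ans, k in zip(r, key)] for r in responses]
    # Rank examinees by total score to form the upper and lower groups.
    order = sorted(range(n), key=lambda j: sum(scored[j]), reverse=True)
    g = max(1, round(0.27 * n))
    upper, lower = order[:g], order[-g:]
    stats = []
    for i in range(len(key)):
        p = sum(row[i] for row in scored) / n
        disc = (sum(scored[j][i] for j in upper)
                - sum(scored[j][i] for j in lower)) / g
        stats.append((p, disc))
    return stats

# Hypothetical responses from eight examinees on a three-item test.
key = [1, 2, 3]
responses = [[1, 2, 3], [1, 2, 3], [1, 2, 1], [1, 3, 3],
             [1, 1, 1], [2, 2, 1], [3, 1, 3], [2, 3, 2]]
stats = item_stats(responses, key)
for i, (p, d) in enumerate(stats, 1):
    print(f"item {i}: difficulty {p:.3f}, discrimination {d:.2f}")
```

A difficulty near 0 or 1 means almost no one or almost everyone answers correctly; a low discrimination means the item fails to separate high and low scorers, the kind of item such analyses flag for revision.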

Verification of educational goal of reading area in Korean SAT through natural language processing techniques (대학수학능력시험 독서 영역의 교육 목표를 위한 자연어처리 기법을 통한 검증)

  • Lee, Soomin;Kim, Gyeongmin;Lim, Heuiseok
    • Journal of the Korea Convergence Society
    • /
    • v.13 no.1
    • /
    • pp.81-88
    • /
    • 2022
  • The major educational goal of the reading part, which occupies an important portion of the Korean language section of the Korean SAT, is to evaluate whether a given text has been fully understood. The questions in the exam must therefore be solvable solely from the given text. In this paper we developed a dataset based on the reading part of the Korean SAT to evaluate whether a deep learning language model can classify a given statement about a passage as true or false, a binary classification task in NLP. Using only the passages in the dataset, most language models exceeded the human-performance baseline F1 score of 59.2%; KoELECTRA scored 62.49% in our experiment. We also showed that a structural limitation of the language models can be eased by adjusting the data preprocessing.
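The F1 score used to compare the models against the human baseline can be computed with a minimal sketch; the labels below are hypothetical true/false judgments, not the paper's dataset.

```python
def f1_score(y_true, y_pred):
    """Binary F1: harmonic mean of precision and recall."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    if tp == 0:
        return 0.0
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

# Hypothetical gold labels and model predictions for ten statements
# about a reading passage (1 = true, 0 = false).
y_true = [1, 0, 1, 1, 0, 1, 0, 0, 1, 1]
y_pred = [1, 0, 0, 1, 0, 1, 1, 0, 1, 0]
print(round(f1_score(y_true, y_pred), 3))
```

Unlike raw accuracy, F1 penalizes a model that inflates its score by over-predicting one class, which matters when the true/false labels are imbalanced.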

Item Analysis of the Mathematics Section in the 2nd-7th Experimental Assessments of the College Scholastic Ability Test (대학수학능력 시험의 2~7차 실험평가 수리영역에 관한 문항분석)

  • 임형
    • The Mathematical Education
    • /
    • v.32 no.3
    • /
    • pp.220-243
    • /
    • 1993
  • Items in the mathematics section of the 2nd through 7th experimental assessments were analyzed using classical test theory and item response theory. The tests from the 2nd through the 7th administration were too difficult for the examinees and became progressively more difficult with each administration. The item analysis of the 5th experimental assessment indicated that items 10 and 15 needed review, and that of the 7th experimental assessment indicated that items 11 and 17 needed review.


Analyzing Mathematical Performances of ChatGPT: Focusing on the Solution of National Assessment of Educational Achievement and the College Scholastic Ability Test (ChatGPT의 수학적 성능 분석: 국가수준 학업성취도 평가 및 대학수학능력시험 수학 문제 풀이를 중심으로)

  • Kwon, Oh Nam;Oh, Se Jun;Yoon, Jungeun;Lee, Kyungwon;Shin, Byoung Chul;Jung, Won
    • Communications of Mathematical Education
    • /
    • v.37 no.2
    • /
    • pp.233-256
    • /
    • 2023
  • This study conducted foundational research on ways to use ChatGPT in mathematics education by analyzing ChatGPT's responses to questions from the National Assessment of Educational Achievement (NAEA) and the College Scholastic Ability Test (CSAT). ChatGPT, a generative artificial intelligence model, has gained attention in various fields, and demand for its use in education is growing as its user base rapidly increases, yet to the best of our knowledge very few educational studies utilizing ChatGPT have been reported. We analyzed ChatGPT 3.5's responses to three years of NAEA and CSAT questions, categorizing them by the percentage of correct answers, the accuracy of the solution process, and the types of errors. ChatGPT's correct-answer rates were 37.1% on the NAEA questions and 15.97% on the CSAT questions, and the accuracy of its solution process was rated 3.44 for the NAEA and 2.49 for the CSAT. Errors in solving the mathematics problems were classified as procedural or functional: procedural errors are mistakes in carrying an expression to the next step or in calculation, while functional errors concern how ChatGPT recognizes, judges, and outputs text. This analysis suggests that the percentage of correct answers alone should not be the criterion for assessing ChatGPT's mathematical performance; the accuracy of the solution process and the types of errors should be considered as well.
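The study's three-way evaluation (correct-answer rate, process-accuracy rating, and error type) can be sketched as a simple tally. The grading records and the rating scale below are assumptions for illustration, not the study's actual data or rubric.

```python
from collections import Counter

# Hypothetical grading records: each response carries a correctness flag,
# a process-accuracy rating (a 0-4 scale is assumed here), and an error
# type ("procedural" or "functional") when the answer is wrong.
records = [
    {"correct": True,  "process": 4, "error": None},
    {"correct": False, "process": 3, "error": "procedural"},
    {"correct": False, "process": 1, "error": "functional"},
    {"correct": True,  "process": 4, "error": None},
    {"correct": False, "process": 2, "error": "procedural"},
]

correct_rate = sum(r["correct"] for r in records) / len(records)
mean_process = sum(r["process"] for r in records) / len(records)
errors = Counter(r["error"] for r in records if r["error"])

print(f"correct rate: {correct_rate:.1%}")
print(f"mean process accuracy: {mean_process:.2f}")
print(dict(errors))
```

Note how the second record earns a high process rating despite a wrong final answer: this is exactly the case the authors argue is lost when only the percentage of correct answers is reported.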