• Title/Summary/Keyword: Korean College Scholastic Ability Test (KCSAT)


Analysis of the scholastic capability of ChatGPT utilizing the Korean College Scholastic Ability Test (대학입시 수능시험을 평가 도구로 적용한 ChatGPT의 학업 능력 분석)

  • Wen Huilin; Kim Jinhyuk; Han Kyonghee; Kim Shiho
    • Journal of Platform Technology, v.11 no.5, pp.72-83, 2023
  • ChatGPT, commercially launched in late 2022, has shown strong results on various professional examinations, including the US Bar Exam and the United States Medical Licensing Examination (USMLE), demonstrating its ability to pass qualifying exams in professional domains. However, further experimentation and analysis are required to assess ChatGPT's scholastic capabilities, such as logical inference and problem-solving skills. This study evaluated ChatGPT's scholastic performance using subjects from the Korean College Scholastic Ability Test (KCSAT): Korean, English, and Mathematics. The experimental results show that ChatGPT achieved a relatively high accuracy rate of 69% on the English exam but considerably lower rates of 34% and 19% in the Korean Language and Mathematics domains, respectively. By analyzing the results of the Korean language exam, the English exam, and TOPIK II, we evaluated ChatGPT's strengths and weaknesses in comprehension and logical inference. Although ChatGPT, as a generative language model, can understand and respond to general Korean, English, and Mathematics problems, it appears weak in tasks that require higher-level logical inference and complex mathematical problem solving. This study may provide simple yet accurate and effective evaluation criteria for assessing the performance of generative artificial intelligence through the analysis of KCSAT scores.

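The abstract reports per-subject accuracy rates (69% for English, 34% for Korean, 19% for Mathematics) but does not describe how the scoring was implemented. The following is a minimal, hypothetical sketch, assuming each KCSAT item is multiple choice and the model's selected option is compared against the official answer key; the function name and the sample data are illustrative and not taken from the paper.

```python
# Hypothetical sketch (not from the paper): computing per-subject accuracy
# for a model's multiple-choice answers, yielding figures comparable in form
# to the 69% / 34% / 19% rates reported in the abstract.
from collections import defaultdict

def accuracy_by_subject(items):
    """items: iterable of (subject, model_answer, correct_answer) tuples."""
    correct = defaultdict(int)
    total = defaultdict(int)
    for subject, model_answer, correct_answer in items:
        total[subject] += 1
        if model_answer == correct_answer:
            correct[subject] += 1
    # Accuracy = correct answers / total items, per subject.
    return {subject: correct[subject] / total[subject] for subject in total}

# Made-up example data, shown only to illustrate the output format.
sample = [
    ("English", "3", "3"),
    ("English", "1", "2"),
    ("Korean", "4", "4"),
    ("Mathematics", "2", "5"),
]
print(accuracy_by_subject(sample))
# e.g. {'English': 0.5, 'Korean': 1.0, 'Mathematics': 0.0}
```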