A Study on the Factors Affecting Examinee Classification Accuracy under the DINA Model: Focused on Examinee Classification Methods

  • Kim, Ji-Hyo (Division of Education, Chungnam National University)
  • Received: 2013.07.24
  • Reviewed: 2013.08.07
  • Published: 2013.08.31

Abstract

The purpose of this study was to examine the classification accuracy of the maximum likelihood (ML), maximum a posteriori (MAP), and expected a posteriori (EAP) methods under the DINA (deterministic-input, noisy "and" gate) model. To this end, a simulation study was conducted in which data were generated under various conditions: the number of attributes (K = 5, 7), the ability distribution of examinees (high-, middle-, and low-ability groups), and test length (J = 15, 30, 45). As the criterion for evaluating classification accuracy, the percent of exact agreement between the true attribute patterns (true α) and the attribute patterns estimated by the ML, MAP, and EAP methods was calculated. The main results are as follows. First, for both K = 5 and K = 7, the EAP method showed a higher average percent of exact agreement than the ML and MAP methods. Second, with the other conditions held constant, the average percent of exact agreement decreased for all three methods as the number of attributes increased. Third, for the same test length, the EAP method showed a higher average percent of exact agreement than the ML and MAP methods under each of the assumed prior ability distributions (high, middle, and low). Fourth, for the same ability distribution, the average percent of exact agreement increased for all three methods as the test length increased from 15 to 30 and 45 items. Finally, regarding the test length required to classify examinees accurately, test lengths of 30 and 45 items for 5 and 7 attributes, respectively, met the high classification-accuracy criterion set in this study.
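
For readers unfamiliar with the quantities the abstract refers to, the DINA item response function and the three classifiers can be written out explicitly. The notation below follows the common conventions of the cognitive diagnosis literature (slip parameter $s_j$, guessing parameter $g_j$, Q-matrix entry $q_{jk}$ for item $j$ and attribute $k$) and is not reproduced from the paper itself:

```latex
% DINA item response function: an examinee with attribute profile \alpha_i
% answers item j correctly with probability
P(X_{ij} = 1 \mid \boldsymbol{\alpha}_i)
  = (1 - s_j)^{\eta_{ij}} \, g_j^{\,1 - \eta_{ij}},
\qquad
\eta_{ij} = \prod_{k=1}^{K} \alpha_{ik}^{\,q_{jk}}.

% The three classification methods compared in the study:
\hat{\boldsymbol{\alpha}}_i^{\mathrm{ML}}
  = \arg\max_{\boldsymbol{\alpha}} L(\mathbf{X}_i \mid \boldsymbol{\alpha}),
\qquad
\hat{\boldsymbol{\alpha}}_i^{\mathrm{MAP}}
  = \arg\max_{\boldsymbol{\alpha}} \pi(\boldsymbol{\alpha})\,
    L(\mathbf{X}_i \mid \boldsymbol{\alpha}),

% EAP classifies each attribute separately from its marginal posterior:
\hat{\alpha}_{ik}^{\mathrm{EAP}}
  = I\!\left[\, P(\alpha_{ik} = 1 \mid \mathbf{X}_i) \ge 0.5 \,\right].
```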

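A minimal end-to-end sketch of one cell of such a simulation design (K = 5 attributes, J = 30 items) is given below. This is an illustrative Python script, not the authors' implementation; the Q-matrix construction, the item-parameter ranges, the flat prior, and the uniform distribution of true attribute profiles are all assumptions made for the sketch (the paper instead manipulates the examinees' ability distribution):

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative settings (one cell of the paper's design: K = 5, J = 30).
K, J, N = 5, 30, 1000

# Item parameters: slip s_j and guessing g_j, drawn from an assumed range.
slip = rng.uniform(0.05, 0.20, J)
guess = rng.uniform(0.05, 0.20, J)

# Assumed Q-matrix: each item requires one or two randomly chosen attributes.
Q = np.zeros((J, K), dtype=int)
for j in range(J):
    Q[j, rng.choice(K, size=rng.integers(1, 3), replace=False)] = 1

# All 2^K candidate attribute patterns; true profiles drawn uniformly here
# (the paper draws them from high/middle/low ability distributions instead).
patterns = np.array([[(m >> k) & 1 for k in range(K)] for m in range(2 ** K)])
alpha_true = patterns[rng.integers(0, 2 ** K, N)]

def p_correct(alpha):
    """DINA success probability: (1 - s_j)^eta_ij * g_j^(1 - eta_ij)."""
    eta = (alpha @ Q.T) == Q.sum(axis=1)  # mastered all required attributes?
    return np.where(eta, 1 - slip, guess)

# Simulate dichotomous responses.
X = (rng.random((N, J)) < p_correct(alpha_true)).astype(int)

# Log-likelihood of every examinee under every candidate pattern.
P = p_correct(patterns)                                   # (2^K, J)
loglik = X @ np.log(P).T + (1 - X) @ np.log(1 - P).T      # (N, 2^K)

# Flat prior for illustration; an informative prior (e.g., a high-, middle-,
# or low-ability distribution, as in the paper) would replace this.
prior = np.full(2 ** K, 1.0 / 2 ** K)
post = np.exp(loglik) * prior
post /= post.sum(axis=1, keepdims=True)

alpha_ml = patterns[np.argmax(loglik, axis=1)]            # ML
alpha_map = patterns[np.argmax(post, axis=1)]             # MAP
alpha_eap = (post @ patterns >= 0.5).astype(int)          # EAP, marginal >= .5

for name, est in (("ML", alpha_ml), ("MAP", alpha_map), ("EAP", alpha_eap)):
    exact = np.mean(np.all(est == alpha_true, axis=1))
    print(f"{name}: percent of exact agreement = {100 * exact:.1f}%")
```

Note that with the flat prior used above, MAP coincides with ML; differences between the two emerge only under an informative prior, which is the setting the study varies. EAP can differ from both in either case, because it classifies each attribute from its marginal posterior rather than selecting a single whole pattern.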
