• Title/Summary/Keyword: item discrimination

An Item Characteristic Analysis of Competency Inventory for Designer via Generalized Partial Credit Model (일반화부분점수 모형에 의한 디자인역량 검사 특성 분석)

  • LEE, Dae-Yong
    • Journal of Fisheries and Marine Sciences Education / v.27 no.6 / pp.1546-1555 / 2015
  • This study analyzed the item characteristics of the Competency Inventory for Designer (CID), which Gil (2011) developed to measure design competency. To this end, the generalized partial credit (GPC) model, a polytomous item response theory model, was applied. The findings were as follows: First, the CID is a reliable instrument for measuring design competency. Second, according to the GPC model, most CID items show low item discrimination and average item difficulty. The low item discrimination in particular raises validity concerns. To improve the quality of the CID, the reasons for its low item discrimination need to be examined.
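
As background to the model used above: the generalized partial credit model gives the probability of each ordered score category as a function of the latent trait θ, an item slope (discrimination) a, and step parameters b_k. A minimal NumPy sketch of these category probabilities, with purely illustrative parameter values (not the estimates reported for the CID):

```python
import numpy as np

def gpcm_probs(theta, a, b):
    """Generalized partial credit model: category probabilities for one item.

    theta : latent trait value
    a     : item discrimination (slope)
    b     : step parameters b_1..b_K for an item with K+1 ordered categories
    """
    # Cumulative sums of a*(theta - b_k); category 0 contributes 0 to the numerator.
    steps = np.concatenate(([0.0], a * (theta - np.asarray(b, dtype=float))))
    numerators = np.exp(np.cumsum(steps))
    return numerators / numerators.sum()

# Hypothetical 4-category item; the low slope (a = 0.4) mirrors the weak
# discrimination reported for most CID items.
print(gpcm_probs(theta=0.0, a=0.4, b=[-1.0, 0.0, 1.0]))
```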

Item Analysis using Classical Test Theory and Item Response Theory, Validity and Reliability of the Korean version of a Pressure Ulcer Prevention Knowledge (한국어판 욕창예방지식도구의 고전검사이론과 문항반응이론을 적용한 문항분석, 타당도와 신뢰도)

  • Kang, Myung Ja; Kim, Myoung Soo
    • Journal of Korean Biological Nursing Science / v.20 no.1 / pp.11-19 / 2018
  • Purpose: The purposes of this study were to perform item analysis using classical test theory (CTT) and item response theory (IRT), and to establish the validity and reliability of the Korean version of a pressure ulcer prevention knowledge instrument. Methods: The 26-item pressure ulcer prevention knowledge instrument was translated into Korean, and item analysis was conducted on the 22 items with an adequate content validity index (CVI). A total of 240 registered nurses in 2 university hospitals completed the questionnaire. Each item was analyzed using CTT and IRT with a 2-parameter logistic model. The quality of response alternatives, item difficulty, and item discrimination were evaluated. For validity and reliability, the Pearson correlation coefficient and Kuder-Richardson 20 (KR-20) were used. Results: The scale CVI was .90 (item-CVI range = .75-1.00). The total correct answer rate for this study population was relatively low at 52.5%. The quality of the response alternatives was relatively good (range = .02-.83). Item difficulty ranged from .10 to .86 under CTT and from -12.19 to 29.92 under IRT; by the IRT criteria, 12 items were of low, 2 of medium, and 8 of high difficulty. Item discrimination ranged from .04 to .57 under CTT and from .00 to 1.47 under IRT. Overall internal consistency (KR-20) was .62 and stability (test-retest) was .82. Conclusion: The instrument showed relatively weak construct validity and item discrimination according to the IRT. Cautious use of the Korean version is therefore recommended, because several response alternatives were highly attractive and internal consistency was low.
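
The CTT quantities reported above (item difficulty as proportion correct, item discrimination as corrected item-total correlation, and KR-20 internal consistency) follow standard formulas and can be computed directly from a 0/1 response matrix. A small sketch with simulated responses, not the study's data:

```python
import numpy as np

rng = np.random.default_rng(0)
X = (rng.random((240, 22)) < 0.5).astype(int)   # 240 examinees x 22 dichotomous items (simulated)

p = X.mean(axis=0)                               # CTT item difficulty: proportion correct
total = X.sum(axis=1)

# CTT item discrimination: corrected item-total (point-biserial) correlation
discrimination = np.array([
    np.corrcoef(X[:, j], total - X[:, j])[0, 1] for j in range(X.shape[1])
])

# KR-20 internal consistency for dichotomous items
k = X.shape[1]
kr20 = (k / (k - 1)) * (1 - np.sum(p * (1 - p)) / total.var(ddof=1))
print(p.round(2), discrimination.round(2), round(kr20, 2))
```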

Developing the Parent Play Interaction Observation Scale (PPIOS) for Toddlers (부모-영아 놀이 상호작용 관찰척도 개발을 위한 연구)

  • JiYeon Kim; MyoungSoon Kim; ShinHee Lee; JeongWon Park
    • Korean Journal of Childcare and Education / v.19 no.6 / pp.39-54 / 2023
  • Objective: This study aimed to develop a parent play interaction observation scale for toddlers (PPIOS-Toddler) and to analyze its item discrimination, reliability, and validity. Methods: The subjects were 97 toddlers and their mothers. The scale consists of three categories, six domains, and 22 items rated on a 5-point scale. For item discrimination, an independent t-test was conducted to test the difference in mean scores between the upper and lower groups on each item. Reliability was estimated with Cronbach's α (internal consistency), and validity was examined through content validity, correlations between subdomain and total scores, and criterion validity against the PICCOLO. Results: In the item discrimination analysis, all items showed differences between the upper and lower groups. Overall internal consistency was 0.95, with factor-level values ranging from 0.83 to 0.95. Total scores correlated substantially with the sub-factors (0.45 to 0.93) and significantly with PICCOLO total scores (0.66 to 0.86). Conclusion/Implications: The study verified the item discrimination, reliability, and validity of the Parent Play Interaction Observation Scale (PPIOS-Toddler).
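
A common way to implement the upper/lower-group discrimination check and the internal-consistency estimate described above is to split respondents on total score (often the top and bottom 27%), run an independent-samples t-test per item, and compute Cronbach's α. A sketch with simulated ratings; the cutoffs and data are assumptions, not the PPIOS-Toddler results:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
X = rng.integers(1, 6, size=(97, 22)).astype(float)   # 97 dyads x 22 items on a 5-point scale (simulated)

total = X.sum(axis=1)
cut_low, cut_high = np.quantile(total, [0.27, 0.73])
lower, upper = X[total <= cut_low], X[total >= cut_high]

# Item discrimination: independent-samples t-test between upper and lower groups, per item
sig = []
for j in range(X.shape[1]):
    t, p = stats.ttest_ind(upper[:, j], lower[:, j])
    sig.append(p < 0.05)
print(sum(sig), "of", X.shape[1], "items discriminate between groups")

# Cronbach's alpha: internal consistency across items
k = X.shape[1]
alpha = (k / (k - 1)) * (1 - X.var(axis=0, ddof=1).sum() / total.var(ddof=1))
print(round(alpha, 2))
```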

A study on the improvement of the test items in Korean scholastic ability test (English test) (대학수학능력시험(영어시험)의 문항개선에 대한 연구)

  • Jeon, Sung-Ae
    • English Language & Literature Teaching / v.18 no.2 / pp.189-211 / 2012
  • The purpose of the study was to explore ways to improve the test items on the Korean scholastic ability test. More specifically, the researchers investigated whether use of the target language in test items would make a difference in total scores, discriminatory power, and item difficulty. A total of 288 high school seniors participated in the study. The subjects were divided into the experimental group (N=145) and the control group (N=143). A 25-item test resembling the Korean scholastic ability test was administered to both groups. The experimental group was given items whose questions and alternatives were all presented in English, whereas the control group was given items whose questions and alternatives were presented in Korean only. Statistical analyses revealed that use of English vs. Korean in the questions and alternatives made a significant difference in total scores, item discrimination, and item difficulty level. The findings strongly suggest that use of English is one way to improve the quality of the Korean scholastic ability test by enhancing item discrimination and face validity. Considering that the test in question is a high-stakes exam in Korea, further research on how to improve the Korean scholastic ability test is urgently called for.
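
One classical index behind the kind of discrimination comparison described above is the upper-lower discrimination index D (proportion correct in the top-scoring group minus the bottom-scoring group), which can be computed separately for each test form. A sketch over simulated responses, not the study's data:

```python
import numpy as np

def discrimination_index(X, frac=0.27):
    """Classical D index per item: p(correct | upper group) - p(correct | lower group)."""
    total = X.sum(axis=1)
    order = np.argsort(total)
    n = int(round(frac * len(total)))
    lower, upper = X[order[:n]], X[order[-n:]]
    return upper.mean(axis=0) - lower.mean(axis=0)

rng = np.random.default_rng(2)
english_form = (rng.random((145, 25)) < 0.6).astype(int)   # simulated 25-item responses, English condition
korean_form = (rng.random((143, 25)) < 0.7).astype(int)    # simulated responses, Korean condition

print(discrimination_index(english_form).round(2))
print(discrimination_index(korean_form).round(2))
```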

Study on the herbology test items in Korean medicine education using Item Response Theory (문항반응이론을 활용한 한의학 교육에서 본초학 시험문항에 대한 연구)

  • Chae, Han; Han, Sang Yun; Yang, GiYoung; Kim, Hyungwoo
    • The Korea Journal of Herbology / v.37 no.2 / pp.13-21 / 2022
  • Objectives: The evaluation of academic achievement is pivotal for setting the direction and level of medical education. The purpose of this study was to introduce item analysis based on Item Response Theory (IRT) to multiple-choice herbology testing in traditional Korean medicine education, where it has not previously been used because of the difficulty of the underlying test theory and statistical calculation. Methods: The answers of 390 students (2012-2018) to a 14-item herbology test in a college of Korean medicine were used for the item analysis. Difficulty, discrimination, and guessing parameters, along with item-total correlations and percentages of correct answers, were calculated using Classical Test Theory (CTT) and IRT. Results: The validity indices of strong and weak items were illustrated from multiple perspectives. Four items had six acceptable index scores, while five items had only one acceptable index score. IRT item discrimination showed no significant correlation with the CTT difficulty and discrimination indices, which calls for attention from medical education professionals regarding test credibility. Conclusion: Based on the IRT item analysis, suggestions were made for the development, use, and revision of test items in the era of e-learning and evidence-based teaching. This study provides a first foundation for upgrading the quality of Korean medicine education using test theory.
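
For reference, the three IRT parameters named above (discrimination a, difficulty b, guessing c) define the three-parameter logistic item characteristic curve, and the reported lack of correspondence between CTT and IRT discrimination can be checked with a rank correlation. The parameter values and item statistics below are illustrative, not the herbology-test estimates:

```python
import numpy as np
from scipy import stats

def icc_3pl(theta, a, b, c):
    """Three-parameter logistic item characteristic curve: P(correct | theta)."""
    return c + (1 - c) / (1 + np.exp(-a * (theta - b)))

print(round(icc_3pl(theta=0.0, a=1.2, b=-0.5, c=0.2), 3))

# Rank agreement between CTT and IRT discrimination across 14 items (illustrative values)
rng = np.random.default_rng(3)
ctt_disc = rng.uniform(0.1, 0.6, size=14)   # e.g. item-total correlations
irt_disc = rng.uniform(0.3, 2.0, size=14)   # e.g. 3PL a-parameters
rho, p = stats.spearmanr(ctt_disc, irt_disc)
print(round(rho, 2), round(p, 3))
```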

Development of a Teacher Rating Scale of Childcare Adaptation for Infants and Toddlers (교사용 영아 어린이집 적응 척도 개발)

  • Shin, Nary; Yun, Hyun Jeong
    • Korean Journal of Child Studies / v.37 no.6 / pp.35-56 / 2016
  • Objective: This study aimed to develop and validate the Childcare Adaptation Scale for Infants and Toddlers (CASIT), which is rated by teachers of Korean children. Methods: The participants were 326 childcare teachers working with infants and toddlers (ages 0-2 years). Content validity, discriminant validity, convergent validity, concurrent validity, internal consistency, inter-rater reliability, and item discrimination were examined using PASW 18.0 and AMOS 19.0. Results: An exploratory factor analysis identified a 29-item scale with six dimensions: group life adaptation, negative behaviors, positive affect, regular routines, activity/interest, and peer interaction. Convergent validity was examined via confirmatory factor analysis, average variance extracted (AVE), and construct reliability, and acceptable evidence of convergent validity was established. The scale was highly consistent both internally and across raters. For item discrimination, the difference in mean scores between the upper and lower groups was significant for each item. Conclusion: The CASIT, a quick and convenient tool for teachers, is a valid and reliable instrument.
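
Two of the convergent-validity statistics named above, average variance extracted (AVE) and construct reliability, are conventionally computed from standardized CFA loadings. A sketch under that standard formulation, with hypothetical loadings rather than the CASIT estimates:

```python
import numpy as np

loadings = np.array([0.72, 0.68, 0.81, 0.75, 0.66])  # hypothetical standardized loadings for one factor
errors = 1 - loadings**2                              # error variances implied by standardized loadings

ave = np.sum(loadings**2) / len(loadings)                                # average variance extracted
construct_rel = loadings.sum()**2 / (loadings.sum()**2 + errors.sum())   # composite/construct reliability
print(round(ave, 2), round(construct_rel, 2))
```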

A Preliminary Study for Development of the Aphasia Screening Test (실어증 선별검사 도구개발을 위한 예비연구)

  • Kim, Hyang-Hee; Lee, Hyun-Joung; Kim, Deog-Yong; Heo, Ji-Hoe; Kim, Yong-Wook
    • Speech Sciences / v.13 no.2 / pp.7-18 / 2006
  • An aphasia screening test serves the main purpose of differentiating aphasic from non-aphasic patients quickly and efficiently. As a preliminary step toward a standardized aphasia screening test for Korean patients, we constructed a screening test consisting of items drawn from the Paradise Korean version of the Western Aphasia Battery (PK-WAB). All test items were analyzed to extract items with optimal item discrimination and adequate item difficulty indices. From the results, we selected items from each subtest that best separated the aphasic and normal control groups in a discriminant function analysis. This item-analysis information is expected to be useful in developing a Korean aphasia screening test.
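
The discriminant function analysis used above to separate aphasic from non-aphasic examinees can be sketched with scikit-learn's linear discriminant analysis; the item scores and group sizes below are simulated, not the PK-WAB data:

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(4)
# Simulated subtest item scores: the aphasic group scores lower on average
aphasic = rng.normal(loc=4.0, scale=2.0, size=(30, 10))
control = rng.normal(loc=8.0, scale=1.5, size=(30, 10))
X = np.vstack([aphasic, control])
y = np.array([1] * 30 + [0] * 30)   # 1 = aphasic, 0 = normal control

lda = LinearDiscriminantAnalysis()
lda.fit(X, y)
print(lda.score(X, y))              # in-sample classification accuracy
```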

Analysis of the difficulty and discrimination of paper-based tests and computer-based tests according to item response theory: focusing on the National Dental Technician Examination (문항반응이론에 따른 지필 시험과 컴퓨터적용 시험의 난이도와 변별도 분석: 치과기공사 국가시험을 중심으로)

  • Hwang, Kyung-Sook
    • Journal of Technologic Dentistry / v.44 no.3 / pp.104-110 / 2022
  • Purpose: This study analyzed the difficulty and discrimination of a paper-based test (PBT) and a computer-based test (CBT) according to item response theory, focusing on the National Dental Technician Examination. Methods: A mock test was conducted from September 15 to 23, 2020, and the final 179 examinees (1 of the 180 candidates was absent) were the subjects of this study. Frequency analysis and factor analysis were performed. The collected data were analyzed using IBM SPSS Statistics ver. 18.0 (IBM) and the jMetrik program. The significance level was set to 0.05. Results: Examinees responded more easily to the mock test items in the CBT format, and the CBT was predicted to measure examinee ability better than the PBT. Conclusion: In the mock test, the difficulty, discrimination, and reliability of the items were not affected by the examination method, confirming the feasibility of a future change of the National Dental Technician Examination to the CBT.
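
One way to probe whether administration mode changes item behavior, in the spirit of the comparison above, is to estimate a difficulty parameter for each item under both formats and test the paired differences across items. A sketch with simulated parameter estimates, not the examination data:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(5)
pbt_difficulty = rng.normal(0.0, 1.0, size=40)                   # simulated IRT b-parameters, paper-based
cbt_difficulty = pbt_difficulty + rng.normal(0.0, 0.2, size=40)  # same items, computer-based

t, p = stats.ttest_rel(pbt_difficulty, cbt_difficulty)   # paired t-test across items
r = np.corrcoef(pbt_difficulty, cbt_difficulty)[0, 1]    # parameter agreement across modes
print(round(t, 2), round(p, 3), round(r, 2))
```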

Test Analysis of the 「General Computer」 in College Scholastic Ability Test (대학수학능력시험 직업탐구영역의 「컴퓨터 일반」 교과 문항 분석)

  • Kim, JongHye; Kim, Yong; Kim, JaMee; Lee, WonGyu
    • The Journal of Korean Association of Computer Education / v.9 no.6 / pp.11-18 / 2006
  • The purpose of this paper is to identify problems in the "General Computer" questions of the Career Searching Section of the College Scholastic Ability Test in 2005 and 2006 and to offer suggestions for improvement. For the qualitative analysis, content validity was examined. For the quantitative analysis, item difficulty and item discrimination were estimated with Bayesian 1.0, based on a 2-parameter item response model, and item reliability and distractors were analyzed with Testan 1.0. Through this analysis, the paper seeks to improve item quality and estimate item difficulty, so that the "General Computer" questions can serve as materials for developing reliable and discriminating items.
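
The distractor analysis mentioned above typically tabulates how often each response option is chosen by high- and low-scoring examinees; an effective distractor attracts the low group more than the high group. A sketch over simulated multiple-choice responses, not the CSAT data:

```python
import numpy as np

rng = np.random.default_rng(6)
choices = rng.integers(1, 6, size=(500, 20))   # simulated answers to 20 five-option items
key = rng.integers(1, 6, size=20)              # simulated answer key
total = (choices == key).sum(axis=1)

upper = total >= np.quantile(total, 0.73)      # high-scoring examinees
lower = total <= np.quantile(total, 0.27)      # low-scoring examinees

item = 0   # distractor table for the first item
for option in range(1, 6):
    p_upper = (choices[upper, item] == option).mean()
    p_lower = (choices[lower, item] == option).mean()
    flag = "key" if option == key[item] else ""
    print(option, round(p_upper, 2), round(p_lower, 2), flag)
```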

The Development of Assessment Scales for Day Care Programs (어린이집 프로그램 평가척도의 개발을 위한 예비연구)

  • Rhee, Unhai; Song, Hye Rin; Shin, Hye Young; Choi, Hye Yeong
    • Korean Journal of Child Studies / v.23 no.4 / pp.199-213 / 2002
  • This preliminary study aimed to develop 3 assessment scales for self-evaluation by day care directors and teachers. The major evaluation areas, items, and evaluation criteria for each item were developed based on an analysis of related research and major evaluation instruments, and a panel of experts in early childhood education examined the contents. The 3 preliminary scales were administered in 87 day care centers; the data were analyzed for item response distribution, item discrimination, and scale reliability. Items with low item discrimination were deleted, and minor revisions were made to improve the psychometric characteristics of each scale. The final version of the 3 scales is valid for use as a self-evaluation instrument by day care directors and teachers.
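
The item-deletion step described above is commonly driven by corrected item-total correlations, with items falling below a cutoff (often around .30) flagged for removal. A sketch with simulated ratings and an assumed cutoff, not the actual scale data:

```python
import numpy as np

rng = np.random.default_rng(7)
X = rng.integers(1, 6, size=(87, 30)).astype(float)   # 87 centers x 30 items (simulated)

total = X.sum(axis=1)
corrected_r = np.array([
    np.corrcoef(X[:, j], total - X[:, j])[0, 1] for j in range(X.shape[1])
])

keep = corrected_r >= 0.30        # retain items with adequate discrimination
X_revised = X[:, keep]
print(keep.sum(), "items retained of", X.shape[1])
```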
